What is causing this TCP/UDP latency?
HELP PLEASE! I have an application that needs as close to real-time processing as possible and I keep running into this unusual delay issue with both TCP and UDP. The delay occurs like clockwork and it is always the same length of time (mostly 15 to 16 ms). It occurs when transmitting to any machine (even local) and on any network (we have two).

A quick rundown of the problem:
I am always using winsock in C++, compiled in VS 2008 Pro, but I have written several programs to send and receive in various ways using both TCP and UDP. I always use an intermediate program (running locally or remotely) written in various languages (MATLAB, C#, C++) to forward the information from one program to the other. Both winsock programs run on the same machine so they display timestamps for Tx and Rx from the same clock. I keep seeing a pattern emerge where a burst of packets will get transmitted and then there is a delay of around 15 to 16 milliseconds before the next burst despite no delay being programmed in. Sometimes it may be 15 to 16 ms between each packet instead of a burst of packets. Other times (rarely) I will have a different length delay, such as ~ 47 ms. I always seem to receive the packets back within a millisecond of them being transmitted though with the same pattern of delay being exhibited between the transmitted bursts.
I have a suspicion that winsock or the NIC is buffering packets before each transmit but I haven't found any proof. I have a Gigabit connection to one network that gets various levels of traffic, but I also experience the same thing when running the intermediate program on a cluster that has a private network with no traffic (from users at least) and a 2 Gigabit connection. I will even experience this delay when running the intermediate program locally with the sending and receiving programs.
5 Answers
I figured out the problem this morning while rewriting the server in Java. The resolution of my Windows system clock is between 15 and 16 milliseconds. That means that every packet that shows the same millisecond as its transmit time is actually being sent at different milliseconds in a 16 millisecond interval, but my timestamps only increment every 15 to 16 milliseconds so they appear the same.
I came here to answer my question and I saw the response about raising the priority of my program. So I started all three programs, went into task manager, raised all three to "real time" priority (which no other process was at) and ran them. I got the same 15 to 16 millisecond intervals.
Thanks for the responses though.
There is always buffering involved and it varies between hardware/drivers/os etc. The packet schedulers also play a big role.
If you want "hard real-time" guarantees, you probably should stay away from Windows...
What you're probably seeing is a scheduler delay - your application is waiting for other process(s) to finish their timeslice and give up the CPU. Standard timeslices on multiprocessor Windows are from 15ms to 180ms.
You could try raising the priority of your application/thread.
Oh yeah, I know what you mean. Windows and its buffers... try adjusting the values of SO_SNDBUF on the sender and SO_RCVBUF on the receiver side. Also, check the networking hardware involved (routers, switches, media gateways) - eliminate as many hops as possible between the machines to avoid latency.
I ran into the same problem. But in my case, I was using GetTickCount() to get the current system time, and unfortunately it always has a resolution of 15-16 ms. When I used QueryPerformanceCounter instead of GetTickCount(), everything was fine. In fact, the TCP socket receives data evenly, not in one batch every 15 ms.