What is the right way to measure server/client latency (TCP & UDP)?
I am sending and receiving packets between two boards (a Jetson and a Pi). I tried TCP and then UDP; theoretically UDP is faster, but I want to verify this with numbers. I want to be able to run my scripts, send and receive my packets, and calculate the latency at the same time. Later I will study the effect on latency of using RF modules instead of a direct cable between the two boards (this is another reason why I want the numbers).
What is the right way to tackle this?
I tried sending timestamps to get the difference, but the clocks of the two boards are not synchronized. I read about NTP and iperf, but I am not sure how they can be run within my scripts. iperf measures the traffic, but how can that be accurate if your real TCP or UDP application is not running and exchanging real packets?
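One practical way around unsynchronized clocks is to measure everything with a single clock: have the remote board echo each packet back and time the round trip on the sender. Below is a minimal Python sketch of that idea for UDP; the address 192.168.1.20 and port 5005 are placeholders for your own setup, and the one-way figure it prints assumes the two directions are symmetric.

```python
# Minimal UDP echo / round-trip-time sketch (addresses and port are assumptions).
# Run udp_echo_server() on one board and udp_rtt_client() on the other; the client
# timestamps each packet locally, so the two boards' clocks never need to agree.
import socket
import time

SERVER_ADDR = ("192.168.1.20", 5005)  # hypothetical address of the echoing board

def udp_echo_server():
    """Echo every datagram straight back to its sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", SERVER_ADDR[1]))
    while True:
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

def udp_rtt_client(count=100, payload=b"x" * 64):
    """Send `count` datagrams, wait for each echo, and report RTT statistics."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(count):
        t0 = time.perf_counter()          # local monotonic clock only
        sock.sendto(payload, SERVER_ADDR)
        try:
            sock.recvfrom(2048)
        except socket.timeout:
            continue                      # lost datagram: UDP does not retransmit
        rtts.append(time.perf_counter() - t0)
    if rtts:
        avg = sum(rtts) / len(rtts)
        print(f"packets echoed: {len(rtts)}/{count}")
        print(f"avg RTT: {avg * 1e3:.3f} ms "
              f"(~{avg / 2 * 1e3:.3f} ms one-way if symmetric)")

if __name__ == "__main__":
    import sys
    udp_echo_server() if "server" in sys.argv else udp_rtt_client()
```

The same loop works for your RF-module experiment later, since only the link between the boards changes, not the measurement.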
2 Answers
It is provably impossible to measure the latency with 100% accuracy, since there is no global clock. NTP estimates it by presuming the upstream and downstream delays are equal (but in practice the upstream buffer delay/jitter is often greater).
UDP is only "faster" because it does not use ACKs and has lower overhead; this "faster" is not latency. Data-communication "speed" is a combination of latency, bandwidth, serialization delay (the time to "clock out" the data), buffer delay, packet overhead, and sometimes processing delay and/or protocol overhead.
While getting one-way latency can be rather difficult and depends on very well synchronized clocks, you could make the simplifying assumption that the latency in one direction is the same as in the other (and no, that isn't always the case), measure the round-trip time, and divide by two. Ping would be one way to do that; netperf with a TCP_RR test would be another.
Depending on the network/link speed, the packet size, and the CPU "horsepower," much if not most of the latency is in the packet-processing overhead on either side. You can get an idea of that from the service-demand figures netperf will report if you have it include CPU utilization. (N.B. netperf assumes it is the only thing meaningfully consuming CPU on either end at the time of the test.)
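If you would rather keep the measurement inside your own scripts than shell out to ping or netperf, a request/response loop in the spirit of netperf's TCP_RR test is easy to sketch in Python. The host and port below are placeholders; TCP_NODELAY is set so that Nagle's algorithm does not coalesce the small requests and inflate the numbers.

```python
# Rough TCP request/response (TCP_RR-style) RTT sketch; peer address is an assumption.
# Run tcp_echo_server() on one board and tcp_rr_client() on the other.
import socket
import statistics
import time

PEER = ("192.168.1.20", 5006)  # hypothetical address/port of the other board

def tcp_echo_server():
    """Accept one connection and echo everything received on it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PEER[1]))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    while True:
        data = conn.recv(2048)
        if not data:
            break
        conn.sendall(data)

def tcp_rr_client(count=100, payload=b"x" * 64):
    """Time `count` request/response exchanges over one TCP connection."""
    sock = socket.create_connection(PEER)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    rtts = []
    for _ in range(count):
        t0 = time.perf_counter()
        sock.sendall(payload)
        sock.recv(2048)                   # wait for the echoed response
        rtts.append(time.perf_counter() - t0)
    med = statistics.median(rtts)
    print(f"median RTT {med * 1e3:.3f} ms, "
          f"~{med / 2 * 1e3:.3f} ms one-way if symmetric")

if __name__ == "__main__":
    import sys
    tcp_echo_server() if "server" in sys.argv else tcp_rr_client()
```

For comparison against a known tool, the corresponding netperf run is along the lines of netperf -H <host> -t TCP_RR, adding -c -C if you want the local/remote CPU-utilization and service-demand figures mentioned above.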