How do I measure the response time between a server and a client communicating over UDP?
The aim of the test is to check the shape of the network response time between two hosts (a client and a server). Network response = the round-trip time it takes to send a packet of data and receive it back. I am using the UDP protocol. How could I compute the response time? I could just subtract TimeOfClientRequest - TimeOfClientResponseReceived, but I'm not sure if this is the best approach. I can't do this only from inside the code, and I'm thinking that the OS and computer load might interfere with the measuring process initiated by the client. By the way, I'm using Java.
I'd like to hear your ideas.
Just use ping - RTT (round-trip time) is one of the standard things it measures. If the size of the packets you're sending matters, ping also lets you specify the size of the data in each packet.
For example, I just sent 10 packets, each with a 1024-byte payload, to my gateway, displaying only the summary statistics:
ping -c 10 -s 1024 -q 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 1024(1052) bytes of data.
--- 192.168.2.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9004ms
rtt min/avg/max/mdev = 2.566/4.921/8.411/2.035 ms
The last line, starting with rtt (round-trip time), is the info you're probably looking for.
I think the method you mention is fine. OS and computer load might interfere, but their effect would probably be negligible compared to the amount of time it takes to send the packets over the network.
To even things out a bit, you could always send several packets back and forth and average the times out.
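To sketch the averaging idea, the client below sends several UDP packets and averages the measured round trips. The in-process loopback responder exists only to make the example self-contained and is an assumption of this sketch; in a real test the responder would run on the remote host and the client would target its address and port.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class AverageRtt {
    public static void main(String[] args) throws Exception {
        // In-process echo responder on an ephemeral port, so the example
        // is self-contained; in a real test this runs on the remote host.
        DatagramSocket server = new DatagramSocket(0);
        Thread responder = new Thread(() -> {
            byte[] buf = new byte[1024];
            try {
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    server.receive(p);
                    server.send(p); // echo straight back to the sender
                }
            } catch (Exception e) {
                // socket closed - responder exits
            }
        });
        responder.setDaemon(true);
        responder.start();

        int rounds = 10;
        long totalNanos = 0;
        try (DatagramSocket client = new DatagramSocket()) {
            client.setSoTimeout(2000);
            byte[] payload = new byte[64];
            InetAddress addr = InetAddress.getLoopbackAddress();
            int port = server.getLocalPort();
            for (int i = 0; i < rounds; i++) {
                DatagramPacket out = new DatagramPacket(payload, payload.length, addr, port);
                DatagramPacket in = new DatagramPacket(new byte[1024], 1024);
                long start = System.nanoTime();
                client.send(out);
                client.receive(in);          // blocks until the echo comes back
                totalNanos += System.nanoTime() - start;
            }
        }
        server.close();
        System.out.printf("average RTT over %d packets: %.3f ms%n",
                rounds, totalNanos / rounds / 1_000_000.0);
    }
}
```

Averaging over several round trips also smooths out scheduling jitter on the client, which addresses the OS-load concern from the question.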
If you have access to the code, then yes, just measure the time between when the request was sent and when the answer was received. Bear in mind that the standard timer in Java only has millisecond resolution.
Alternatively, use Wireshark to capture the packets on the wire - that software also records the timestamps against packets.
Clearly in both cases the measured time depends on how fast the other end responds to your original request.
If you really just want to measure network latency and control the far end yourself, use something like the
echo 7/udp
service that many UNIX servers still support (though it's usually disabled these days to prevent its use in reflected DDoS attacks).
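One way around the millisecond-resolution timer mentioned above is System.nanoTime() (available since Java 5), which is intended precisely for measuring elapsed intervals. A minimal sketch, with a Thread.sleep standing in for the send/receive pair:

```java
public class NanoTimer {
    public static void main(String[] args) throws Exception {
        // System.nanoTime() offers sub-millisecond interval resolution,
        // unlike System.currentTimeMillis(); wrap the send/receive pair
        // with it when timing a round trip.
        long start = System.nanoTime();
        Thread.sleep(5); // stand-in for socket.send() + socket.receive()
        long elapsedNanos = System.nanoTime() - start;
        System.out.printf("elapsed: %.3f ms%n", elapsedNanos / 1_000_000.0);
    }
}
```

Note that System.nanoTime() is only meaningful for differences between two calls in the same JVM, not as a wall-clock timestamp.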
It would be nice if you could send ICMP packets - since they are answered directly by the network layer, the reply would lose no time in user mode on the server.
Sending ICMP packets from Java, however, does not seem to be possible. You could:
this will send an ICMP packet, but that is not what you want.
However, if you start the responder daemon on the server side at a higher priority, you will reduce the effect of server load.
Actually, server load does not play much of a role as long as CPU utilization stays below 100%.
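A responder daemon along these lines could look like the sketch below. The port number is a hypothetical choice, and note that Thread.setPriority only hints at scheduling within the JVM; how it maps to OS-level priority is platform-dependent, so this reduces, rather than eliminates, the effect of server load.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class EchoResponder {
    // Starts a UDP echo loop on the given socket in a high-priority
    // thread, as suggested above, to keep reply latency low under load.
    public static Thread start(DatagramSocket socket) {
        Thread t = new Thread(() -> {
            byte[] buf = new byte[1500];
            try {
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    socket.receive(p);
                    socket.send(p); // echo straight back to the sender
                }
            } catch (Exception e) {
                // socket closed - exit the loop
            }
        });
        t.setPriority(Thread.MAX_PRIORITY); // scheduling hint only
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(9876); // hypothetical port
        System.out.println("echo responder listening on " + socket.getLocalPort());
        start(socket).join();
    }
}
```

Run this on the server host and point the measuring client at its address and port.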
Use ping first, but you can also measure the RTT yourself by sending a packet and having the other end send it back.
It is important to measure while the boxes are under typical load, because that tells you the RTT you can expect to get in practice.
You can average the latencies over many packets (millions or even billions) to get a consistent value.
As a few other answers have mentioned, ICMP ping is a good way to measure the RTT.
I'd like to offer another way to measure the RTT over UDP, provided you control both the server and the client side. The basic flow is as follows:
1. The client records a timestamp C1 and sends a request packet to the server.
2. The server records a timestamp S1 when the request arrives.
3. The server records a timestamp S2 and sends the reply, including S1 and S2, back to the client.
4. The client records a timestamp C2 when the reply arrives.
After that, we can calculate RTT = (C2 - C1) - (S2 - S1), which excludes the server's processing time.
It may not be as accurate as ICMP ping and needs extra control on both the client and server side, but it is manageable.
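The scheme above can be sketched as follows. The in-process "server" thread is only there to make the example self-contained (a real deployment would run it on the remote host), and the 20 ms sleep is a stand-in for server-side processing. Since only the difference S2 - S1 is used, the two clocks never need to be synchronized.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class CorrectedRtt {
    public static void main(String[] args) throws Exception {
        // "Server": records S1 on receive and S2 just before sending,
        // and returns both timestamps in the reply payload.
        DatagramSocket server = new DatagramSocket(0);
        Thread t = new Thread(() -> {
            try {
                DatagramPacket req = new DatagramPacket(new byte[64], 64);
                server.receive(req);
                long s1 = System.nanoTime();
                Thread.sleep(20); // simulated server-side processing
                long s2 = System.nanoTime();
                byte[] reply = ByteBuffer.allocate(16).putLong(s1).putLong(s2).array();
                server.send(new DatagramPacket(reply, reply.length,
                        req.getAddress(), req.getPort()));
            } catch (Exception e) {
                // ignore - sketch only
            }
        });
        t.start();

        try (DatagramSocket client = new DatagramSocket()) {
            client.setSoTimeout(2000);
            byte[] ping = new byte[8];
            long c1 = System.nanoTime();                      // C1: request sent
            client.send(new DatagramPacket(ping, ping.length,
                    InetAddress.getLoopbackAddress(), server.getLocalPort()));
            DatagramPacket resp = new DatagramPacket(new byte[16], 16);
            client.receive(resp);
            long c2 = System.nanoTime();                      // C2: reply received
            ByteBuffer bb = ByteBuffer.wrap(resp.getData());
            long s1 = bb.getLong(), s2 = bb.getLong();
            long rttNanos = (c2 - c1) - (s2 - s1);            // RTT = (C2-C1) - (S2-S1)
            System.out.printf("network RTT: %.3f ms%n", rttNanos / 1_000_000.0);
        }
        server.close();
        t.join();
    }
}
```

Without the (S2 - S1) correction, the measured time would include the 20 ms of simulated processing; with it, only the network portion of the round trip remains.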
While ping is a good start for measuring latency, it uses the ICMP protocol instead of UDP. Packets of different protocols often have different priorities on routers and other network equipment.
You could use netperf to measure the UDP roundtrip time:
http://www.netperf.org/netperf/training/Netperf.html#0.2.2Z141Z1.SUJSTF.9R2DBD.T