How to measure network throughput at runtime
I'm wondering how best to measure network throughput at runtime. I'm writing a client/server application (both in Java). The server regularly sends messages (of compressed media data) over a socket to the client. I would like to adjust the compression level used by the server to match the network quality.
So I would like to measure the time a big chunk of data (say 500 kB) takes to completely reach the client, including all delays in between. Tools like Iperf don't seem to be an option because they do their measurements by creating their own traffic.
The best idea I could come up with is: somehow determine the clock offset between client and server, include a server-side send timestamp with each message, and have the client report back to the server the difference between that timestamp and the time the client received the message. The server can then determine how long a message took to reach the client.
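For illustration, here is an untested sketch of what I mean; the message framing and `clockOffsetMs` (client clock minus server clock, which would have to come from some separate synchronization step) are my own assumptions, not an existing protocol:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class ThroughputProbe {
    // Server side: prefix each payload with the send time.
    static void sendStamped(DataOutputStream out, byte[] payload) throws IOException {
        out.writeLong(System.currentTimeMillis()); // server send timestamp
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Client side: read one message and report the one-way transfer time
    // back to the server. clockOffsetMs = client clock minus server clock.
    static void receiveStamped(DataInputStream in, DataOutputStream reportOut,
                               long clockOffsetMs) throws IOException {
        long serverSendMs = in.readLong();
        int len = in.readInt();
        byte[] payload = new byte[len];
        in.readFully(payload);
        long clientReceiveMs = System.currentTimeMillis();
        // Correct for the clock difference before reporting back; the server
        // can then divide the payload size by this to get throughput.
        long transferMs = clientReceiveMs - clockOffsetMs - serverSendMs;
        reportOut.writeLong(transferMs);
        reportOut.flush();
    }
}
```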
Is there an easier way to do this? Are there any libraries for this?
Comments (1)
A simple solution:
Save a timestamp on the server before you send a defined number of packets.
Then send the packets to the client and have the client report back to the server when it has received the last packet.
Save a new timestamp on the server when the client has answered.
All you need to do now is determine the RTT and subtract RTT/2 from the difference between the two timestamps.
This should get you a fairly accurate measurement.
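A rough sketch of this in Java; the one-byte ack and the separately obtained `rttMs` are assumptions I'm making for illustration, not a fixed protocol:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class ThroughputEstimate {
    // Sends all packets, waits for the client's one-byte ack after the last
    // one, and corrects for half the round-trip time. rttMs would be measured
    // beforehand, e.g. with a small ping/pong exchange on the same socket.
    static double measureKBps(DataOutputStream out, DataInputStream in,
                              byte[][] packets, long rttMs) throws IOException {
        long totalBytes = 0;
        long t0 = System.nanoTime();              // timestamp before sending
        for (byte[] packet : packets) {
            out.writeInt(packet.length);
            out.write(packet);
            totalBytes += packet.length;
        }
        out.flush();
        in.readByte();                            // client acks the last packet
        long t1 = System.nanoTime();              // timestamp after the ack
        double elapsedMs = (t1 - t0) / 1_000_000.0 - rttMs / 2.0;
        return totalBytes / elapsedMs;            // bytes per ms ~= kB per second
    }
}
```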