C# Socket: time elapsed while data travels over the network
I'd like to know if there is a way to measure the time a piece of data spends travelling over the network.
For example, I send a packet from computer A to computers B and C (so the elapsed time may differ for each, depending on distance, etc.), and I want to know the time between sending and receiving for each client (in order to synchronize the data precisely).
Also, it is important to know that my client MUST work in asynchronous mode (that's not a problem).
Does anybody know how to do this?
KiTe.
Comments (2)
Corvil is a well-known piece of software specifically aimed at latency analysis.
For your analysis, several different software and hardware layers are involved, so it is very complex to implement.
When it comes to synchronizing, it is more important to have a trustworthy key such as a sequence number. Since you are using TCP, a lost packet is a big problem, because it triggers a re-queue (retransmission) of several packets.
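As a rough illustration of the sequence-number idea (not part of the original answer), a minimal C# framing helper might look like the sketch below; the class and method names are made up for the example:

```csharp
using System;
using System.Buffers.Binary;

// Minimal sketch: prefix each payload with a 4-byte sequence number so the
// receiver can order messages and detect gaps, instead of relying on clocks.
static class SequencedFrame
{
    private static uint _nextSequence;

    // Prepend the next sequence number (big-endian) to the payload.
    public static byte[] Wrap(byte[] payload)
    {
        var framed = new byte[4 + payload.Length];
        BinaryPrimitives.WriteUInt32BigEndian(framed, _nextSequence++);
        payload.CopyTo(framed, 4);
        return framed;
    }

    // Split a received frame back into its sequence number and payload.
    public static (uint Sequence, byte[] Payload) Unwrap(byte[] framed)
    {
        uint sequence = BinaryPrimitives.ReadUInt32BigEndian(framed);
        var payload = new byte[framed.Length - 4];
        Array.Copy(framed, 4, payload, 0, payload.Length);
        return (sequence, payload);
    }
}
```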
Unless all your nodes have synchronized clocks, this is nearly impossible to do. If you do have an accurate sync mechanism in place and can trust that the clocks agree, then you could insert a timestamp into the packet when you send it from A and compare it to the current time when it arrives at C.
But again, you need high-resolution time synchronization for this approach to work.
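A minimal sketch of that timestamp approach, assuming A and C share a tightly synchronized clock (e.g. via NTP/PTP); the helper names are illustrative only:

```csharp
using System;
using System.Buffers.Binary;

// Sender (A) embeds its UTC send time; receiver (C) compares against its own
// clock. The result is only meaningful if both clocks are synchronized.
static class TimestampedPacket
{
    // Sender: prefix the payload with the current UTC time in ticks (8 bytes).
    public static byte[] Build(byte[] payload)
    {
        var packet = new byte[8 + payload.Length];
        BinaryPrimitives.WriteInt64BigEndian(packet, DateTime.UtcNow.Ticks);
        payload.CopyTo(packet, 8);
        return packet;
    }

    // Receiver: read the embedded ticks and compare with the local clock.
    public static TimeSpan EstimateOneWayDelay(byte[] packet)
    {
        long sentTicks = BinaryPrimitives.ReadInt64BigEndian(packet);
        return TimeSpan.FromTicks(DateTime.UtcNow.Ticks - sentTicks);
    }
}
```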
If you just want to benchmark and get an idea of an average time, you could make the packet bounce back. Basically, tell C to send the same packet back to B and then to A, and in A compare the original timestamp with the current time (which uses the same clock). This gives you a round-trip latency, which you can divide by two to get an approximate one-way latency.
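A rough sketch of that bounce-back measurement, assuming the remote endpoint simply echoes the byte it receives (the echo behaviour is an assumption, not something from the thread); it times the round trip asynchronously with Stopwatch:

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;
using System.Threading.Tasks;

static class LatencyProbe
{
    // Connects, sends a 1-byte probe, waits for the echo, and returns the
    // measured round-trip time. Divide by two for an approximate one-way delay.
    public static async Task<TimeSpan> MeasureRoundTripAsync(string host, int port)
    {
        using var client = new TcpClient();
        await client.ConnectAsync(host, port);
        var stream = client.GetStream();

        var ping = new byte[] { 0x01 }; // arbitrary 1-byte probe
        var echo = new byte[1];

        var stopwatch = Stopwatch.StartNew();
        await stream.WriteAsync(ping, 0, ping.Length);
        await stream.ReadAsync(echo, 0, echo.Length); // wait for the echoed byte
        stopwatch.Stop();

        return stopwatch.Elapsed;
    }
}
```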
If you are worried about the overhead added by sending messages back, then you could do one (or both) of the following: