Computing the network distance between two hosts
I want to compute some metrics related to the "distance" between two hosts in a network application. Inspired by ping, I came up with the following naïve solution:
- Send UDP packets of varying size.
- Wait for a response from the other node.
- Compute the time between send and receive.
- Normalize this data and compute my metrics over it.
I'd like to avoid managing raw sockets, but if that's the better option, please tell me.
Would you recommend another solution?
EDIT:
I think I was not clear on this. I know what TTL and traceroute are, and that's not what I am searching for.
What I am looking for is a better metric that combines latency, bandwidth and, yes, the traditional distance between hosts (because I think traceroute alone is not that useful for managing a protocol). That's the motivation for using ping-like measures.
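The steps above could be sketched like this in Python (the peer is assumed to run a UDP echo service on a known port - that part is up to the application, not any standard):

```python
import socket
import time

def measure_rtt(host, port, size, timeout=2.0):
    """Send one UDP packet of `size` bytes and time the echoed reply.

    Returns the round-trip time in seconds, or None if the reply
    times out (i.e. the packet was lost).
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        payload = b"\x00" * size          # padding to the probe size
        start = time.monotonic()           # monotonic clock for intervals
        sock.sendto(payload, (host, port))
        try:
            sock.recvfrom(65535)           # wait for the echo
        except socket.timeout:
            return None
        return time.monotonic() - start
```

Running this for a range of sizes and normalizing the resulting times gives the raw data for the metric.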
5 Answers
The question becomes: can you modify the existing protocol, or be more industrious and capture RTT details from the existing request-reply messages?
If you modify the existing protocol, say by adding a transmission timestamp, you can perform additional analytics server-side. You might still be able to infer times if there is a request-reply from the server to the client.
The main idea is that adding extra messages explicitly for path-latency measurement is often highly redundant and only serves to increase network chatter and complexity.
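As a sketch of the timestamp idea (the 8-byte header prepended to each message is an illustrative layout, not part of any existing protocol):

```python
import struct
import time

def stamp_request(payload):
    """Prepend a monotonic send timestamp (8-byte big-endian double)
    to an outgoing message."""
    return struct.pack("!d", time.monotonic()) + payload

def rtt_from_reply(reply):
    """Recover the RTT from a reply, assuming the peer echoes our
    timestamp header back unchanged. Returns (rtt_seconds, body)."""
    (sent,) = struct.unpack("!d", reply[:8])
    return time.monotonic() - sent, reply[8:]
```

The same header piggybacks on traffic the application already sends, so no extra probe messages are needed.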
The definition of the metric you are looking for depends on its purpose - you can construct it in many ways, and which way is best always depends on that purpose.
In general, you are looking for some function
distance(A, B)
which would be a function of the bandwidth and latency between A and B. The shape of that function depends on the purpose, on the application - on what you really need to optimize. The simplest would be a linear combination of the two, and again, the coefficients alpha and beta would depend on what you are trying to optimize.
If you have measured some variable that captures your system's performance, you can do statistical analysis (regression) to find the optimal parameters.
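The regression step could be sketched as follows, assuming a linear model of the form distance = alpha * latency + beta / bandwidth (the 1/bandwidth term is an assumption - higher bandwidth should shrink the distance) and solving the 2x2 normal equations directly:

```python
def fit_distance_coefficients(samples):
    """Fit cost ~ alpha * latency + beta / bandwidth by least squares.

    samples: iterable of (latency, bandwidth, observed_cost) tuples,
    where observed_cost is whatever performance variable was measured.
    Returns (alpha, beta).
    """
    sxx = sxy = syy = sxz = syz = 0.0
    for lat, bw, cost in samples:
        x, y = lat, 1.0 / bw              # the two regressors
        sxx += x * x
        sxy += x * y
        syy += y * y
        sxz += x * cost
        syz += y * cost
    det = sxx * syy - sxy * sxy           # normal-equation determinant
    alpha = (sxz * syy - syz * sxy) / det
    beta = (syz * sxx - sxz * sxy) / det
    return alpha, beta
```

With enough (latency, bandwidth, performance) samples this recovers the coefficients that best explain the observed performance.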
Also be careful when you speak about metrics. Every metric must satisfy the triangle inequality:
distance(A, C) <= distance(A, B) + distance(B, C)
which is not always true in computer networks, since it depends on routing decisions.
IMHO it highly depends on the specific details of your application.
There could be many other metrics, including redundant connections between hosts when reliability is the priority. So it highly depends on the application.
In networks, "distance" is usually measured in terms of hops. Time does not really represent distance accurately because it is prone to short-term congestion and other network issues. Take a look at traceroute to see how to measure distance in hops by sending packets with increasing TTLs.
Edit: Now that your question has additional details - latency and bandwidth can never be meaningfully combined into a single generic metric. You may want to devise a weighting depending on what your application prefers (latency vs. bandwidth).
It seems to me that a smoothed RTT is going to serve you better: something like what TCP maintains, a long-term average of RTTs with a smoothing factor to account for anomalies. There is no one good way of doing this, so you may want to search for "RTT smoothing" and experiment with a few approaches.
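A sketch of TCP-style RTT smoothing: an exponentially weighted moving average, where the gain of 1/8 mirrors the SRTT update in RFC 6298 (any smoothing factor can be tuned for the application):

```python
class SmoothedRTT:
    """Exponentially weighted moving average of RTT samples,
    in the spirit of TCP's SRTT."""

    def __init__(self, alpha=0.125):
        self.alpha = alpha   # smoothing gain; 1/8 is TCP's choice
        self.srtt = None     # no estimate until the first sample

    def update(self, sample):
        if self.srtt is None:
            self.srtt = sample                       # first sample seeds the average
        else:
            self.srtt += self.alpha * (sample - self.srtt)
        return self.srtt
```

Each new sample nudges the estimate by only a fraction of the difference, so a single congested (or anomalously fast) round trip cannot swing the metric.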
I think what you want is to use the packet's time-to-live field.
In a nutshell, you can send successive IP packets, decrementing the time-to-live for each one you send. Once you stop getting a response back, you know roughly how many hops must exist between the source and destination hosts.
If you don't want to work with sockets yourself, you can simply use the ping command, which provides an option that lets you specify the time-to-live value for the ping packets.