Path MTU discovery - where are the ICMP responses?

Posted 2024-11-25 11:57:26

I'm doing some experiments with path MTU discovery in Linux. As far as I understand from RFC 1191, if a router receives a packet with the DF bit set and the packet can't be forwarded to the next hop without fragmentation, the router should drop the packet and send an ICMP message back to the original sender.

I've created several VMs on my computer and linked them in the following manner:

VM1 (192.168.100.2)

R1  (192.168.100.1, 
     192.168.150.1)

R2  (192.168.150.2, 
     192.168.200.1)

VM2 (192.168.200.2)

R1 and R2 are Linux virtual machines; each has two network interfaces and static routes. Pinging VM2 from VM1, and vice versa, succeeds.

traceroute from 192.168.100.2 to 192.168.200.2 (192.168.200.2)
 1  192.168.100.1 (192.168.100.1)  0.437 ms  0.310 ms  0.312 ms
 2  192.168.150.2 (192.168.150.2)  2.351 ms  2.156 ms  1.989 ms
 3  192.168.200.2 (192.168.200.2)  43.649 ms  43.418 ms  43.244 ms

tracepath 192.168.200.2
 1:  ubuntu-VirtualBox.local                               0.211ms pmtu 1500
 1:  192.168.100.1                                         0.543ms 
 1:  192.168.100.1                                         0.546ms 
 2:  192.168.150.2                                         0.971ms 
 3:  192.168.150.2                                         1.143ms pmtu 750
 3:  192.168.200.2                                         1.059ms reached

Segments 100.x and 150.x have an MTU of 1500; segment 200.x has an MTU of 750.

I'm trying to send UDP packets with DF enabled. In fact, VM1 doesn't send the packet at all when its size is greater than 750 bytes (the send() call fails with EMSGSIZE).

However, I expected that behavior only for packets larger than 1500 bytes. For packets between 750 and 1500 bytes, I expected VM1 to send them to R1, and R1 (or R2) to drop them and return an ICMP packet to VM1. But this doesn't happen.
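For reference, a minimal sketch of the kind of sender described here, assuming DF is enabled per socket via Linux's IP_MTU_DISCOVER option with IP_PMTUDISC_DO (the port and payload size are placeholders):

/* Sketch: UDP sender with DF set via IP_PMTUDISC_DO (placeholder port
 * and payload size; destination taken from the topology above). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    /* "DF enabled": never fragment locally; fail with EMSGSIZE instead
     * when the datagram exceeds the kernel's path-MTU estimate. */
    int val = IP_PMTUDISC_DO;
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));

    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);              /* placeholder port */
    inet_pton(AF_INET, "192.168.200.2", &dst.sin_addr);

    char payload[1200] = { 0 };              /* between 750 and 1500 bytes */
    if (sendto(fd, payload, sizeof(payload), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        /* EMSGSIZE whenever the cached PMTU for the destination < 1200 */
        printf("sendto: %s\n", strerror(errno));

    close(fd);
    return 0;
}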

There are two questions:

1) Why?

2) Is it possible to set up my virtual network so that VM1 receives the ICMP packets described in RFC 1191?

Thanks.

櫻之舞 2024-12-02 11:57:26

It's possible that VM1 has cached PMTU information. By default, these cache entries time out after 10 minutes. You can change the timeout by writing a value (in seconds) to /proc/sys/net/ipv4/route/mtu_expires.

For your experiment, try flushing the cache (deleting the PMTU entries) before sending out 1500-byte packets:

echo "0" > /proc/sys/net/ipv4/route/flush 

You'll receive an ICMP fragmentation-needed message, which will again populate the PMTU entry for this destination! So you'll need to keep flushing this cache before retrying the experiment.
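That fragmentation-needed message can also be observed directly on the sending socket via Linux's IP_RECVERR / MSG_ERRQUEUE mechanism. A sketch, assuming fd is the UDP socket used in the experiment:

/* Sketch: read the queued ICMP "fragmentation needed" error (and the
 * next-hop MTU it reports) from the socket's error queue. */
#include <linux/errqueue.h>
#include <netinet/in.h>
#include <netinet/ip_icmp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Enable once, right after creating the socket (before any send). */
static void enable_icmp_errors(int fd)
{
    int on = 1;
    setsockopt(fd, IPPROTO_IP, IP_RECVERR, &on, sizeof(on));
}

/* After sendto() fails, drain the error queue. */
static void drain_icmp_errors(int fd)
{
    char buf[1500], ctrl[512];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };

    while (recvmsg(fd, &msg, MSG_ERRQUEUE | MSG_DONTWAIT) >= 0) {
        for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm != NULL;
             cm = CMSG_NXTHDR(&msg, cm)) {
            if (cm->cmsg_level != IPPROTO_IP || cm->cmsg_type != IP_RECVERR)
                continue;
            struct sock_extended_err *ee =
                (struct sock_extended_err *)CMSG_DATA(cm);
            if (ee->ee_origin == SO_EE_ORIGIN_ICMP &&
                ee->ee_type == ICMP_DEST_UNREACH &&
                ee->ee_code == ICMP_FRAG_NEEDED)
                /* ee_info carries the MTU reported by the router */
                printf("ICMP frag-needed, next-hop MTU %u\n", ee->ee_info);
        }
    }
}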

甩你一脸翔 2024-12-02 11:57:26

I encountered the same problem when using ping6, because /proc/sys/net/ipv6/conf/default/mtu is 1280, and /proc/sys/net/ipv4/route/min_pmtu is 552 by default.

So you can modify those values. Alternatively, you can use ping's -M option; then you won't receive the EMSGSIZE error:

-M pmtudisc_opt
       Select Path MTU Discovery strategy. pmtudisc_opt may be
       either do (prohibit fragmentation, even local one), want
       (do PMTU discovery, fragment locally when packet size is
       large), or dont (do not set DF flag).
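For a UDP test program like the one in the question, the same strategies are available per socket through the IP_MTU_DISCOVER option; a sketch of the mapping, following the manpage excerpt above:

/* Sketch: ping's -M strategies map to IP_MTU_DISCOVER values on a UDP
 * socket. With IP_PMTUDISC_DONT the kernel clears DF and fragments
 * locally, so send() no longer fails with EMSGSIZE. */
#include <netinet/in.h>
#include <sys/socket.h>

static int set_pmtu_strategy(int fd, int strategy)
{
    /* IP_PMTUDISC_DO   ~ "-M do"   : DF set, never fragment locally
     * IP_PMTUDISC_WANT ~ "-M want" : use PMTU, fragment locally if needed
     * IP_PMTUDISC_DONT ~ "-M dont" : DF not set                          */
    return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER,
                      &strategy, sizeof(strategy));
}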