How to set the maximum TCP segment size on Linux?

Posted 2024-09-26 01:08:43

In Linux, how do you set the maximum segment size that is allowed on a TCP connection? I need to set this for an application I did not write (so I cannot use setsockopt to do it). I need to set this ABOVE the mtu in the network stack.

I have two streams sharing the same network connection. One sends small packets periodically, which need absolute minimum latency. The other sends tons of data--I am using SCP to simulate that link.

I have set up traffic control (tc) to give the minimum-latency traffic high priority. The problem I am running into, though, is that the TCP packets coming down from SCP end up with sizes up to 64K bytes. Yes, these are broken into smaller packets based on the mtu, but this unfortunately occurs AFTER tc prioritizes the packets. Thus, my low-latency packet gets stuck behind up to 64K bytes of SCP traffic.

This article indicates that on Windows you can set this value.

Is there something on Linux I can set? I've tried ip route and iptables, but these are applied too low in the network stack. I need to limit the TCP packet size before tc, so it can prioritize the high priority packets appropriately.
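For reference, a rough sketch of the kind of tc setup I mean (the interface name eth0, the low-latency UDP port 5001, and the choice of a prio qdisc are illustrative assumptions, not my exact configuration):

tc qdisc add dev eth0 root handle 1: prio
# low-latency flow (assumed UDP port 5001) goes to the highest-priority band
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff match ip dport 5001 0xffff flowid 1:1
# bulk SCP traffic (TCP port 22) goes to the lowest-priority band
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip protocol 6 0xff match ip dport 22 0xffff flowid 1:3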

Comments (5)

三五鸿雁 2024-10-03 01:08:43

Are you using TCP segmentation offload to the NIC? (You can use "ethtool -k $your_network_device" to see the offload settings.) As far as I know, that is the only way you would see 64K TCP packets with a device MTU of 1500. Not that this answers the question, but it might help avoid a misdiagnosis.
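If offload does turn out to be involved, a rough sketch of checking and disabling it (assuming the device is eth0; note that turning offload off moves segmentation work back onto the CPU):

ethtool -k eth0 | grep -i segmentation    # show tcp-segmentation-offload / generic-segmentation-offload status
ethtool -K eth0 tso off gso off           # turn both off so segments leave the stack at MTU size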

葬花如无物 2024-10-03 01:08:43

The ip route command with the advmss option can be used to set the MSS value.

ip route add 192.168.1.0/24 dev eth0 advmss 1500
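To double-check what is actually advertised after adding such a route, one option (reusing the eth0 and 192.168.1.0/24 values from the example above; tcpdump prints the MSS option carried in SYN packets):

ip route show 192.168.1.0/24              # confirm the advmss attribute on the route
tcpdump -n -i eth0 'tcp[13] & 2 != 0'     # capture SYN/SYN-ACK packets; the mss option is shown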
凉墨 2024-10-03 01:08:43

The upper bound of the advertised TCP MSS is the MTU of the first hop route. If you're seeing 64k segments, that tends to indicate that the first hop route MTU is excessively large - are you using loopback or something for testing?
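One quick way to check the first-hop MTU (the destination address below is just a placeholder; on recent Linux kernels the loopback device defaults to a 65536-byte MTU, which would line up with 64k segments):

ip route get 192.168.1.5    # shows which route and device a given destination actually uses
ip link show lo             # loopback MTU, typically 65536 on modern kernels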

谢绝鈎搭 2024-10-03 01:08:43

MSS = MTU - 40 bytes (the standard TCP/IP header overhead of 40 bytes: 20 bytes IP + 20 bytes TCP).

If the MTU is 1500 bytes then the MSS will be 1460 bytes.

他夏了夏天 2024-10-03 01:08:43

You are definitely misdiagnosing the problem; as someone else pointed out, tc doesn't see TCP packets, it sees IP packets, and they'd already be in chunks at that point.

You are probably just experiencing bufferbloat: you're overloading the outbound queue in a totally separate device (probably a DSL or cable modem). The only fix is to tell tc to limit your outbound bandwidth to less than the modem's bandwidth, e.g. using TBF.
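A rough sketch of what that TBF shaping could look like (the eth0 device and the 900 kbit rate are assumptions; in practice you'd pick a rate slightly below the modem's real uplink speed):

tc qdisc add dev eth0 root tbf rate 900kbit burst 16kbit latency 50ms    # cap the outbound rate so the queue builds on the host, where tc can manage it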
