Low-latency TCP stream on Linux
I have an embedded application with this requirement: one outgoing TCP network stream needs absolute highest priority over all other outgoing network traffic. If there are any packets waiting to be transmitted on that stream, they should be the next packets sent. Period.
My measure of success is as follows: measure the high-priority latency with no background traffic, then add background traffic and measure again. The difference in latency should be the time to send one low-priority packet. With a 100 Mbps link and mtu=1500, that is roughly 150 us (a full frame is about 1538 bytes on the wire including Ethernet framing, and 1538 bytes × 8 bits / 100 Mbps ≈ 123 us). My test system is two Linux boxes connected by a crossover cable.
I have tried many, many things, and although I have improved latency considerably, I have not achieved the goal (I currently see 5 ms of added latency with background traffic). I posted another, very specific question already, but thought I should start over with a general question.
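For reference, one of the configurations I tried was a simple PRIO qdisc with a u32 filter steering the high-priority stream into the first band. This is just a sketch; eth0 and port 5000 stand in for my real interface and port:

    # 3-band PRIO qdisc at the root; band 0 is always dequeued first,
    # and the default priomap puts best-effort traffic in band 1
    tc qdisc add dev eth0 root handle 1: prio

    # Steer the high-priority TCP stream (TCP = IP protocol 6;
    # dport 5000 is a placeholder) into band 0
    tc filter add dev eth0 parent 1: protocol ip u32 \
        match ip protocol 6 0xff \
        match ip dport 5000 0xffff \
        flowid 1:1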
First Question: Is this possible with Linux?
Second Question: If so, what do I need to do?
- tc?
- What qdisc should I use?
- Tweak kernel network parameters? Which ones?
- What other things am I missing?
Thanks for your help!
Eric
Update 10/4/2010:
I set up tcpdump on both the transmit side and the receive side. Here is what I see on the transmit side (where things seem to be congested):
0 us Send SCP (low priority) packet, length 25208
200 us Send High priority packet, length 512
On the receive side, I see:
~ 100 us Receive SCP packet, length 548
170 us Receive SCP packet, length 548
180 us Send SCP ack
240 us Receive SCP packet, length 548
... (Repeated a bunch of times)
2515 us Receive high priority packet, length 512
The problem appears to be the length of the SCP packet (25208 bytes). It is broken up into multiple packets based on the MTU (which I had set to 600 for this test). However, that segmentation happens in a lower network layer than the traffic control, and thus my latency is being determined by the maximum TCP transmit packet size, not the MTU! Arghhh..
Anyone know a good way to set the default maximum packet size for TCP on Linux?
You might want to check the settings on your NIC driver. Some drivers coalesce interrupts, which improves throughput at the cost of increased latency.
http://www.29west.com/docs/THPM/latency-interrupt-coalescing.html
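Coalescing can usually be inspected and turned down with ethtool; a sketch (parameter support varies by driver):

    # Show the current coalescing settings
    ethtool -c eth0

    # Ask the driver to interrupt immediately rather than batching
    ethtool -C eth0 rx-usecs 0 rx-frames 1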
Also, I don't know if the NIC is buffering multiple output packets, but if it is, that will make it harder to enforce the desired priorities: if there are multiple low-priority packets buffered up in the NIC, the kernel probably doesn't have a way to tell the NIC "forget about that stuff I already sent you, send this high-priority packet first".
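One possible mitigation, assuming the driver supports it, is to shrink the transmit ring so fewer packets can queue up inside the NIC below the qdisc:

    # Show the current ring sizes, then shrink the TX ring
    ethtool -g eth0
    ethtool -G eth0 tx 64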
--- update ---
If the problem is long TCP segments, I believe you can control the maximum segment size the TCP layer advertises via the mtu option on ip route. Note that you would need to do this on the receive side: the MSS a host advertises during the TCP handshake caps the size of the segments its peer will send.