Difficulty achieving 1Gbit UDP throughput
For UDP packets with a payload of less than 1470 bytes, is it possible to achieve 1Gbit throughput? Due to the small packet size, there should be some bottlenecks in achieving such throughput (I/O, OS, network, etc.). I imagine drivers and hardware might have to be tuned for small packets/high throughput. Has anybody successfully achieved 1Gbit throughput with small UDP packets?
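A quick way to get a baseline is to blast small datagrams from a single process and count how many go out per second. Below is a minimal sketch; the target address, port, payload size and duration are placeholders, and a single-threaded userspace loop like this mostly measures the host's per-packet cost rather than the wire itself.

```python
# Rough sketch: count how many small UDP datagrams one process can push per second.
# The target address, port, payload size and duration are placeholders.
import socket
import time

TARGET = ("192.168.1.100", 9000)   # hypothetical receiver
PAYLOAD = b"\x00" * 64             # small payload, well under 1470 bytes
DURATION = 5.0                     # seconds

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

sent = 0
start = time.time()
while time.time() - start < DURATION:
    sock.sendto(PAYLOAD, TARGET)
    sent += 1

elapsed = time.time() - start
pps = sent / elapsed
print(f"{pps:,.0f} packets/s, ~{pps * len(PAYLOAD) * 8 / 1e6:.1f} Mbit/s of payload")
```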
I've found that hardware has a significantly lower packet-per-second limit than the network's theoretical capacity. For a Broadcom BCM5704S I hit this limit at 69,000 pps, compared to the 1,488,100 pps line rate of gigabit Ethernet.

I've reported some more numbers here: http://code.google.com/p/openpgm/
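For reference, the 1,488,100 pps figure is the theoretical line rate of gigabit Ethernet with minimum-size frames: a 64-byte frame plus the 8-byte preamble and 12-byte interframe gap occupies 84 byte-times on the wire. A quick back-of-the-envelope check:

```python
# Theoretical maximum packet rate of gigabit Ethernet with minimum-size frames.
LINK_BPS = 1_000_000_000   # 1 Gbit/s
MIN_FRAME = 64             # minimum Ethernet frame incl. FCS (bytes)
PREAMBLE = 8               # preamble + start-of-frame delimiter (bytes)
IFG = 12                   # interframe gap (bytes)

wire_bytes = MIN_FRAME + PREAMBLE + IFG            # 84 bytes per packet on the wire
print(f"{LINK_BPS / (wire_bytes * 8):,.0f} pps")   # ~1,488,095 pps
```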
There's a good tutorial on tweaking your network settings (in Linux) to achieve true gigabit speed here: http://datatag.web.cern.ch/datatag/howto/tcp.html
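That tutorial is mostly about TCP sysctls, but the general idea of larger buffers also applies to UDP: at hundreds of thousands of packets per second, a small receive buffer overflows between scheduler wakeups. A minimal sketch, assuming Linux (where the kernel silently caps the requested size at net.core.rmem_max / net.core.wmem_max unless those sysctls are raised); the 4 MiB value is just an example:

```python
# Sketch: request larger send/receive buffers on a UDP socket.
# On Linux the kernel caps these at net.core.rmem_max / net.core.wmem_max,
# so those sysctls may need to be raised for the full request to take effect.
import socket

BUF_SIZE = 4 * 1024 * 1024   # 4 MiB, an arbitrary example value

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)

# Read back what the kernel actually granted (Linux reports double the value set).
print("rcvbuf:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
print("sndbuf:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
```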
Check the documentation for the switch you're using. Switches are limited in the number of packets per second (pps) they can deliver, and often can't sustain 1 Gbit/s if you're sending packets significantly smaller than the maximum payload size.

Another thing to check is whether your network card is doing interrupt coalescing, and what the maximum number of send/receive descriptors it can support is. At this level of throughput, the interrupt service time and context-switching time can become a big overhead on the host system, even with a modern CPU and memory system.

Also, if you're using gigabit over copper, the smallest Ethernet frame the card will emit is 512 bytes, so smaller messages will be padded to that size. This is because of the requirements for carrier sense/collision detection.
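If small frames really are extended to 512 bytes on the wire as described above, the impact on packet rate and useful throughput is easy to estimate; the 64-byte payload below is just an example:

```python
# Rough estimate of packet rate and payload throughput if every small frame
# occupies a 512-byte slot on the wire, plus preamble and interframe gap.
LINK_BPS = 1_000_000_000
SLOT = 512                 # minimum frame size described above (bytes)
PREAMBLE_IFG = 8 + 12      # preamble + interframe gap (bytes)
PAYLOAD = 64               # hypothetical UDP payload (bytes)

pps = LINK_BPS / ((SLOT + PREAMBLE_IFG) * 8)   # ~235,000 pps
print(f"{pps:,.0f} pps, ~{pps * PAYLOAD * 8 / 1e6:.0f} Mbit/s of payload")
```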
What type of network connection are you using? If you're using a 1000BaseTx/Fx link, don't expect more than 80% throughput even with maximum-sized packets. As your packet size decreases, the overhead of interframe spacing, synchronization, Ethernet headers, IP headers and UDP headers grows relative to the payload, degrading your maximum throughput even further.
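To put rough numbers on that trend: the per-packet overhead on the wire is essentially fixed (preamble, interframe gap, Ethernet, IP and UDP headers), so the payload fraction shrinks quickly as the payload gets smaller. A sketch that ignores the minimum-frame padding mentioned above:

```python
# Fraction of the wire occupied by UDP payload for a few payload sizes.
# Per-packet overhead: preamble+SFD (8) + interframe gap (12) + Ethernet header (14)
# + FCS (4) + IP header (20) + UDP header (8) = 66 bytes.
OVERHEAD = 8 + 12 + 14 + 4 + 20 + 8

for payload in (1470, 512, 128, 64, 18):
    wire = payload + OVERHEAD
    print(f"{payload:>5} B payload -> {payload / wire:6.1%} of the wire is payload")
```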
I've previously done some experimenting with throughput on gigabit links on relatively standard PC hardware, albeit only transmitting (via tcpreplay) rather than sending UDP.

The biggest bottleneck I found was simply getting packets to the NIC itself. This can be significantly improved by using a high-speed bus to interface with the NIC (e.g. a 4x PCI Express NIC). But even then there was a very definite packets-per-second limit. Obviously, increasing the packet size lets you use more of your bandwidth while reducing processor load.

Along the same lines as Steve Moyer's comment, there is a theoretical limit to the utilization of any network. In my experiments (done on a completely quiet network) I was seeing a maximum of approximately 900 Mb/s (from memory). This was with CPU loads of 30 to 40%.

It's more likely that the limitation will be imposed by your system hardware (i.e. the PC) than by your network infrastructure - any network switch worth its salt should be capable of sustaining full-speed network access with small packets - certainly at much higher rates than most PCs can cope with.