How much memory does the Linux kernel consume per TCP/IP network connection?

Posted on 2024-12-23 06:49:01

How much memory on average is consumed by the Linux kernel (in kernel address space) per TCP/IP network connection?

Comments (4)

早茶月光 2024-12-30 06:49:01

For a TCP connection, the memory consumed depends on:

  1. the size of the sk_buff (the internal networking structure used by the Linux kernel)

  2. the read and write buffers for the connection

The size of the buffers can be tweaked as required:

root@x:~# sysctl -A | grep net | grep mem

Check for these variables.

These specify the maximum and default memory buffer usage for all network connections in the kernel:

net.core.wmem_max = 131071
net.core.rmem_max = 131071
net.core.wmem_default = 126976
net.core.rmem_default = 126976

These specify the buffer memory usage specific to TCP connections:

net.ipv4.tcp_mem = 378528   504704  757056
net.ipv4.tcp_wmem = 4096    16384   4194304
net.ipv4.tcp_rmem = 4096    87380   4194304

The three values specified are the "min default max" buffer sizes. To start with, Linux uses the default value of the read and write buffers for each connection. As the number of connections increases, these buffers are reduced (at most down to the specified min value). The same applies to the max buffer value.

These values can be set using sysctl -w KEY=VALUE. For example, the commands below ensure the read and write buffers for each connection are 4096 bytes each:

sysctl -w net.ipv4.tcp_rmem='4096 4096 4096'
sysctl -w net.ipv4.tcp_wmem='4096 4096 4096'
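As a rough, back-of-the-envelope sketch (the connection count below is an arbitrary example, not a value from the settings above), you can bound the worst case by assuming every connection grows to its per-socket maximum:

# Hedged upper bound on socket buffer memory if every one of N
# connections grew to its per-socket rmem/wmem maximum.
# N=10000 is an arbitrary example value.
N=10000
rmax=$(awk '{print $3}' /proc/sys/net/ipv4/tcp_rmem)
wmax=$(awk '{print $3}' /proc/sys/net/ipv4/tcp_wmem)
echo "worst case: $(( (rmax + wmax) * N / 1024 / 1024 )) MiB for $N connections"

In practice the real figure will usually be far lower, since actual usage is capped globally by net.ipv4.tcp_mem (which is counted in pages, not bytes).
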
你げ笑在眉眼 2024-12-30 06:49:01

It also depends on which layer. In the case of a pure bridging scenario, there's just the bridge-level FDB. When routing comes into play, there's the routing table and the IP-level FDB/neighbor database. And finally, once a local socket is in play, you of course have the window size, socket buffers (both send and receive; they defaulted to 128k last time I checked), and fragment lists (if used). That is where your memory goes, but a clear-cut answer is hard to give with all the parts in use. You can use ss -m to obtain some memory statistics for local stream sockets.
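
For example, a minimal sketch restricted to established TCP sockets (field meanings per ss(8); the exact output format varies between iproute2 versions):

# Per-socket kernel memory counters for established TCP sockets.
# In the skmem:(...) field, r/t are the bytes currently allocated for
# the receive/send side and rb/tb are the corresponding buffer limits.
ss -tm state established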

初与友歌 2024-12-30 06:49:01

It depends. On many, many things.
I think an idle connection will take a few hundred bytes.
But if there is data in the transmit and/or receive queues, the consumption increases. The window size roughly limits this consumption.
The extra consumption for data isn't just the bytes in the receive/transmit queues. There is overhead, so a segment carrying one byte might take something like 2K. TCP tries to reduce this, for example by merging segments into a single sk_buff, but it doesn't always succeed.
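
A quick way to see that overhead in aggregate, assuming a system-wide figure is enough rather than a per-connection one, is the kernel's own accounting in /proc/net/sockstat:

# System-wide TCP socket memory the kernel has actually allocated.
# In the "TCP:" line, "mem" is counted in pages (typically 4 KiB),
# so multiply by the page size for bytes.
cat /proc/net/sockstat
getconf PAGESIZE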

じее 2024-12-30 06:49:01

If you mean the active size of the buffers, rather than the maximum buffer limits that define how much memory could potentially be used, you can query the current usage as follows:

while true; do sleep 1; ss -at | awk '$2 > 0 || $3 > 0 { totalreceive += $2; totalsend += $3; } END { print "Totals: Recv-Q", totalreceive, " Send-Q:", totalsend}'; done

That one-liner queries the current status of the system every second (logically taking a snapshot of the system at that moment) and computes the total number of bytes used for the receive and send buffers. The total is typically surprisingly low for the kernel, because data is usually handed to the user process quickly, and a user-mode process typically has its own internal buffers maintained by the programmer.

I don't know whether there is an easy way to tell how much RAM is actually mapped for these buffers, because the kernel probably avoids acquiring and releasing that memory all the time. The one-liner above outputs the logical number of bytes used for buffering, including the buffers waiting on new connections to listening ports.

The output might look something like this:

Totals: Recv-Q 0  Send-Q: 10497
Totals: Recv-Q 11584  Send-Q: 9572
Totals: Recv-Q 0  Send-Q: 9572
Totals: Recv-Q 0  Send-Q: 10513
Totals: Recv-Q 2092360  Send-Q: 10175
Totals: Recv-Q 0  Send-Q: 9572
Totals: Recv-Q 0  Send-Q: 1446152
Totals: Recv-Q 0  Send-Q: 1758452
Totals: Recv-Q 1973624  Send-Q: 10513
Totals: Recv-Q 0  Send-Q: 9572
Totals: Recv-Q 11584  Send-Q: 9572
Totals: Recv-Q 0  Send-Q: 9572
Totals: Recv-Q 0  Send-Q: 614740
Totals: Recv-Q 0  Send-Q: 1446152
Totals: Recv-Q 0  Send-Q: 1600220
Totals: Recv-Q 0  Send-Q: 1586340
Totals: Recv-Q 0  Send-Q: 775602
Totals: Recv-Q 1448  Send-Q: 9572
Totals: Recv-Q 57920  Send-Q: 9572
Totals: Recv-Q 0  Send-Q: 9572
Totals: Recv-Q 23168  Send-Q: 9572
Totals: Recv-Q 0  Send-Q: 9572

even if you run a full test at speed.cloudflare.com with results in the hundreds of megabits. That means the total amount of TCP/IP traffic bytes in kernel buffers stayed below 2 MB at all times, even though I transmitted and received 100+ MB of data as fast as possible.
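
A possible variant (a sketch assuming the skmem:(...) layout documented in ss(8); field formatting can differ between iproute2 versions) sums what the kernel has actually allocated per socket instead of the logical Recv-Q/Send-Q byte counts:

# Sum the kernel's skmem allocation counters (r... = receive bytes,
# t... = send bytes) across established TCP sockets, for comparison
# with the Recv-Q/Send-Q totals from the one-liner above.
ss -tmn state established | grep -o 'skmem:([^)]*)' | \
  awk -F'[(,)]' '{ for (i = 2; i < NF; i++) {
        if ($i ~ /^r[0-9]/) recv += substr($i, 2)
        if ($i ~ /^t[0-9]/) send += substr($i, 2)
      } }
      END { print "allocated: recv", recv + 0, "bytes, send", send + 0, "bytes" }'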
