How much memory on average is consumed by the Linux kernel (in kernel address space) per TCP/IP network connection?
4 Answers
For a TCP connection, the memory consumed depends on:
- the size of sk_buff (the internal networking structure used by the Linux kernel)
- the read and write buffers for the connection

The size of the buffers can be tweaked as required. Check these variables:
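One way to list them (a sketch; the grep pattern is only an approximation of the relevant sysctl keys, and the exact set varies by kernel):

sysctl -a 2>/dev/null | grep -E 'net\.(core|ipv4)\..*mem'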
These specify the maximum and default memory buffer usage for all network connections in the kernel:
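The global per-socket limits are along these lines (the values shown are examples only; defaults differ between kernels and distributions):

net.core.rmem_default = 212992
net.core.wmem_default = 212992
net.core.rmem_max = 212992
net.core.wmem_max = 212992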
These specify buffer memory usage specific to TCP connections:
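The TCP-specific tunables hold three numbers each (again, example values only):

net.ipv4.tcp_rmem = 4096 131072 6291456
net.ipv4.tcp_wmem = 4096 16384 4194304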
The three values specified are the "min default max" buffer sizes. So to start with, Linux will use the default read and write buffer sizes for each connection. As the number of connections increases, these buffers will be reduced (at most down to the specified min value). The same applies to the max buffer value.
These values can be set using sysctl -w KEY=VALUE. For example, the commands below ensure the read and write buffers for each connection are 4096 bytes each.
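A sketch of what those commands could look like (clamping min, default and max to 4096 bytes in both directions; this throttles throughput, so treat it as an illustration rather than a recommendation):

sysctl -w net.ipv4.tcp_rmem='4096 4096 4096'
sysctl -w net.ipv4.tcp_wmem='4096 4096 4096'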
It also depends on which layer. In a pure bridging scenario, there's just the bridge-level FDB. When routing comes into play, there's the routing table and the IP-level FDB/neighbor db. And finally, once a local socket is in play, you of course have the window size, the socket buffers (both send and receive; they defaulted to 128k last time I checked), and fragment lists (if used). That is where your memory goes, but a clear-cut answer is hard to give with all the parts in use. You can use
ss -m
to obtain a few memory statistics of local stream sockets.
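For example (a sketch; -t restricts the output to TCP sockets, and the figures inside skmem:(...) are illustrative):

ss -tm
# skmem:(r0,rb369280,t0,tb87040,f0,w0,o0,bl0,d0)
# r/t are the bytes currently allocated to the receive/transmit buffers,
# rb/tb are the corresponding buffer limits for that socket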
It depends. On many, many things.
I think an idle connection will take a few hundred bytes.
But if there's data in the transmit and/or receive queues, then the consumption increases. The window size roughly limits this consumption.
The extra consumption for data isn't just the bytes in the receive/transmit queues. There is overhead, so a segment with one byte of payload might take something like 2K. TCP tries to reduce this, for example by merging segments into a single sk_buff, but it doesn't always succeed.
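One rough way to see the aggregate picture is /proc/net/sockstat (example output shown; the "mem" figure is in pages, so 3 pages is 12 KiB with a 4 KiB page size):

cat /proc/net/sockstat
# TCP: inuse 12 orphan 0 tw 2 alloc 15 mem 3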
If you mean the active size of the buffers instead of the max buffer limits that define the potentially used memory, you can try to query the current usage as follows:
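The answer's original one-liner isn't reproduced here, but a sketch along the same lines could look like this (it sums the r... and t... fields of ss's skmem:(...) output, i.e. the bytes currently allocated to receive and transmit buffers across all TCP sockets, once per second):

while sleep 1; do
    ss -atmn | grep -o 'skmem:([^)]*)' | awk -F'[(,)]' '{
        for (i = 2; i < NF; i++) {
            if ($i ~ /^r[0-9]/) rx += substr($i, 2)   # r... = bytes allocated for receive
            if ($i ~ /^t[0-9]/) tx += substr($i, 2)   # t... = bytes allocated for transmit
        }
    } END { printf "rx+tx buffer bytes: %d\n", rx + tx }'
done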
That one-liner queries the current status of the system every second (logically taking a snapshot of the system state at that moment) and computes the total bytes used for receive and send buffers. The total number of bytes needed by the kernel is typically surprisingly low, because the data is usually transferred to the user process pretty fast, and a user-mode process typically has its own internal buffers maintained by the programmer.
I don't know if there's an easy way to know how much RAM is actually mapped for these buffers, because the kernel probably avoids acquiring and releasing the memory all the time. The above one-liner outputs the logical number of bytes used for buffering, including the buffers waiting for new connections on listening ports.
The output might look something like this:
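Purely as an illustration (not a recorded measurement; the exact figure will vary):

rx+tx buffer bytes: 1306912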
even if you run the whole test at speed.cloudflare.com with results in the hundreds of megabits. That means the total amount of TCP/IP traffic bytes in kernel buffers was less than 2 MB at all times, even though I transmitted and received 100+ MB of data as fast as possible.