ip_conntrack_tcp_timeout_established not applied to the whole subnet

Posted 2025-01-06 05:57:17

I've got a NAT setup with thousands of devices connected to it. The gateway gets its internet connection through eth0, and the devices on the LAN side connect to eth1 on the gateway.

I have the following setup with iptables:

/sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
/sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
/sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
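
For completeness, a quick way to double-check that these rules are actually loaded is to list them with packet counters; this is just standard iptables listing, nothing specific to this setup:

# Show the NAT and forwarding rules together with their hit counters.
/sbin/iptables -t nat -L POSTROUTING -n -v
/sbin/iptables -L FORWARD -n -v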

eth1 is configured as follows:

    ip:     192.168.0.1
    subnet: 255.255.0.0

Clients are assigned the IPs 192.168.0.2 through 192.168.255.254.

In /etc/sysctl.conf I have the following setting for ip_conntrack_tcp_timeout_established:

net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=1200
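
To confirm which value the kernel is actually using, the tunable can be read back at runtime; the exact /proc path depends on whether the old ip_conntrack or the newer nf_conntrack naming is in use on your kernel, so only one of the following will typically exist:

# Read the established-TCP timeout back from the running kernel.
sysctl net.ipv4.netfilter.ip_conntrack_tcp_timeout_established 2>/dev/null
cat /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established 2>/dev/null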

Because of the number of client devices that connect to this gateway, I can't use the default 5-day timeout.

This seems to work well, and I have tested the setup with over 10,000 client devices.

However, the issue I am seeing is that the TCP established timeout of 1200 is only being applied to devices in the IP range 192.168.0.2 through 192.168.0.255. All devices with IPs in the 192.168.1.x through 192.168.255.x range are still using the 5-day default timeout.

This leaves far too many "ESTABLISHED" connections in the /proc/net/ip_conntrack table, and it eventually fills up: even though these entries should be timing out within 20 minutes, they show that they will time out in 5 days.
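
As a side note, table pressure and the per-entry timers can be inspected directly; the sketch below assumes conntrack-tools is installed and the nf_conntrack /proc naming (older kernels expose the same counters under ip_conntrack_count / ip_conntrack_max):

# How full is the connection tracking table?
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# List established TCP entries; the third column is the remaining timeout in seconds.
conntrack -L -p tcp --state ESTABLISHED | head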

Obviously I am missing a setting somewhere or have something configured incorrectly.

Any suggestions?

Thanks

Comments (1)

热血少△年 2025-01-13 05:57:18

As @StephenHankinson mentions, connections that already exist at the time the sysctl variable is changed (cf. conntrack -L) do not have their timeout reset. This should normally not be a problem, as those connections will eventually end, but NFCT can be forced to forget all conntrack entries with conntrack -F. Note, however, that this might kill existing connections if your ruleset does not permit "NEW" connections that do not begin with a TCP SYN.
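
A minimal sketch of that flush, assuming conntrack-tools is installed and the commands are run as root:

# Forget all tracked connections; surviving flows are re-learned from their next packet,
# and the new entries pick up the 1200-second established timeout.
conntrack -F

# Afterwards, verify that re-created entries carry the shorter timer (third column).
conntrack -L -p tcp --state ESTABLISHED | head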
