How to receive multicast data on a non-default interface of a multihomed server

Published 2024-10-30 13:08:25

I have a linux server with two NICs (eth0 and eth1), and have set eth0 as default in "ip route." Now I would like to receive multicast packets on eth1. I have added "224.0.20.0/24 dev eth1 proto static scope link" to the routing table, and I connect as follows:

sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_IP);

// bind to port 12345, address INADDR_ANY
bind(sock, (struct sockaddr *)&bind_addr, sizeof(bind_addr));

// join multicast group 224.0.20.100 on the interface with address 10.13.0.7 (= eth1)
setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &imreq, sizeof(imreq));

According to ip maddr the socket has joined that group on the right interface, and tshark -i eth1 shows that the multicast packets are actually arriving.

However, I don't get any packets when calling recvfrom(sock). If I set "ip route default" to eth1 (instead of eth0), I do get packets via recvfrom. Is this an issue with my code or with my network setup, and what is the correct way of doing this?

(update) solution: caf hinted that this might be the same problem; indeed: after doing echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter I can now receive multicast packets!


Comments (3)

弃爱 2024-11-06 13:08:25

caf's comment that this is a duplicate of receiving multicast on a server with multiple interfaces (linux) answered this! (I'm posting it as an answer for clarity.) Namely, echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter resolved my issue.
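For reference, the same fix expressed with sysctl(8); the interface name eth1 is from the question, and the file name under /etc/sysctl.d/ is arbitrary. The kernel applies the stricter (maximum) of the "all" and per-interface values, so both are worth checking:

```shell
# Inspect the current reverse-path filter settings (0=off, 1=strict, 2=loose)
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.eth1.rp_filter

# Equivalent of the echo above, applied immediately (requires root)
sysctl -w net.ipv4.conf.eth1.rp_filter=0

# Make it persistent across reboots
printf 'net.ipv4.conf.eth1.rp_filter = 0\n' > /etc/sysctl.d/90-rp-filter.conf
```

Loose mode (rp_filter=2, per RFC 3704) is a less drastic alternative to disabling the check entirely: it only requires that some route back to the source exists, on any interface.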

乄_柒ぐ汐 2024-11-06 13:08:25

Try adding a netmask and specifying 10.13.0.7 as the gateway in your routing table entry.

全部不再 2024-11-06 13:08:25

Correct, assuming you had two NICs with a default gw on only one of them.

Multicast uses the unicast routing table to determine the path back to the source. If a multicast packet arrives on an interface other than the one the unicast route back to its source points at, the packet is dropped. This loop-prevention mechanism is called the RPF (Reverse Path Forwarding) check.

In this case the application joined the IGMP group on one NIC, whereas the unicast route back to the source was learned via the other NIC, the one with the default gateway. So the RPF check failed, and no data reached the socket.

You don't need to add any static routes. It should just work when you change the rp_filter value to 0.
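A quick way to see the mismatch the RPF check trips over is ip route get against the sender's address; 10.13.0.99 below is a hypothetical source, substitute the real one:

```shell
# Which interface does the unicast route back to the sender use?
# (10.13.0.99 is a placeholder for the actual multicast source)
ip route get 10.13.0.99

# If it names a different interface than the one the multicast traffic
# arrives on, a strict rp_filter (value 1) will drop the packets.
cat /proc/sys/net/ipv4/conf/eth1/rp_filter
```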
