What is the proper way to pass traffic through a GRE tunnel (or any vNIC) using eBPF?

Posted on 2025-01-13 08:01:45


I have a GRE link set up on a VM using the following command: ip tunnel add tap0 mode gre local <foo> remote <bar>, and the counterpart on a different VM (in the same subnet) is exactly the same, except with foo<->bar swapped.
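For concreteness, the full setup on both ends might look like the sketch below; the addresses are hypothetical placeholders, not the original <foo>/<bar> values:

```shell
# VM A (192.0.2.10 / 192.0.2.20 are hypothetical placeholders for <foo>/<bar>)
ip tunnel add tap0 mode gre local 192.0.2.10 remote 192.0.2.20
ip link set tap0 up

# VM B: identical, with local/remote swapped
ip tunnel add tap0 mode gre local 192.0.2.20 remote 192.0.2.10
ip link set tap0 up
```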

I have created an eBPF tc program that calls bpf_clone_redirect to copy packets to the tunnel device on one of the hosts (i.e. duplicating the traffic to the tap0 link):

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

// Assumed map layout -- the original post only shows the lookup below,
// not these definitions.
struct destination {
    __u32 destination_ip; // IPv4 address of the remote tunnel endpoint
    __u32 iface_idx;      // ifindex of the tunnel device
};

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, struct destination);
} destinations SEC(".maps");

SEC("tc")
int tc_ingress(struct __sk_buff *skb) {
    __u32 map_key = 0;
    struct destination *dest = bpf_map_lookup_elem(&destinations, &map_key);

    if (dest != NULL) {
        struct bpf_tunnel_key key = {};
        int ret;

        key.remote_ipv4 = dest->destination_ip;
        key.tunnel_id = dest->iface_idx;
        key.tunnel_tos = 0;
        key.tunnel_ttl = 64;
        ret = bpf_skb_set_tunnel_key(skb, &key, sizeof(key), 0);
        if (ret < 0) {
            // Error setting the tunnel key; do not redirect, simply continue.
            return TC_ACT_OK;
        }
        // A zero flags argument means the socket buffer is
        // cloned to the egress path of the target interface.
        bpf_clone_redirect(skb, dest->iface_idx, 0);
    }
    return TC_ACT_OK;
}

I see the traffic passed to the GRE link tap0 by running tcpdump -i tap0, but I don't see the traffic on its remote counterpart...

  1. Is it necessary in such a scenario to define an address for the device (à la ip addr <> dev tap0)?
  2. What is the proper way of defining such tunnels?
  3. If I have iptables rules set up on eth0, would they block traffic sent to the GRE link? If "yes", is there a way to bypass those?
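(For reference on question 3: GRE is carried as IP protocol 47, so a restrictive OUTPUT or FORWARD policy on eth0 can drop the encapsulated packets after they leave tap0. A sketch of how one might check for this and, if needed, allow it; the rule shown is an assumption, not from the original post:)

```shell
# Inspect per-rule packet counters to see whether anything on eth0
# is matching (and dropping) the encapsulated traffic:
iptables -L OUTPUT -v -n

# GRE is IP protocol 47; one way to explicitly allow it (hypothetical rule):
iptables -I OUTPUT -o eth0 -p gre -j ACCEPT
```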


Comments (1)

一页 2025-01-20 08:01:46


For anyone trying to route to a GRE tunnel, please use the provided bpf_skb_set_tunnel_key struct. See the examples in https://github.com/torvalds/linux/blob/5bfc75d92efd494db37f5c4c173d3639d4772966/samples/bpf/tc_l2_redirect_kern.c.

Per my use case:

For anyone trying to create a GRE tunnel on Azure VMs, please note that this is, currently, not possible, per https://learn.microsoft.com/en-us/answers/questions/496591/does-azure-virtual-network-support-gre.html
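For context on the first point: bpf_skb_set_tunnel_key only takes effect on a tunnel device created in metadata (collect_md) mode, i.e. one created with the external flag rather than fixed local/remote addresses, which is the kind of device the linked kernel sample targets. A sketch of such a setup, with hypothetical device and object-file names:

```shell
# Create a GRE device in metadata/collect_md mode ("external" means the
# tunnel key comes from the BPF program, not from static local/remote):
ip link add name gre_md type gretap external
ip link set gre_md up

# Attach the tc program (tc_prog.o is a hypothetical object file name):
tc qdisc add dev eth0 clsact
tc filter add dev eth0 ingress bpf da obj tc_prog.o sec tc
```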
