Network policy behavior in a multi-node cluster
I have a multi-node cluster setup. There are Kubernetes network policies defined for the pods in the cluster. I can access the services or pods using their clusterIP/podIP only from the node where the pod resides. For services with multiple pods, I cannot access the service from the node at all (I guess the service only works when it happens to route the traffic to a pod that resides on the same node I am calling from).
Is this the expected behavior?
Is it a Kubernetes limitation or a security feature?
For debugging etc., we might need to access the services from the node. How can I achieve it?
1 Answer
No, it is not the expected behavior for Kubernetes. Pods should be reachable from all nodes inside the same cluster through their internal IPs.
A `ClusterIP` service exposes the service on a cluster-internal IP and makes it reachable from within the cluster - it is the default service type, as stated in the Kubernetes documentation. Services are not node-specific: they can point to a pod regardless of where it runs in the cluster at any given moment. Also make sure that you are using the cluster-internal `port:` when trying to reach the service. If you can still connect to the pod only from the node where it is running, you might need to check whether something is wrong with your networking - e.g., whether UDP ports are blocked.
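For reference, a minimal `ClusterIP` Service could look like the sketch below; the names, labels and port numbers are placeholders rather than anything from your setup. The `port` value is the cluster-internal port to use together with the clusterIP, while `targetPort` is the port the container actually listens on.

```yaml
# Minimal ClusterIP Service sketch - names, labels and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service           # hypothetical service name
spec:
  type: ClusterIP            # the default type, shown here for clarity
  selector:
    app: my-app              # must match the labels on the backing pods
  ports:
    - port: 80               # cluster-internal port: <clusterIP>:80
      targetPort: 8080       # port the container listens on
```

From inside the cluster you would then reach it as `<clusterIP>:80` (or via the service DNS name), not via the `targetPort`.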
EDIT: Concerning network policies - by default, a pod is non-isolated for both egress and ingress, i.e. if no `NetworkPolicy` resource is defined for the pod in Kubernetes, all traffic is allowed to/from this pod - the so-called `default-allow` behavior. Basically, without network policies all pods are allowed to communicate with all other pods/services in the same cluster, as described above.
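If you want to make that default explicit for the pods of one namespace (for instance while debugging policy issues), an allow-all ingress policy along these lines reproduces the `default-allow` behavior; the namespace name is an assumption:

```yaml
# Sketch: explicitly allow all ingress to every pod in a (hypothetical) namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
  namespace: my-namespace    # hypothetical namespace
spec:
  podSelector: {}            # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - {}                     # an empty rule matches all sources and all ports
```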
If one or more `NetworkPolicy` resources are applied to a particular pod, it will reject all traffic that is not explicitly allowed by those policies (meaning, a `NetworkPolicy` that both selects the pod and has "Ingress"/"Egress" in its `policyTypes`) - the `default-deny` behavior.
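As an illustration of that `default-deny` effect, a policy like the following sketch (namespace name assumed) selects every pod in the namespace for ingress but lists no allowed sources, so all inbound traffic to those pods is dropped until other policies add exceptions:

```yaml
# Sketch: deny all ingress to every pod in a (hypothetical) namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace    # hypothetical namespace
spec:
  podSelector: {}            # selects all pods in the namespace
  policyTypes:
    - Ingress                # pods become isolated for ingress...
  # ...and with no ingress rules listed, no inbound traffic is allowed
```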
So yes, it is the expected behavior for Kubernetes `NetworkPolicy` - when a pod is isolated for ingress/egress, the only connections allowed into/from the pod are those from the pod's node and those allowed by the connection list of the `NetworkPolicy` objects defined. To be compatible with it, Calico network policy follows the same behavior for Kubernetes pods.
A `NetworkPolicy` is applied to pods within a particular namespace, and it can allow traffic from pods in the same or a different namespace with the help of selectors. As for node-specific policies - nodes can't be targeted by their Kubernetes identities; instead, CIDR notation should be used in the form of an `ipBlock` in the pod/service `NetworkPolicy` - particular IP ranges are selected to allow as ingress sources or egress destinations for the pod/service. Whitelisting the Calico IP addresses of each node might be a valid option in this case; please have a look at the similar issue described here.
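For the debugging scenario in the question - reaching pods/services from a node - one possible sketch is an ingress rule with an `ipBlock` that covers the node addresses; the CIDR, namespace and pod labels below are assumptions to replace with your cluster's values:

```yaml
# Sketch: allow ingress to selected pods from the nodes' IP range.
# The CIDR, namespace and labels are placeholders - adjust to your cluster
# (e.g. the nodes' InternalIP range or each node's Calico tunnel address).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-nodes
  namespace: my-namespace        # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: my-app                # hypothetical pod labels
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24    # hypothetical node IP range
```

Depending on the CNI and how the traffic is routed, the source address the pod sees may be the node's Calico tunnel address rather than its primary InternalIP, which is why whitelisting those per-node addresses, as mentioned above, can be necessary.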