Custom health-check configuration for a GKE internal TCP load balancer


I need to expose a service in GKE via an internal load balancer (the service can terminate TLS on its own).

There is a clear guideline on how to do that, and everything works as expected except for one thing. The LB that gets created automatically is configured with an HTTP health check at the hard-coded path /healthz; however, the service implements its health check at a different path. As a result, the load balancer never "sees" the backing instance groups as healthy.

Is there a way to provide a custom health-check config to an internal TCP load balancer in GKE?

Just for context: I tried to follow the approach described in another guide on configuring Ingress features (by creating a BackendConfig and annotating the Service accordingly), but unfortunately that does not work for the TCP load balancer (while it does if I deploy an HTTP load balancer with an Ingress resource).
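
For reference, here is a minimal sketch of the kind of Service manifest this setup involves, assuming a service named my-service that terminates TLS on port 443 (all names are placeholders; newer GKE versions use the annotation shown, while older clusters use cloud.google.com/load-balancer-type: "Internal"):

```yaml
# Sketch only: names and ports are hypothetical placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Newer GKE versions; older clusters use
    # cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: https
      port: 443
      targetPort: 443
```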

Comments (2)

虚拟世界 2025-01-20 06:54:13

Yes, you can edit an existing health check or define a new one. You can create a health check using the Cloud Console, the Google Cloud CLI, or the REST API. Refer to this documentation for more information on creating a health check with the TCP protocol.
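
As a hedged sketch of the gcloud route (the resource names, port, and region below are placeholders, and note that changes to resources GKE created for the Service may be reconciled away by its controller):

```sh
# Create a TCP health check against the port the service actually listens on
# (name and port are hypothetical).
gcloud compute health-checks create tcp my-tcp-hc \
    --port=443 \
    --check-interval=10s \
    --timeout=5s \
    --healthy-threshold=2 \
    --unhealthy-threshold=3

# Attach it to the internal load balancer's backend service
# (backend-service name and region are hypothetical).
gcloud compute backend-services update my-backend-service \
    --region=us-central1 \
    --health-checks=my-tcp-hc
```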

Unlike a proxy load balancer, an internal TCP/UDP load balancer doesn't terminate connections from clients and then open new connections to backends. Instead, an internal TCP/UDP load balancer routes connections directly from clients to the healthy backends, without any interruption.

  • There's no intermediate device or single point of failure.
  • Client requests to the load balancer's IP address go directly to the
    healthy backend VMs.
  • Responses from the healthy backend VMs go directly to the clients,
    not back through the load balancer. TCP responses use direct server
    return.

The protocol of the health check does not have to match the protocol of the load balancer. Regardless of the type of health check that you create, Google Cloud sends health check probes to the IP address of the internal TCP/UDP load balancer's forwarding rule, on the network interface in the VPC network selected by the load balancer's backend service.
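
Since the protocols don't have to match, one way to address the question's hard-coded /healthz problem would be an HTTP health check aimed at the service's real endpoint. A sketch with assumed values (the name, port, and path /status/ready are not from the original post):

```sh
# HTTP health check probing a custom path instead of /healthz
# (all values hypothetical).
gcloud compute health-checks create http my-http-hc \
    --port=8080 \
    --request-path=/status/ready
```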

Note: Internal TCP/UDP load balancers use the health check status to determine how to route new connections, as described in Traffic distribution.

The way that an internal TCP/UDP load balancer distributes new connections depends on whether you have configured failover:

  • If you haven't configured failover, an internal TCP/UDP load balancer
    distributes new connections to its healthy backend VMs if at least
    one backend VM is healthy. When all backend VMs are unhealthy, the
    load balancer distributes new connections among all backends as a
    last resort. In this situation, the load balancer routes each new
    connection to an unhealthy backend VM.
  • If you have configured failover, an internal TCP/UDP load balancer
    distributes new connections among VMs in its active pool, according
    to a failover policy that you configure. When all backend VMs are
    unhealthy, you can choose to either drop new connections or
    distribute them among all backends (see the sketch below).
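
A hedged sketch of those failover knobs on the backend service (names and region are placeholders; --drop-traffic-if-unhealthy selects the "drop" behavior when all VMs are unhealthy):

```sh
# Configure failover behavior on the internal LB's backend service
# (resource name and region are hypothetical).
gcloud compute backend-services update my-backend-service \
    --region=us-central1 \
    --failover-ratio=0.5 \
    --drop-traffic-if-unhealthy
```
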
一影成城 2025-01-20 06:54:13

I ran into the same issue, but found the answer in the official GKE documentation, Load balancer health checks:

The externalTrafficPolicy of the Service defines how the load balancer's health check operates. In all cases, the load balancer's health check probers send packets to the kube-proxy software running on each node. The load balancer's health check is a proxy for information that the kube-proxy gathers, such as whether a Pod exists, is running, and has passed its readiness probe. Health checks for LoadBalancer Services cannot be routed to serving Pods. The load balancer's health check is designed to direct new TCP connections to nodes.

externalTrafficPolicy: Cluster

All nodes of the cluster pass the health check, regardless of whether the node is running a serving Pod.

externalTrafficPolicy: Local

Only the nodes with at least one ready, serving Pod pass the health check. Nodes without a serving Pod, and nodes with serving Pods that have not yet passed their readiness probes, fail the health check.

If the serving Pod has failed its readiness probe or is about to terminate, a node might still pass the load balancer's health check even though it does not contain a ready and serving Pod. This situation happens when the load balancer's health check has not yet reached its failure threshold. How the packet is processed in this situation depends on the GKE version.
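
To tie this back to the question, a minimal sketch under stated assumptions (names, ports, and the readiness path /status/ready are hypothetical): setting externalTrafficPolicy: Local on the Service and pointing the Pod's readiness probe at the service's real health endpoint makes the node-level LB health check track Pod readiness:

```yaml
# Sketch only: all names, ports, and paths are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # only nodes with ready serving Pods pass the LB health check
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:latest  # placeholder image
          ports:
            - containerPort: 8443
          readinessProbe:
            httpGet:
              path: /status/ready  # the service's actual health endpoint
              port: 8443
              scheme: HTTPS
```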

I hope you will find it helpful.
