I have an OpenShift project with 3 pods: FE, BE1, BE2.
FE communicates with BE1 via REST API, and BE1 communicates with BE2 via REST API too.
I need to implement replication of the pods. My idea is to make a copy of each pod, so that if one pod in a set stops working, traffic is redirected to the other set.
It will be like this:
Set_1 : FEr1 -> BE1r1 -> BE2r1,
Set_2 : FEr2 -> BE1r2 -> BE2r2
FE is a React app in a container.
BE1 and BE2 are Java apps in separate containers.
I don't know how to configure this. Every container contains pipeline configuration and application.template files.
Does anybody know how to do this, or maybe another way to achieve it?
Thanks!
If I'm understanding you correctly, your question essentially boils down to "How do I run an active-passive K8S Service?" Because if I could give you an answer on how to run an "active-passive service" for FEr1/FEr2, then you could use the same technique for each pod in your "sets". So, to simplify my answer, I'm going to focus on how to have a single "active-passive" service. You can then extrapolate on your own how to create a chain of "active-passive" services.
I will begin with the fact that there is no native "active-passive" Service object in Kubernetes or OpenShift. It's kind of antithetical to most K8S design patterns. So you are going to either have to change your architecture or have to build something fairly customized.
When trying to find a link I could share to demonstrate some of your options, I found this blog post from Paul Dally which details most of the options I was going to outline. It is a great exploration of active-passive services in Kubernetes. For convenience, I'm going to summarize here and add some commentary. But he goes into some great detail and I'd recommend reading the original blog post from Paul.
His option #1, and his recommended approach, is essentially "don't do that". He talks about the disadvantages of an active-passive approach and why K8S patterns generally don't take an active-passive approach. I concur: your best option is just to rearchitect your services so that they are not active-passive.
His option #2 is essentially another recommendation of "don't do that". I will paraphrase his second option as "if you are in a situation where you are forced to only have one active pod, the more Kubernetes-native approach would be to only run one pod". In this option you use only a single pod, but rely on Kubernetes-native Deployments/StatefulSets and liveness probes to keep that single pod available. Obviously, if your pod is slow to start up, this has some challenges.
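As a minimal sketch of that single-pod approach (the name `be1`, image, port, and `/health` endpoint are all assumptions, not from the post), the Deployment might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: be1
spec:
  replicas: 1                         # only one pod, kept alive by Kubernetes itself
  selector:
    matchLabels:
      app: be1
  template:
    metadata:
      labels:
        app: be1
    spec:
      containers:
      - name: be1
        image: my-registry/be1:latest # assumed image name
        ports:
        - containerPort: 8080
        livenessProbe:                # restart the container if it stops responding
          httpGet:
            path: /health             # assumed health endpoint
            port: 8080
          initialDelaySeconds: 30     # allow for slow Java startup
          periodSeconds: 10
        readinessProbe:               # only receive traffic once actually ready
          httpGet:
            path: /health
            port: 8080
          periodSeconds: 5
```

The `initialDelaySeconds` is where the slow-startup pain shows up: while the replacement pod is starting, there is simply nothing serving traffic.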
His option #3 is basically his option of last resort. To quote his article, "Make sure that you have fully considered and thoughtfully ruled out the preceding options before continuing with an active/passive load balancing approach." But then he details an approach where you could use a normal K8S Deployment/StatefulSet to create your pods and a normal K8S Service to route traffic between them. But, so that they don't have active-active traffic balancing, you add an additional selector to the Service, e.g. "role=active". Since none of the pods will have this label, the selector will prevent either of the pods from being routed to.
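A sketch of that selector trick (the names and port numbers are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: be1
spec:
  selector:
    app: be1
    role: active   # no pod carries this label by default, so the
                   # Service matches nothing until a failover manager
                   # applies the label to exactly one pod
  ports:
  - port: 80
    targetPort: 8080
```

Both the active and passive pods carry `app: be1`, but only the pod that has been patched with `role: active` will receive traffic through this Service.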
But this leads to the trick: you create an additional Deployment (and Pod) whose sole job is to maintain that "role=active" label. It's perfectly possible to patch the labels of a running pod. So he provides some pseudo-code for a script that you could run in that "failover manager" pod. Essentially the "failover manager" is just checking for availability, by whatever rules you define, and then controls the failover from the active to passive pod by deleting and adding the label.
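A rough shell sketch of such a failover loop (the pod names, namespace, and health check are my assumptions; Paul's post has the original pseudo-code). It needs a live cluster and RBAC permission to patch pod labels, so treat it as an illustration rather than a tested script:

```shell
#!/bin/sh
# Hypothetical failover manager: keeps the "role=active" label on a
# healthy pod so the Service selector routes traffic to it.
ACTIVE_POD="be1-active-0"     # assumed pod names
PASSIVE_POD="be1-passive-0"
NAMESPACE="my-project"

while true; do
  # Probe the active pod's (assumed) health endpoint from inside it
  if kubectl exec -n "$NAMESPACE" "$ACTIVE_POD" -- \
       wget -q -O /dev/null http://localhost:8080/health; then
    # Active pod healthy: make sure it holds the label
    kubectl label pod -n "$NAMESPACE" "$ACTIVE_POD" role=active --overwrite
  else
    # Failover: strip the label from the failed pod ("role-" removes it)
    kubectl label pod -n "$NAMESPACE" "$ACTIVE_POD" role- 2>/dev/null
    kubectl label pod -n "$NAMESPACE" "$PASSIVE_POD" role=active --overwrite
  fi
  sleep 10
done
```

Notice how much of this is policy you now own: the probe, the interval, failback behavior, and what happens if the manager pod itself dies.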
He does talk about the challenges of this, including making sure it's hardened enough and has the proper permissions. I'd suggest that if you take this approach, you make it a full-fledged operator. Because essentially that's what this kind of approach is: writing a custom operator.
I will also, however, mention another similar approach that I'll call option #4. Essentially what you are doing with option #3 is creating custom routing logic by patching the Service. You could just embrace that custom routing approach and deploy something like your own HAProxy. I don't have a sample config for you. But active-passive failover is a fairly well-explored area for HAProxy. You are adding an additional layer of routing, but you are using more off-the-shelf functionality rather than patching Services on-the-fly.
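By way of illustration only (an assumed config sketch, with made-up service names and ports), HAProxy's `backup` keyword gives active-passive behavior out of the box:

```
# haproxy.cfg sketch: traffic goes to set1 while its health checks pass;
# set2 only receives traffic when set1 is marked down.
defaults
  mode http
  timeout connect 5s
  timeout client  30s
  timeout server  30s

frontend fe_in
  bind *:80
  default_backend be1_pool

backend be1_pool
  option httpchk GET /health                   # assumed health endpoint
  server set1 be1-set1.svc:8080 check
  server set2 be1-set2.svc:8080 check backup   # passive until set1 fails
```

Here the failover logic lives entirely in HAProxy's health checking, so nothing needs to patch Kubernetes objects at runtime.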