I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kinda difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
- lb01: haproxy & keepalived
- lb02: haproxy & keepalived
- etcd01: etcd node 1
- etcd02: etcd node 2
- etcd03: etcd node 3
- master-01: k3s in server mode with a taint so it does not accept any workloads (rough install commands are sketched right after this list)
- master-02: same as above, just joining with the token from master-01
- master-03: same as master-02
- worker-01 - worker-03: k3s agents
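For reference, the installs were roughly like this (from memory, so the exact flags, hostnames, and token handling are just placeholders):

```sh
# master-01 .. master-03: k3s server pointed at the external etcd cluster,
# tainted so normal workloads are not scheduled on them
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="https://etcd01:2379,https://etcd02:2379,https://etcd03:2379" \
  --node-taint CriticalAddonsOnly=true:NoExecute
# master-02 and master-03 additionally get the token from master-01 via K3S_TOKEN

# worker-01 .. worker-03: k3s agents joining through the API endpoint
curl -sfL https://get.k3s.io | K3S_URL=https://<api-endpoint>:6443 K3S_TOKEN=<token-from-master-01> sh -s - agent
```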
If I understand it correctly, k3s ships with Flannel pre-installed as the CNI, as well as Traefik as an Ingress Controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and as they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain at my static IP.
Something like this (I don't know exactly how DNS entries are written, this is just from the top of my head for your reference; the entries are working well):
example.com.    A      [[my-ip]]
*.example.com.  CNAME  example.com.
I've currently made a port-forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind, and my Rancher is throwing a 503 after visiting global settings, even though I have not changed anything.
So now my question: how would one actually configure the port-forward? As far as I know k3s comes with a load balancer pre-installed, but how would one configure those port-forwards for HA? The one master node the forward is pointing to could, theoretically, just stop working, and then all services would no longer be reachable from outside.
1 Answer
Assuming your apps are running on port 80 and port 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer and I will have to make some assumptions about your setup, but I will do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that: k3s can run an embedded etcd cluster for you on the server nodes.
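For illustration (hostnames and the token are placeholders), enabling the embedded etcd looks roughly like this:

```sh
# first server: initialise the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# additional servers: join the existing embedded etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<token-from-first-server> sh -s - server \
  --server https://<first-server>:6443
```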
Load Balancing your master nodes
The HAProxy + keepalived nodes would be configured to point to the IPs of your master nodes at port 6443 (TCP). Keepalived will give you a virtual IP, and you would configure your kubeconfig (that you get from k3s) to talk to that IP. On your router you will want to reserve an IP for this (make sure it is not assigned to any other machine).
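A minimal sketch of what that could look like; the IPs, the interface name, and the virtual IP are placeholders you would adapt to your own network:

```
# /etc/haproxy/haproxy.cfg (same on lb01 and lb02)
frontend k3s-api
    bind *:6443
    mode tcp
    default_backend k3s-masters

backend k3s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-01 192.168.1.11:6443 check
    server master-02 192.168.1.12:6443 check
    server master-03 192.168.1.13:6443 check

# /etc/keepalived/keepalived.conf (use state BACKUP and a lower priority on lb02)
vrrp_instance K3S_API {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 200
    advert_int 1
    virtual_ipaddress {
        192.168.1.200/24    # the reserved virtual IP
    }
}
```

Your kubeconfig's server entry would then point at https://192.168.1.200:6443.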
This is a good video that explains how to do it with a Node.js server, but the concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP, and I prefer to do this with MetalLB.
MetalLB gives you a Service of type LoadBalancer with an external IP.
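For example, a Service of type LoadBalancer for a hypothetical app (the names and ports here are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80          # port exposed on the external IP
      targetPort: 8080  # port the pods listen on
```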
Add this flag to k3s when creating the initial master node:
https://metallb.universe.tf/configuration/k3s/
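As far as I remember, that page tells you to disable k3s's built-in service load balancer (Klipper) so it does not conflict with MetalLB, something like:

```sh
curl -sfL https://get.k3s.io | sh -s - server --disable servicelb
```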
Configure MetalLB:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the YAML below. In this example you have 11 IPs in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml:
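A sketch of that ConfigMap using the address range mentioned above (this is the older ConfigMap-based configuration; check the linked page for the format matching your MetalLB version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
```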
Install MetalLB with these YAML files:
Source - https://metallb.universe.tf/installation/#installation-by-manifest
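At the time of writing, the manifest install looked roughly like this (the version in the URLs is an example; take the current one from the page above), followed by applying the config from the previous step:

```sh
# install MetalLB itself (example version; see the linked installation page)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

# then apply the layer 2 address pool config
kubectl apply -f metallb-cm.yaml
```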
Ingress
Your ingress will need a Service of type LoadBalancer; use its external IP as the external IP you point your DNS and port-forward at.
Run kubectl get service -A, look for your ingress service, and check that it has an external IP and does not say pending.
I will do my best to answer any of your follow-up questions. Good luck!