nginx-ingress stopped working despite Service + Pod being ready ("does not have any active Endpoint.")
I have transferred my microk8s setup to a new server and found that the once-working ingress configuration of my trial setup stopped working.
I am running this minimal whoami-app:
apiVersion: apps/v1
kind: Deployment
metadata:
name: whoami
namespace: default
labels:
app: whoami
spec:
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: containous/whoami
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: whoami
namespace: default
spec:
selector:
app: whoami
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: whoami
namespace: default
annotations:
kubernetes.io/ingress.class: public
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: whoami
port:
number: 80
The pod is up and running and the service exposes it properly, but the ingress is not working:
kubectl get services whoami
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
whoami ClusterIP 10.152.183.184 <none> 80/TCP 26m
curl 10.152.183.184
Hostname: whoami-567b85d54d-qbbd5
IP: 127.0.0.1
IP: ::1
IP: 10.1.76.7
IP: fe80::e850:aaff:fe72:91c4
RemoteAddr: 192.168.0.102:21910
GET / HTTP/1.1
Host: 10.152.183.184
User-Agent: curl/7.68.0
Accept: */*
kubectl get ingress whoami
NAME CLASS HOSTS ADDRESS PORTS AGE
whoami <none> * 127.0.0.1 80 28m
The nginx-ingress-controller log shows these entries:
controller.go:1076] Service "default/whoami" does not have any active Endpoint.
But again, accessing through the clusterIP works, so both the Pod and the Service are doing their job.
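A quick way to cross-check the controller's complaint is to compare what the Endpoints object actually lists against the live pod IP (a diagnostic sketch, assuming the `default` namespace and the `app: whoami` label from the manifests above):

```shell
# What the endpoints controller has recorded for the service;
# this is what the ingress controller reads.
kubectl get endpoints whoami -n default

# The pod's actual IP; if the ENDPOINTS column above does not
# contain this address, the "no active Endpoint" warning follows.
kubectl get pods -n default -l app=whoami -o wide
```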
I don't really know how it happened, but the endpoint was not matching the pod IP.
I deleted the endpoint manually using
kubectl delete endpoints whoami
and it got recreated with the correct IP; now the ingress seems to work.
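For anyone hitting the same symptom, the fix above can be sketched end to end (namespace and label assumed from the question's manifests):

```shell
# Delete the stale Endpoints object; the endpoints controller
# recreates it immediately from the service's current pod selection.
kubectl delete endpoints whoami -n default

# Confirm the recreated object now lists the running pod's IP.
kubectl get endpoints whoami -n default
kubectl get pods -n default -l app=whoami -o wide
```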
I am glad to hear you managed to get it working.
I tried to find the reason for that behavior, but unfortunately I haven't found it yet. This link has some solutions to similar warnings; maybe it will help someone.
The first one is a situation where the ingress class of the ingress controller does not match the ingress class in the ingress resource manifest used for the services.
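For that class-mismatch case, note that on `networking.k8s.io/v1` the class is expressed via `spec.ingressClassName` rather than the deprecated `kubernetes.io/ingress.class` annotation used in the question. A sketch of the rewritten resource, assuming the MicroK8s ingress addon registers the class `public`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # Must match an IngressClass served by the controller
  # (check with: kubectl get ingressclass)
  ingressClassName: public
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: whoami
            port:
              number: 80
```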
The second solution is including the id directive in the service selector:
See also these questions on StackOverflow: