Service not reachable through the Ingress, only via nodeport:ip

Posted 2025-02-06 20:31:08


Good afternoon,

I'd like to ask: I'm a "little" bit upset regarding Ingress and its traffic flow.

I created a test nginx Deployment with a Service and an Ingress (in Titanium Cloud).

I have no direct browser connection, so I'm using tunneling to get access via a browser and a SOCKS5 proxy in Firefox.
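
(For context, such a tunnel is typically an SSH dynamic port forward plus a SOCKS entry in the browser; the host name and local port below are placeholders, not the actual values used here.)

# open a local SOCKS5 listener that forwards connections through the jump host
ssh -D 1080 user@jump-host
# then point Firefox at a SOCKS v5 proxy on 127.0.0.1:1080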

deployment:

k describe  deployments.apps dpl-nginx
Name:                   dpl-nginx
Namespace:              xxx
CreationTimestamp:      Thu, 09 Jun 2022 07:20:48 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
                        field.cattle.io/publicEndpoints:
                          [{"port":32506,"protocol":"TCP","serviceName":"xxx:xxx-svc","allNodes":true},{"addresses":["172.xx.xx.117","172.xx.xx.131","172.xx.x...
Selector:               app=xxx-nginx
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=xxx-nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html/ from nginx-index-file (rw)
  Volumes:
   nginx-index-file:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      index-html-configmap
    Optional:  false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   xxx-dpl-nginx-6ff8bcd665 (2/2 replicas created)
Events:          <none>
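
For reference, a manifest along the following lines would produce a Deployment like the one described above; it is reconstructed from the describe output (the ConfigMap content is a placeholder), not the author's original YAML:

apiVersion: v1
kind: ConfigMap
metadata:
  name: index-html-configmap
  namespace: xxx
data:
  index.html: |
    <html><body>custom index page</body></html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpl-nginx
  namespace: xxx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: xxx-nginx
  template:
    metadata:
      labels:
        app: xxx-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80              # matches the Service targetPort below
        volumeMounts:
        - name: nginx-index-file
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: nginx-index-file
        configMap:
          name: index-html-configmap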

service:

Name:                     xxx-svc
Namespace:                xxx
Labels:                   <none>
Annotations:              field.cattle.io/publicEndpoints: [{"port":32506,"protocol":"TCP","serviceName":"xxx:xxx-svc","allNodes":true}]
Selector:                 app=xxx-nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.95.33
IPs:                      10.43.95.33
Port:                     http-internal  888/TCP
TargetPort:               80/TCP
NodePort:                 http-internal  32506/TCP
Endpoints:                10.42.0.178:80,10.42.0.179:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
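
The Service, again reconstructed from the describe output rather than taken from the author's manifest, would look roughly like this:

apiVersion: v1
kind: Service
metadata:
  name: xxx-svc
  namespace: xxx
spec:
  type: NodePort
  selector:
    app: xxx-nginx          # selects the nginx pods from the Deployment
  ports:
  - name: http-internal
    port: 888               # Service port the Ingress backend points at
    targetPort: 80          # container port of the nginx pods
    nodePort: 32506         # the port that works via nodeIP:port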

ingress:

Name:             test-ingress
Namespace:        xxx
Address:          172.xx.xx.117,172.xx.xx.131,172.xx.xx.132
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                   Path  Backends
  ----                   ----  --------
  test.xxx.io
                         /   xxx-svc:888 (10.42.0.178:80,10.42.0.179:80)
Annotations:             field.cattle.io/publicEndpoints:
                           [{"addresses":["172.xx.xx.117","172.xx.xx.131","172.xx.xx.132"],"port":80,"protocol":"HTTP","serviceName":"xxx:xxx-svc","ingressName...
                         nginx.ingress.kubernetes.io/proxy-read-timeout: 3600
                         nginx.ingress.kubernetes.io/rewrite-target: /
                         nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
  Type    Reason  Age                     From                      Message
  ----    ------  ----                    ----                      -------
  Normal  Sync    9m34s (x37 over 3d21h)  nginx-ingress-controller  Scheduled for sync
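
And the Ingress, sketched from the describe output (pathType is assumed, as it is not shown above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: xxx
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: test.xxx.io        # routing is host-based, so the Host header matters
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: xxx-svc
            port:
              number: 888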

When I try curl/wget against the host name or a node IP directly from the cluster, both options work and I get my custom index:

 wget test.xxx.io --no-proxy  --no-check-certificate                                                                                                   
--2022-06-13 10:35:12--  http://test.xxx.io/     
Resolving test.xxx.io (test.xxx.io)... 172.xx.xx.132, 172.xx.xx.131, 172.xx.xx.117
Connecting to test.xxx.io (test.xxx.io)|172.xx.xx.132|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 197 [text/html]
Saving to: ‘index.html.1’

index.html.1                                 100%[===========================================================================================>]     197  --.-KB/s    in 0s 

curl:

curl    test.xxx.io   --noproxy '*'     -I
HTTP/1.1 200 OK
Date: Mon, 13 Jun 2022 10:36:31 GMT
Content-Type: text/html
Content-Length: 197
Connection: keep-alive
Last-Modified: Thu, 09 Jun 2022 07:20:49 GMT
ETag: "62a19f51-c5"
Accept-Ranges: bytes
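
A useful extra check here (not in the original post) is to hit one node IP on port 80 directly and supply the Host header by hand; this exercises the host-based Ingress rule without depending on DNS or a proxy:

# node IP taken from the Ingress address list above
curl -I -H "Host: test.xxx.io" http://172.xx.xx.117/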

nslookup:

nslookup, dig and ping from the cluster work as well:
nslookup test.xxx.io
Server:         127.0.0.53   
Address:        127.0.0.53#53

Name:   test.xxx.io
Address: 172.xx.xx.131       
Name:   test.xxx.io
Address: 172.xx.xx.132       
Name:   test.xxx.io
Address: 172.xx.xx.117

dig:

dig test.xxx.io +noall +answer
test.xxx.io.  22      IN      A       172.xx.xx.117
test.xxx.io.  22      IN      A       172.xx.xx.132
test.xxx.io.  22      IN      A       172.xx.xx.131

ping:

ping test.xxx.io
PING test.xxx.io (172.xx.xx.132) 56(84) bytes of data.
64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=2 ttl=64 time=0.042 ms

Curl from the ingress-nginx pod also works fine...
In Firefox via nodeIP:port I can get the index, but via the host name it is not possible.

So it seems that the Ingress is forwarding traffic to the pod; is this issue only something to do with the browser?

Thanks for any advice.


Comments (1)

晨与橙与城 2025-02-13 20:31:08


So, for clarification: I'm using tunneling to reach the Ingress from my local PC via a browser with a SOCKS5 proxy:

ssh  [email protected] -D 1090

The solution is trivial: add

172.xx.xx.117   test.xxx.io

to /etc/hosts on the jump server.
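
Why this works: with an SSH dynamic forward, connections made by host name are resolved on the jump server, so an /etc/hosts entry there is enough, provided the client sends the host name to the proxy. In Firefox that means enabling "Proxy DNS when using SOCKS v5"; the curl equivalent is the socks5h:// scheme (local port taken from the ssh command above):

# resolve test.xxx.io on the jump server instead of locally
curl -I --proxy socks5h://127.0.0.1:1090 http://test.xxx.io/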
