Kong Helm proxy ingress controller 400 Bad Request

Posted 2025-02-04 01:59:23


While installing Kong via the helm chart, I get an error any time I try to enable the ingress controller for the proxy. I am turning on the ingress controller so that it can request a cert from cert manager (which is functioning properly). With the ingress controller off, everything works as expected. With it on, I get a 400 Bad Request The plain HTTP request was sent to HTTPS port error.

I tried:

  1. Changing the container port (and overrideServiceTargetPort) from 8443 to 8000, 80, 443, and 8443 in the tls section. While using 8000 I received Error code: SSL_ERROR_RX_RECORD_TOO_LONG using https, or a bad request error using http. Using port 443 in overrideServiceTargetPort did allow me to connect with http, but https resulted in "We can't connect to the server at XYZ".

  2. Adding the "konghq.com/protocol":"https" annotation to the proxy. This results in a bad request error for both http and https.

  3. Turning off http in the proxy.

  4. Turning off TLS in the ingress controller.

  5. Some changes to the admin api based on errors I was seeing in the proxy logs. Right now the proxy logs just show the 400s without any errors.

  6. Changing node ports

  7. Manually changing the service port in the ingress resource and changing the path to /?(.*)

I think the issue is that the ingress controller is terminating the TLS connection and passing an unsecured connection to the Kong proxy, just on the wrong port. This is fine, but I can’t seem to find the correct port in the proxy to pass the connection to.
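The protocol-matching attempts above can be sketched as a Helm values fragment. This is illustrative only, based on the chart layout shown later in this post; it reflects what was tried (telling the controller the proxy Service speaks HTTPS so it does not forward plaintext to the TLS listen), not a confirmed fix.

```yaml
proxy:
  # Tell the Kong ingress controller that the upstream proxy Service
  # expects HTTPS, so it does not send plaintext to the 8443 TLS listen.
  annotations:
    konghq.com/protocol: "https"
  tls:
    enabled: true
    servicePort: 443
    containerPort: 8443
  ingress:
    enabled: true
    ingressClassName: kong
    tls: kong-proxy-cert
    hostname: kong-test.domain
```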

One further oddity is that sometimes, immediately after applying changes to the helm chart, there is a brief second where, if I navigate to Kong over https before everything has loaded, it will actually connect properly. All subsequent tries fail, though, and I can't reliably reproduce the connection this way.

This is using GKE, so the AWS LB annotations don't apply here (and I can't find anything similar).

Kong: 2.8

Ingress:

Name:             kong-dev-kong-proxy
Namespace:        custom-namespace
Address:          123.123.123.123
Default backend:  default-http-backend:80 (192.168.0.3:8080)
TLS:
  kong-proxy-cert terminates kong-test.domain
Rules:
  Host                    Path  Backends
  ----                    ----  --------
  kong-test.domain  
                          /?(.*)   kong-dev-kong-proxy:443 (192.168.0.125:8443)
Annotations:              cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
                          kubernetes.io/tls-acme: true
                          meta.helm.sh/release-name: kong-dev
                          meta.helm.sh/release-namespace: custom-namespace
Events:                   <none>

Helm:

proxy:
  # Enable creating a Kubernetes service for the proxy
  enabled: true
  type: LoadBalancer
  # To specify annotations or labels for the proxy service, add them to the respective
  # "annotations" or "labels" dictionaries below.
  annotations: #{"konghq.com/protocol":"https"}
  # If terminating TLS at the ELB, the following annotations can be used
  #{"service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "*",}
  # "service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled": "true",
  # "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn:aws:acm:REGION:ACCOUNT:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX",
  # "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "kong-proxy-tls",
  # "service.beta.kubernetes.io/aws-load-balancer-type": "elb"
  labels:
    enable-metrics: "true"

  http:
    # Enable plaintext HTTP listen for the proxy
    enabled: true
    servicePort: 80
    containerPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []

  tls:
    # Enable HTTPS listen for the proxy
    enabled: true
    servicePort: 443
    containerPort: 8443
    # Set a target port for the TLS port in proxy service
    #overrideServiceTargetPort: 8000
    # Set a nodePort which is available if service type is NodePort
    #nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
    - http2

  # Define stream (TCP) listen
  # To enable, remove "[]", uncomment the section below, and select your desired
  # ports and parameters. Listens are dynamically named after their servicePort,
  # e.g. "stream-9000" for the below.
  # Note: although you can select the protocol here, you cannot set UDP if you
  # use a LoadBalancer Service due to limitations in current Kubernetes versions.
  # To proxy both TCP and UDP with LoadBalancers, you must enable the udpProxy Service
  # in the next section and place all UDP stream listen configuration under it.
  stream: []
    #   # Set the container (internal) and service (external) ports for this listen.
    #   # These values should normally be the same. If your environment requires they
    #   # differ, note that Kong will match routes based on the containerPort only.
    # - containerPort: 9000
    #   servicePort: 9000
    #   protocol: TCP
    #   # Optionally set a static nodePort if the service type is NodePort
    #   # nodePort: 32080
    #   # Additional listen parameters, e.g. "ssl", "reuseport", "backlog=16384"
    #   # "ssl" is required for SNI-based routes. It is not supported on versions <2.0
    #   parameters: []

  # Kong proxy ingress settings.
  # Note: You need this only if you are using another Ingress Controller
  # to expose Kong outside the k8s cluster.
  ingress:
    # Enable/disable exposure using ingress.
    enabled: true
    ingressClassName: kong
    # TLS secret name.
    tls: kong-proxy-cert
    # Ingress hostname.
    hostname: kong-test.domain
    # Map of ingress annotations.
    annotations: {"kubernetes.io/tls-acme": "true", "cert-manager.io/cluster-issuer": "letsencrypt-cluster-issuer"}
    # Ingress path.
    path: /
    # Each path in an Ingress is required to have a corresponding path type. (ImplementationSpecific/Exact/Prefix)
    pathType: ImplementationSpecific

  # Optionally specify a static load balancer IP.
  # loadBalancerIP:

Update:

Every time I match the protocols, by either changing the backend port in the ingress controller to 80 or by setting the "konghq.com/protocol":"https" annotation, I get past the initial "plain HTTP request was sent to HTTPS port" error, but then the proxy returns a standard 400 bad request error. The strange thing is that I only get the new 400 error when trying to use the hostname specified in the ingress. If I curl the proxy service name (as specified in the backend of the ingress) directly from a pod, or even the external IP for the load balancer, I am able to get a typical 404 response from the proxy, but the 400 bad request error still occurs any time I supply a hostname with the request (only when the ingress controller for the proxy is on, and only while using the "host" given to the ingress controller). Doing a curl directly to the proxy service name from an internal pod works, but it gives me a 400 bad request error as soon as I add the -H option and supply the ingress hostname.
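The diagnostic sequence from this update can be summarized with a few curl invocations. The service name, namespace, hostname, and IP below are the placeholders used elsewhere in this post, not real endpoints, so these commands only run against a cluster configured as described:

```shell
# From a pod inside the cluster: hit the proxy Service directly.
# This returns Kong's usual 404 "no Route matched" response.
curl -i http://kong-dev-kong-proxy.custom-namespace.svc.cluster.local/

# Same request, but supplying the Ingress hostname via the Host header.
# With the proxy ingress enabled, this is where the 400 appears.
curl -i -H "Host: kong-test.domain" \
  http://kong-dev-kong-proxy.custom-namespace.svc.cluster.local/

# Against the load balancer's external IP the pattern is the same:
# no Host header -> 404 from Kong; the Ingress hostname -> 400.
curl -i http://123.123.123.123/
curl -i -H "Host: kong-test.domain" http://123.123.123.123/
```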

Comments (1)

冷血 2025-02-11 01:59:23


I was able to get around this problem by adding this annotation to the proxy ingress annotation section.

"konghq.com/preserve-host": "false"

Making the change manually in the database didn't work. It was only once I updated the helm chart with the above annotation that everything started working.
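In the values layout shown in the question, this annotation would presumably go under proxy.ingress.annotations alongside the cert-manager annotations already there. A sketch, assuming the same release shown above:

```yaml
proxy:
  ingress:
    enabled: true
    ingressClassName: kong
    tls: kong-proxy-cert
    hostname: kong-test.domain
    annotations:
      kubernetes.io/tls-acme: "true"
      cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
      # The fix from this answer: stop preserving the Ingress hostname
      # upstream, so Kong's own route matching works again.
      konghq.com/preserve-host: "false"
```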
