How do I fix failed calling webhook "webhook.cert-manager.io"?



I'm trying to set up a K3s cluster. When I had a single master and agent setup, cert-manager had no issues. Now I'm trying a two-master setup with embedded etcd. I opened TCP ports 6443 and 2379-2380 on both VMs and did the following:

VM1: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --cluster-init
VM2: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --server https://MASTER_IP:6443
# k3s kubectl get nodes
NAME  STATUS   ROLES                       AGE    VERSION
VM1   Ready    control-plane,etcd,master   130m   v1.22.7+k3s1
VM2   Ready    control-plane,etcd,master   128m   v1.22.7+k3s1

Installing cert-manager works fine:

# k3s kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
# k3s kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS
cert-manager-b4d6fd99b-c6fpc               1/1     Running
cert-manager-cainjector-74bfccdfdf-gtmrd   1/1     Running
cert-manager-webhook-65b766b5f8-brb76      1/1     Running

My manifest has the following definition:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - selector: {}
      http01:
        ingress: {}

Which results in the following error:

# k3s kubectl apply -f manifest.yaml
Error from server (InternalError): error when creating "manifest.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded

I tried disabling both firewalls, waiting a day, and resetting and re-installing, but the error persists. Google hasn't been much help either. The little info I can find mostly goes over my head, and no tutorial seems to do any extra steps.
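For anyone reproducing this, the timeout can be confirmed from inside the cluster by calling the webhook Service directly from a throwaway pod (the pod name and curl image below are illustrative; any HTTP/TLS response instead of a timeout would point away from a pod-to-pod networking problem):

# k3s kubectl run webhook-probe --rm -it --image=curlimages/curl --restart=Never --command -- curl -ksv -m 5 https://cert-manager-webhook.cert-manager.svc:443/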

8 Answers

醉酒的小男人 2025-02-01 05:43:12


I did this, and it worked for me.

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.8.0 \
  --set webhook.securePort=10260

source: https://hackmd.io/@maelvls/debug-cert-manager-webhook
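To verify the override took effect, you can check the webhook Deployment's arguments for the new port (assuming the chart's default resource names):

# kubectl -n cert-manager get deploy cert-manager-webhook -o yaml | grep secure-port

Per the linked debugging guide, moving the webhook off port 10250 matters because that port can collide with the kubelet's port when the webhook pod shares the host network.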

故事还在继续 2025-02-01 05:43:11


A good starting point for troubleshooting issues with the webhook can be found in the docs; for example, there is a section for problems on GKE private clusters.

In my case, however, this didn't really solve the problem. For me the issue was that, when I played around with cert-manager, I happened to install and uninstall it multiple times. It turned out that just removing the namespace, e.g. with kubectl delete namespace cert-manager, didn't remove the webhooks and other non-obvious resources.

Following the official guide for uninstalling cert-manager and applying the manifests again solved the issue.
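If you suspect the same leftover-resource problem, the cluster-scoped pieces that a namespace delete leaves behind can be listed with something like:

# kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep cert-manager
# kubectl get apiservices | grep cert-manager
# kubectl get crds | grep cert-manager.io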

落墨 2025-02-01 05:43:11


Try to specify the proper ingress class name in your Cluster Issuer, like this:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx

Also, make sure that you have the cert-manager annotation and the TLS secret name specified in your Ingress, like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
...
spec:
  tls:
    - hosts:
      - domain.com
      secretName: letsencrypt-account-key
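Note that newer cert-manager releases also accept an ingressClassName field in the HTTP-01 solver in place of class; check the solver documentation for the version you are running.
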
初相遇 2025-02-01 05:43:11


There are two ways to do the installation, and each should be deleted/uninstalled the same way it was installed.

If you installed using this link:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/vX.Y.Z/cert-manager.yaml

Delete using this:
kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/vX.Y.Z/cert-manager.yaml

If you installed using this link:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/vX.Y.Z/cert-manager.yaml

Delete using this:
kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/vX.Y.Z/cert-manager.yaml

If resources are stuck in a Terminating state, run:
kubectl delete apiservice v1beta1.webhook.cert-manager.io
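To see which APIService is the stale one before deleting, a quick listing helps (an entry whose AVAILABLE column is False usually points at the leftover webhook):
kubectl get apiservice | grep cert-manager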

source: https://cert-manager.io/v1.2-docs/installation/uninstall/kubernetes/

Personally, I go as far as checking the node tied to the certificate and recycling or deleting it, because I have set the pool to autoscale.

Then follow the instructions on this page for a successful installation:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes

不如归去 2025-02-01 05:43:11


I had the same problem while installing Eclipse Che on minikube. I just reran the command and this time the certs worked!

Worth trying once.

紫南 2025-02-01 05:43:11


I encountered a similar issue, which generated the following error message:

Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded

Upon investigation, I determined that the root cause was a DNS problem within the cluster. To confirm this, I initiated an Alpine pod and executed a ping command, which resulted in a failure when attempting to ping "google.com."
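That check can be reproduced with a throwaway pod along these lines (pod name and image are illustrative), and resolving the webhook Service itself makes the DNS test more targeted:

# kubectl run dns-test --rm -it --image=alpine:3.19 --restart=Never -- ping -c 3 google.com
# kubectl run dns-test --rm -it --image=alpine:3.19 --restart=Never -- nslookup cert-manager-webhook.cert-manager.svc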

To resolve the issue, I needed to correct the CoreDNS configuration map. In my case, the problem stemmed from an incorrect cluster domain, which was originally set to <wrong-domain>.at.

Here's how I fixed it:

I edited the CoreDNS configuration map to replace <wrong-domain>.at with the correct cluster domain.

# kubectl -n kube-system get configmaps/coredns -o yaml

apiVersion: v1
data:
  Corefile: |-
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        log . {
            class error
        }
        prometheus :9153

        kubernetes <wrong-domain>.at in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap

Additionally, it's worth mentioning that the issue might also be related to IPv6. If you're using IPv6, ensure that the network where the cluster is deployed supports IPv6 as well.

情痴 2025-02-01 05:43:11


If you encounter this problem and you are running on AWS EKS, you can solve it by setting hostNetwork to true, like here, and changing the default port from 10250 to something else (I used 10255); just make sure you replace all instances. Good luck.
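For reference, with the Helm chart those two changes might look like this (webhook.hostNetwork and webhook.securePort are the relevant chart values; double-check them against your chart version):

helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set webhook.hostNetwork=true \
  --set webhook.securePort=10255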

很酷又爱笑 2025-02-01 05:43:11


You may have a network policy issue! Do you have any cluster restrictions? If yes, you may need to add something like this:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-cert-manager-resolver-reverse
  namespace: cert-manager
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: cert-manager
  egress:
    - namespaceSelector:
        matchLabels:
          acme.cert-manager.io/http01-solver: "true"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-cert-manager-resolver
  namespace: "{{namespace}}"
spec:
  podSelector:
    matchLabels:
      acme.cert-manager.io/http01-solver: "true"
  ingress:
    - ports:
      - port: 8089
        protocol: TCP
    - namespaceSelector:
        matchLabels:
          app.kubernetes.io/instance: cert-manager