Why does an application deployed on Kubernetes end up in CrashLoopBackOff?

Published 2022-09-06 13:16:33 · 4685 characters · 26 views · 0 comments

I have a Kubernetes cluster with 3 hosts: 1 master and 2 nodes.
The Kubernetes version is 1.7.
The application is deployed with something like this:

deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: server
  labels:
    app: server
spec:
  ports:
    - port: 80
  selector:
    app: server
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: server
  labels:
    app: server
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: server
        tier: frontend
    spec:
      containers:
      - image: 192.168.33.13/myapp/server
        name: server
        ports:
        - containerPort: 3000
          name: server
        imagePullPolicy: Always
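Before debugging the crash itself, the manifest above can be sanity-checked without touching the cluster. A minimal sketch, assuming the file is saved as deployment.yaml as shown:

```shell
# Parse the manifest and print what would be created,
# without actually creating anything on the cluster.
kubectl create -f deployment.yaml --dry-run -o yaml
```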

192.168.33.13 is the image registry, set up with Harbor.
The Harbor registry is reachable from the Kubernetes cluster.
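Since the events below include an intermittent "connection refused" when pulling from the registry, it is worth confirming reachability from each node rather than just from the master. A sketch (registry address taken from above; since Harbor is served over plain HTTP here, each node's Docker daemon would also need 192.168.33.13 listed under insecure-registries):

```shell
# Run on each node: check that the registry's v2 API answers on port 80.
curl -fsS http://192.168.33.13/v2/ && echo "registry reachable"

# Also confirm the node's own Docker daemon can pull the image directly.
docker pull 192.168.33.13/myapp/server
```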

After running the deployment (kubectl create -f deployment.yaml), the image appears to be pulled to the cluster and the containers run successfully the first time, but once the containers restart they keep failing:

$ kubectl get pods
NAME                                                         READY     STATUS             RESTARTS   AGE
server-962161505-kw3jf                                       0/1       CrashLoopBackOff   6          9m
server-962161505-lxcfb                                       0/1       CrashLoopBackOff   6          9m
server-962161505-mbnkn                                       0/1       CrashLoopBackOff   6          9m
$ kubectl describe pod server-962161505-kw3jf
Name:           server-962161505-kw3jf
Namespace:      default
Node:           node1/192.168.33.11
Start Time:     Mon, 13 Nov 2017 17:45:47 +0900
Labels:         app=server
                pod-template-hash=962161505
                tier=backend
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"server-962161505","uid":"0acadda6-c84f-11e7-84b8-02178ad2db9a","...
Status:         Running
IP:             10.42.254.104
Created By:     ReplicaSet/server-962161505
Controlled By:  ReplicaSet/server-962161505
Containers:
  server:
    Container ID:   docker://29eca3d9a20c60c83314101b036d742c5868c3bf25a39f28c5e4208bcdbfcede
    Image:          192.168.33.13/myapp/server
    Image ID:       docker-pullable://192.168.33.13/myapp/server@sha256:0e056e3ff5b1f1084e0946bc4211d33c6f48bc06dba7e07340c1609bbd5513d6
    Port:           3000/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 14 Nov 2017 10:13:12 +0900
      Finished:     Tue, 14 Nov 2017 10:13:13 +0900
    Ready:          False
    Restart Count:  26
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-csjqn (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-csjqn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-csjqn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                 From            Message
  ----     ------                 ----                ----            -------
  Normal   SuccessfulMountVolume  22m                 kubelet, node1  MountVolume.SetUp succeeded for volume "default-token-csjqn"
  Normal   SandboxChanged         22m                 kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Warning  Failed                 20m (x3 over 21m)   kubelet, node1  Failed to pull image "192.168.33.13/myapp/server": rpc error: code = 2 desc = Error response from daemon: {"message":"Get http://192.168.33.13/v2/: dial tcp 192.168.33.13:80: getsockopt: connection refused"}
  Normal   BackOff                20m (x5 over 21m)   kubelet, node1  Back-off pulling image "192.168.33.13/myapp/server"
  Normal   Pulling                4m (x7 over 21m)    kubelet, node1  pulling image "192.168.33.13/myapp/server"
  Normal   Pulled                 4m (x4 over 20m)    kubelet, node1  Successfully pulled image "192.168.33.13/myapp/server"
  Normal   Created                4m (x4 over 20m)    kubelet, node1  Created container
  Normal   Started                4m (x4 over 20m)    kubelet, node1  Started container
  Warning  FailedSync             10s (x99 over 21m)  kubelet, node1  Error syncing pod
  Warning  BackOff                10s (x91 over 20m)  kubelet, node1  Back-off restarting failed container
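Two separate things stand out in the output above: the intermittent image-pull failures, and, more importantly, Last State showing Reason: Completed with Exit Code 0 after roughly one second. That means the container's main process exits cleanly almost immediately, so the kubelet keeps restarting it. A diagnostic sketch (pod name taken from the listing above):

```shell
# Print the output of the previous (crashed) container instance.
kubectl logs server-962161505-kw3jf --previous

# Exit code 0 after ~1 second suggests the entrypoint runs to
# completion instead of staying in the foreground. On a node that
# already has the image, check what it actually executes:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' \
  192.168.33.13/myapp/server
```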

Pushing the image to Docker Hub instead makes no difference.


Comments (1)

遇到 2022-09-13 13:16:33

Has the problem been solved?
