How to fix metadata.resourceVersion: Invalid value: "": must be specified for an update


So I have this project that I already deployed in GKE, and I am trying to set up CI/CD for it from GitHub Actions. So I added the workflow file, which contains:

name: Build and Deploy to GKE

on:
  push:
    branches:
      - main

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  GKE_CLUSTER: ${{ secrets.GKE_CLUSTER }}    # Add your cluster name here.
  GKE_ZONE: ${{ secrets.GKE_ZONE }}   # Add your cluster zone here.
  DEPLOYMENT_NAME: ems-app # Add your deployment name here.
  IMAGE: ciputra-ems-backend

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    environment: production

    steps:
    - name: Checkout
      uses: actions/checkout@v2

    # Setup gcloud CLI
    - uses: google-github-actions/setup-gcloud@94337306dda8180d967a56932ceb4ddcf01edae7
      with:
        service_account_key: ${{ secrets.GKE_SA_KEY }}
        project_id: ${{ secrets.GKE_PROJECT }}

    # Configure Docker to use the gcloud command-line tool as a credential
    # helper for authentication
    - run: |-
        gcloud --quiet auth configure-docker

    # Get the GKE credentials so we can deploy to the cluster
    - uses: google-github-actions/get-gke-credentials@fb08709ba27618c31c09e014e1d8364b02e5042e
      with:
        cluster_name: ${{ env.GKE_CLUSTER }}
        location: ${{ env.GKE_ZONE }}
        credentials: ${{ secrets.GKE_SA_KEY }}

    # Build the Docker image
    - name: Build
      run: |-
        docker build \
          --tag "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA" \
          --build-arg GITHUB_SHA="$GITHUB_SHA" \
          --build-arg GITHUB_REF="$GITHUB_REF" \
          .

    # Push the Docker image to Google Container Registry
    - name: Publish
      run: |-
        docker push "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA"

    # Set up kustomize
    - name: Set up Kustomize
      run: |-
        curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64
        chmod u+x ./kustomize

    # Deploy the Docker image to the GKE cluster
    - name: Deploy
      run: |-
        ./kustomize edit set image LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG=$GAR_LOCATION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY/$IMAGE:$GITHUB_SHA
        ./kustomize build . | kubectl apply -k ./
        kubectl rollout status deployment/$DEPLOYMENT_NAME
        kubectl get services -o wide

But when the workflow gets to the deploy step, it shows an error:

The Service "ems-app-service" is invalid: metadata.resourceVersion: Invalid value: "": must be specified for an update

Now, from what I have found, this is actually not true, because the resourceVersion is supposed to change with every update, so I just removed it.

Here is my kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - service.yaml
  - deployment.yaml

My deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    app: ems-app
  name: ems-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ems-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ems-app
    spec:
      containers:
      - image: gcr.io/ciputra-nusantara/ems@sha256:70c34c5122039cb7fa877fa440fc4f98b4f037e06c2e0b4be549c4c992bcc86c
        imagePullPolicy: IfNotPresent
        name: ems-sha256-1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

And my service.yaml:

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: ems-app
  name: ems-app-service
  namespace: default
spec:
  clusterIP: 10.88.10.114
  clusterIPs:
  - 10.88.10.114
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 30261
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ems-app
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 34.143.255.159

Comments (3)

一枫情书 2025-01-23 18:33:36

As the title of this question is more Kubernetes-related than GCP-related, I will answer, since I had the same problem using AWS EKS.

metadata.resourceVersion: Invalid value: 0x0: must be specified for an update is an error that may appear when using kubectl apply.

kubectl apply performs a three-way merge between your local file, the live Kubernetes object manifest, and the kubectl.kubernetes.io/last-applied-configuration annotation stored in that live object manifest.

So, for some reason, the resourceVersion value managed to get written into your last-applied-configuration, probably because someone exported the live manifest to a file, modified it, and applied it back again.

When you try to apply a new local file that doesn't have that value (and should not have it), but the value is present in the last-applied-configuration, kubectl decides the field should be removed from the live manifest and explicitly sends it in the subsequent patch operation as resourceVersion: null, which should get rid of it. But that doesn't work: the local file breaks the rules (in ways beyond my knowledge as of now) and becomes invalid.
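
For illustration only, here is a hypothetical fragment of a live Service whose last-applied-configuration carries the stray field (the names match the question's manifests; the resourceVersion value is made up):

metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"name":"ems-app-service","namespace":"default","resourceVersion":"12345"},"spec":{"type":"LoadBalancer"}}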

As feichashao mentions, the way to solve it is to delete the last-applied-configuration annotation and apply your local file again.
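
A minimal sketch of that fix for the Service in the question (the trailing dash in kubectl annotate removes the annotation; the name and namespace come from the manifests above):

kubectl -n default annotate service ems-app-service \
  kubectl.kubernetes.io/last-applied-configuration-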

Once you have solved it, your kubectl apply output will look like:

Warning: resource <your_resource> is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

And your live manifests will be updated.

南笙 2025-01-23 18:33:36

From this GitHub issue I found a better solution to my problem:

It is solved by getting the existing resourceVersion and adding it to the object before calling Update; the API server requires resourceVersion on updates for optimistic concurrency control.

    // dr is assumed to be a dynamic.ResourceInterface for the target resource,
    // and obj the *unstructured.Unstructured object being updated.
    getObj, err := dr.Get(obj.GetName(), metav1.GetOptions{})
    if errors.IsNotFound(err) {
        // This doesn't ever happen, even if the object is already deleted or not found.
        log.Printf("%v not found", obj.GetName())
        return nil, nil
    }

    if err != nil {
        return nil, err
    }

    // Copy the live object's resourceVersion onto the object before updating.
    obj.SetResourceVersion(getObj.GetResourceVersion())

    response, err := dr.Update(obj, metav1.UpdateOptions{})
    if err != nil {
        return nil, err
    }
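
As a side note on this pattern: an Update that carries a resourceVersion can still fail with a conflict if the object changes between the Get and the Update. Below is a minimal sketch (my own, not from the linked issue) that wraps the same read-modify-write in client-go's retry helper, keeping the context-free dynamic-client signatures used above; dr is assumed to be a dynamic.ResourceInterface and obj an *unstructured.Unstructured:

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/util/retry"
    )

    // updateWithRetry refreshes resourceVersion from the live object and
    // retries the Update whenever the API server reports a conflict.
    func updateWithRetry(dr dynamic.ResourceInterface, obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
        var response *unstructured.Unstructured
        err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
            getObj, getErr := dr.Get(obj.GetName(), metav1.GetOptions{})
            if getErr != nil {
                return getErr
            }
            obj.SetResourceVersion(getObj.GetResourceVersion())
            var updateErr error
            response, updateErr = dr.Update(obj, metav1.UpdateOptions{})
            return updateErr
        })
        return response, err
    }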
临风闻羌笛 2025-01-23 18:33:36

In case anyone is still having this problem: I may not be able to help if you still want to use GKE, but you can try the answer from @ChandraKiranPasumarti. Personally, my senior only required me to containerize our app, so I used Google Cloud Run instead, for easier deployment and CI/CD.
You can use this file to set up CI/CD on Cloud Run:

https://github.com/google-github-actions/setup-gcloud/blob/main/example-workflows/cloud-run/cloud-run.yml

Just make sure you've added a secret containing the service account JSON to your repo, then reference that credentials JSON for authentication in your yml file.
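
For orientation, here is a minimal sketch of such a workflow (my own sketch, not the linked file), assuming the google-github-actions/auth and google-github-actions/deploy-cloudrun actions; the secret name GCP_SA_KEY, the service name, and the region are hypothetical placeholders:

name: Deploy to Cloud Run

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v2

    # Authenticate with the service account JSON stored as a repo secret.
    - uses: google-github-actions/auth@v1
      with:
        credentials_json: ${{ secrets.GCP_SA_KEY }}

    # Deploy a prebuilt container image to Cloud Run.
    - uses: google-github-actions/deploy-cloudrun@v1
      with:
        service: ems-app          # hypothetical service name
        image: gcr.io/${{ secrets.GKE_PROJECT }}/ciputra-ems-backend:${{ github.sha }}
        region: asia-southeast1   # hypothetical region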
