How can I migrate persistent volumes between GKE clusters in the same project?

Posted on 2025-01-21 08:23:08

I have a GKE cluster running with several persistent disks for storage.
To set up a staging environment, I created a second cluster inside the same project.
Now I want to use the data from the persistent disks of the production cluster in the staging cluster.

I already created persistent disks for the staging cluster. What is the best approach to move the production data over to the disks of the staging cluster?

Comments (1)

夜声 2025-01-28 08:23:08

You can use the open-source tool Velero, which is designed to back up and migrate Kubernetes cluster resources.

Follow these steps to migrate persistent disks between GKE clusters:

  1. Create a GCS bucket:
BUCKET=<your_bucket_name>
gsutil mb gs://$BUCKET/
  2. Create a Google Service Account and store the associated email in a variable for later use:
GSA_NAME=<your_service_account_name>
gcloud iam service-accounts create $GSA_NAME \
    --display-name "Velero service account" 

SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:Velero service account" \
  --format 'value(email)')
  3. Create a custom role for the Service Account:
PROJECT_ID=<your_project_id>
ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
    storage.objects.create
    storage.objects.delete
    storage.objects.get
    storage.objects.list
)

gcloud iam roles create velero.server \
    --project $PROJECT_ID \
    --title "Velero Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/velero.server

gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
  4. Grant access to Velero:
gcloud iam service-accounts keys create credentials-velero \
    --iam-account $SERVICE_ACCOUNT_EMAIL
  5. Download and install Velero on the source cluster:
wget https://github.com/vmware-tanzu/velero/releases/download/v1.8.1/velero-v1.8.1-linux-amd64.tar.gz
tar -xvzf velero-v1.8.1-linux-amd64.tar.gz
sudo mv velero-v1.8.1-linux-amd64/velero /usr/local/bin/velero

velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.4.0 \
    --bucket $BUCKET \
    --secret-file ./credentials-velero

Note: The download and installation above were performed on a Linux system, which is the OS used by Cloud Shell. If you manage your GCP resources through the Cloud SDK on a different OS, the release artifact and installation steps may vary.
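
If you are working outside Cloud Shell, for example on macOS, a package-manager install of the Velero CLI is one alternative. This is a minimal sketch assuming Homebrew is available; the formula may install a newer client than the v1.8.1 tarball used above:

# macOS alternative (assumes Homebrew is installed); may install a newer client than v1.8.1
brew install velero

# Print the client version without contacting a cluster
velero version --client-only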

  6. Confirm that the Velero pod is running:
$ kubectl get pods -n velero
NAME                      READY   STATUS    RESTARTS   AGE
velero-xxxxxxxxxxx-xxxx   1/1     Running   0          11s
  7. Create a backup of the PVs and PVCs:
velero backup create <your_backup_name> --include-resources pvc,pv --selector app.kubernetes.io/<your_label_name>=<your_label_value> 
  8. Verify that your backup completed successfully, with no errors or warnings (a couple of extra checks are sketched right after this list):
$ velero backup describe <your_backup_name>  --details
Name:         your_backup_name
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.21.6-gke.1503
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=21

Phase:  Completed

Errors:    0
Warnings:  0
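
Optionally, you can also list all backups and pull the backup logs from object storage for a closer look at any warnings. A minimal sketch, reusing the <your_backup_name> placeholder from above:

# List all backups and their phase (should report "Completed")
velero backup get

# Fetch the backup logs from the bucket to inspect any warnings in detail
velero backup logs <your_backup_name>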

Now that the Persistent Volumes are backed up, you can proceed with the migration to the destination cluster following these steps:
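
Since you will now be switching between two clusters from the same machine, it can help to keep an eye on your kubectl contexts. A small sketch, assuming both clusters have been registered with gcloud container clusters get-credentials (the context name below is illustrative; GKE contexts follow the pattern gke_<project>_<zone>_<cluster>):

# Show all configured contexts and mark the active one
kubectl config get-contexts

# Confirm which cluster kubectl is currently pointing at
kubectl config current-context

# Switch back to the source cluster later if needed (illustrative name)
kubectl config use-context gke_<your_project>_<your_zone>_<your_source_cluster>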

  1. Authenticate against the destination cluster:
gcloud container clusters get-credentials <your_destination_cluster> --zone <your_zone> --project <your_project>
  2. Install Velero using the same parameters as in step 5 of the first part:
velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.4.0 \
    --bucket $BUCKET \
    --secret-file ./credentials-velero
  3. Confirm that the Velero pod is running:
kubectl get pods -n velero
NAME                      READY   STATUS    RESTARTS   AGE
velero-xxxxxxxxxx-xxxxx   1/1     Running   0          19s
  4. To avoid overwriting the backup data, change the backup storage location to read-only mode:
kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'
  5. Confirm that Velero is able to access the backup in the bucket:
velero backup describe <your_backup_name> --details
  6. Restore the backed-up volumes:
velero restore create --from-backup <your_backup_name>
  7. Confirm that the persistent volumes have been restored on the destination cluster (a few extra verification commands are sketched after this list):
kubectl get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-data-my-release-redis-master-0     Bound    pvc-ae11172a-13fa-4ac4-95c5-d0a51349d914   8Gi        RWO            standard       79s
redis-data-my-release-redis-replicas-0   Bound    pvc-f2cc7e07-b234-415d-afb0-47dd7b9993e7   8Gi        RWO            standard       79s
redis-data-my-release-redis-replicas-1   Bound    pvc-ef9d116d-2b12-4168-be7f-e30b8d5ccc69   8Gi        RWO            standard       79s
redis-data-my-release-redis-replicas-2   Bound    pvc-65d7471a-7885-46b6-a377-0703e7b01484   8Gi        RWO            standard       79s
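
For additional confirmation, you can also inspect the restore object and the storage location. A minimal sketch; the restore name is whatever velero restore create generated (by default, the backup name plus a timestamp):

# The backup storage location should now report ReadOnly access mode
velero backup-location get

# List restores and their phase (should report "Completed")
velero restore get

# Inspect a specific restore in detail, including any warnings
velero restore describe <your_restore_name> --details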

Check out this tutorial as a reference.
