Google Cloud SQL - move an instance from one project to another with zero downtime?

Posted 2025-02-11 01:51:57


What is the easiest way to move a Google Cloud SQL instance (Postgres 9.6) from one Google project to another with minimum or zero downtime? The instance size is about 20 GB.

There is a service called "Migration job" which looks very relevant: https://cloud.google.com/database-migration/docs/postgres/create-migration-job . But I cannot tell whether it can be used to move an instance from one Google project to another.

Simply restoring from a backup is not really an option for me, because I want to achieve the minimum possible downtime, so I'm looking for something like two running instances with synced real-time data.

PS: I also have a VM configured with pgbouncer.


Answers (2)

分分钟 2025-02-18 01:51:57


Yes, Database Migration Service can be used to move a Cloud SQL instance from one GCP project to another. This is a cheaper way than the next approach, and although it requires more setup, it should be faster too. A connection profile can be created for the existing Cloud SQL instance, and a Cloud SQL target must be created in the destination project, but once everything is set up, most of the migration will be automatic. This is a well-documented procedure, and you can find the details in our documentation.
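For illustration, here is a rough sketch of what that setup can look like from the command line, driven from Python for convenience. Every project ID, region, name, host, and credential below is a placeholder, and the exact `gcloud database-migration` flag names are an assumption on my part, so verify them against the current gcloud reference before relying on this.

```python
# Sketch of a DMS setup via the gcloud CLI. All identifiers are placeholders,
# and the database-migration flag names are written from memory -- check
# `gcloud database-migration connection-profiles create postgresql --help` and
# `gcloud database-migration migration-jobs create --help` before use.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

REGION = "europe-west1"          # placeholder
DEST_PROJECT = "target-project"  # placeholder: the migration is driven from here

# 1. Connection profile describing the existing (source) Postgres 9.6 instance.
run([
    "gcloud", "database-migration", "connection-profiles", "create", "postgresql",
    "source-pg96", "--project", DEST_PROJECT, "--region", REGION,
    "--host", "10.0.0.5", "--port", "5432",   # source IP -- placeholder
    "--username", "migration_user", "--password", "secret",
])

# 2. Continuous migration job pointing at a destination connection profile /
#    Cloud SQL instance that already exists in the destination project.
run([
    "gcloud", "database-migration", "migration-jobs", "create", "pg96-move",
    "--project", DEST_PROJECT, "--region", REGION,
    "--source", "source-pg96",
    "--destination", "dest-cloudsql-profile",   # placeholder destination profile
    "--type", "CONTINUOUS",
])
```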

Developers sometimes want to migrate their (normal) relational database with “zero” downtime. While downtime can be reduced, migration cannot be done without any impact on applications (that is, with zero downtime), because replication inevitably introduces replication lag.

The instant the decision is made to “migrate” all applications from one replica to another, applications (and therefore developers) have to wait (that is, downtime) for at least as long as the “replication lag” before using the new database. In practice, the downtime is a few orders of magnitude higher (minutes to hours) because:

  • Database queries can take multiple seconds to complete, and in-flight queries must be completed or aborted at the time of migration.
  • The database has to be “warmed up” if it has substantial buffer memory - common in large databases.
  • If database shards have duplicate tables, some writes may need to be paused while the shards are being migrated.
  • Applications must be stopped at source and restarted in GCP, and connection to the GCP database instance must be established.
  • Network routes to the applications must be rerouted. Based on how DNS entries are set up, this can take some time.

All of these can be reduced with some planning and “cost” (some operations not permitted for some time before/after migration).

Decreasing the load on the source DB until the migration completes might help keep the replication lag low and make the downtime less disruptive.
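To see how far behind the subscriber actually is before cutting applications over, you can poll the source. A minimal sketch using psycopg2 against the Postgres 9.6 source (pre-10 function names; host and credentials are placeholders, and a sufficiently privileged role is assumed, since restricted roles may see NULLs in `pg_stat_replication`):

```python
# Poll replication lag on a Postgres 9.6 source (placeholder credentials).
# On 9.6 the WAL functions still use the old "xlog"/"location" names;
# on Postgres 10+ they become pg_current_wal_lsn()/pg_wal_lsn_diff().
import time
import psycopg2

conn = psycopg2.connect(
    host="10.0.0.5", dbname="postgres",   # placeholders
    user="postgres", password="secret",
)
conn.autocommit = True

LAG_QUERY = """
    SELECT application_name,
           pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS lag_bytes
    FROM pg_stat_replication;
"""

with conn.cursor() as cur:
    while True:
        cur.execute(LAG_QUERY)
        rows = cur.fetchall()
        for name, lag_bytes in rows:
            print(f"{name}: {lag_bytes or 0} bytes behind")
        # Cut applications over only once the lag stays near zero.
        if rows and all((lag or 0) < 1024 for _, lag in rows):
            print("Subscriber is (nearly) caught up.")
            break
        time.sleep(5)
```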

Other considerations:

  1. Increase the machine type to increase network throughput.
  2. Increase the SSD size for higher IOPS/MBps (a sketch of both tweaks follows this list).
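Both tweaks are single `gcloud sql instances patch` calls; a hedged sketch, with placeholder instance name, project, tier, and size:

```python
# One-off resizes of a Cloud SQL instance before the migration window.
# Names, tier, and size are placeholders; changing the tier restarts the
# instance, so schedule it well ahead of the cutover.
import subprocess

INSTANCE = "source-pg96-instance"   # placeholder
PROJECT = "source-project"          # placeholder

# Bigger machine type => more network/CPU headroom for the initial copy.
subprocess.run([
    "gcloud", "sql", "instances", "patch", INSTANCE, "--project", PROJECT,
    "--tier", "db-custom-4-15360",   # example: 4 vCPUs / 15 GB RAM
], check=True)

# Larger SSD => higher IOPS/MBps ceiling (storage can only grow, not shrink).
subprocess.run([
    "gcloud", "sql", "instances", "patch", INSTANCE, "--project", PROJECT,
    "--storage-size", "100GB",
], check=True)
```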

More on the other approach mentioned above:

The most intuitive way would be to export the data from the Cloud SQL instance to a GCS bucket and import it into a new instance in the new project. This would imply some downtime, and you would have to manually create the instance in the target project with the same configuration as the original; it does require some manual steps, but it would be a simple and verifiable way to copy the data across to an instance in a different project.
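For completeness, a sketch of that export/import path using `gcloud sql export` / `gcloud sql import` (all project, instance, bucket, and database names are placeholders):

```python
# Export one database from the source project to a GCS bucket, then import it
# into a pre-created instance in the target project. This is the higher-downtime
# path; both instances' service accounts need access to the bucket.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

DUMP_URI = "gs://my-migration-bucket/pg96-dump.sql.gz"   # placeholder

# Gzipped SQL dump of one database from the source instance.
run(["gcloud", "sql", "export", "sql", "source-pg96-instance", DUMP_URI,
     "--project", "source-project", "--database", "mydb"])

# Import into the destination project's instance (created beforehand with a
# matching configuration).
run(["gcloud", "sql", "import", "sql", "target-pg96-instance", DUMP_URI,
     "--project", "target-project", "--database", "mydb", "--quiet"])
```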

泪意 2025-02-18 01:51:57


This is surprisingly underdocumented given that it seems to be a common use case.

I couldn't find any built-in support for DMS for Cloud SQL => Cloud SQL across projects. There doesn't seem to be any ability in DMS to select a Cloud SQL instance in another project.

The following is the only thing that worked for me:

  1. Use DMS in the destination project, not the source project.
  2. Create a connection profile to the source DB describing it as Postgres -- i.e., "self-managed Postgres". You're going to fool GCP into thinking that it is not a Cloud SQL instance. Otherwise you'd be limited to instances in the current project.
    • Since you are telling DMS it is not Cloud SQL, but vanilla Postgres, you have to configure logical replication for the source DB as if it was vanilla Postgres.
    • That means that in Configure your source, you should follow the On-premise or self-managed PostgreSQL instructions.
    • The exception is that you still need to set the cloudsql.logical_decoding and cloudsql.enable_pglogical flags, which will cause Cloud SQL to install the extension on the server, since you don't have the ability to do this yourself. Basically, read both the self-managed instructions and the Cloud SQL instructions and "merge" the two (a sketch of this step follows after the list).
    • The connectivity configuration is exactly the same as if it were Cloud SQL.
  3. Create and run the migration job, configuring the destination DB in the current project
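A sketch of the source-side preparation from step 2, with placeholder names throughout. Note that the flag patch restarts the source instance and replaces its existing flag set, so merge in any flags already configured, and follow the DMS source-configuration docs for the full list of grants.

```python
# Prepare the source Cloud SQL instance for DMS treating it as "self-managed"
# Postgres: enable logical decoding + pglogical via database flags, then set up
# a migration user. All names/passwords are placeholders.
import subprocess

SOURCE_INSTANCE = "source-pg96-instance"   # placeholder
SOURCE_PROJECT = "source-project"          # placeholder

# Careful: --database-flags replaces the instance's whole flag set and
# triggers a restart, so include any flags you already rely on.
subprocess.run([
    "gcloud", "sql", "instances", "patch", SOURCE_INSTANCE,
    "--project", SOURCE_PROJECT,
    "--database-flags",
    "cloudsql.logical_decoding=on,cloudsql.enable_pglogical=on",
], check=True)

# SQL to run afterwards against the source, following the self-managed
# (vanilla Postgres) instructions. Grants are shown for a single schema;
# repeat per database being migrated.
SOURCE_SQL = """
CREATE EXTENSION IF NOT EXISTS pglogical;
CREATE USER migration_user WITH LOGIN PASSWORD 'secret';
ALTER USER migration_user WITH REPLICATION;
GRANT USAGE ON SCHEMA public TO migration_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO migration_user;
"""
print("Run this via psql against the source database(s):")
print(SOURCE_SQL)
```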

Good luck!
