MySQL sync - looking for advice for this particular scenario
Often we have this scenario:
- We have our database in production.
- We realise (or the client requests) schema changes.
- We make the changes locally.
- We need to upload that new schema without losing data.
From what I've read so far, I notice that we can:
- Use a GUI to perform this synchronization.
- Use some sort of version control (Subversion, for example) to track database changes.
- Create a database table to track changes.
- Use some sort of automated task when deploying (a bash process, I believe?).
The GUI is a nice approach because it is easy to use, but it is not automated and we cannot revert to previous "versions".
With Subversion we can keep different versions of our database schema, but it is complicated to understand and to build a workflow that covers the schema on one side and the data on the other.
The SQL-table approach seems so difficult to understand that I can't even weigh its good and bad parts.
The automated bash task seems to be a very nice approach because, if well configured, it fits nicely into the deployment workflow. But it seems we would have no control over older versions of our database.
Truth be told, I don't see such a strong need to keep the database under any version-control system because, hopefully, it is not something we change every day, unlike the other files in our application.
With all this, I'm inclined to search for a bash process that could, on deployment, do two things (a rough sketch follows):
- Retrieve the data from the production server (because that is where the most up-to-date data will exist).
- Update the remote schema with the local one (because it is locally that we first change the schema).
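Something like the following is what I am picturing. It is only a minimal sketch; prod.example.com, the deploy user and the mydb database name are placeholders for whatever the real setup uses:

```bash
#!/usr/bin/env bash
# Minimal sketch of the two deployment steps above.
# Placeholders: prod.example.com, deploy, mydb. Credentials are assumed
# to come from ~/.my.cnf so the script can run non-interactively.
set -euo pipefail

PROD_HOST="prod.example.com"
DB="mydb"

# 1) Retrieve the data from production (rows only, no CREATE TABLE),
#    since production holds the most up-to-date data.
mysqldump -h "$PROD_HOST" -u deploy --no-create-info "$DB" > prod_data.sql

# 2) Dump the local schema (structure only, no rows) so it can be
#    pushed to production.
mysqldump -h localhost -u deploy --no-data "$DB" > local_schema.sql

# Caveat: applying local_schema.sql to production as-is would try to
# drop and recreate the existing tables (mysqldump adds DROP TABLE by
# default), which is exactly the data loss we want to avoid, so this
# last step would need a schema diff or migration scripts instead:
# mysql -h "$PROD_HOST" -u deploy "$DB" < local_schema.sql
```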
What do you think? What do you suggest? Is there a standard way of dealing with this (it seems to me) common scenario?
Comments (1)
My favorite is the general approach taken by Ruby on Rails for database migrations. In Rails, you create a migration file written in a domain-specific language. The idea is that each migration has a version number, and each migration includes both an implementation step and a rollback step. Finally, when migrations are executed, a table in the database that records the schema version is updated to reflect the current version of the database. These migration files are checked into source control.
So, I recommend doing the following (rough sketches below):
- Keep a small table in the database that records the current schema version.
- For every schema change, write a numbered change script and a matching rollback script.
- Write a master script that, given a desired version, runs the appropriate change or rollback scripts in order and updates the version table.
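As a rough sketch (the schema_version table name, the file names and the credentials here are only illustrative, not anything MySQL or Rails prescribes), the version table and the layout of the migration scripts could look like this:

```bash
#!/usr/bin/env bash
# One-time setup: create the version-tracking table (schema_version is
# just an illustrative name). Each migration then lives as a numbered
# pair of plain SQL files, for example:
#   migrations/015_add_invoices.up.sql    -- the schema change
#   migrations/015_add_invoices.down.sql  -- its rollback
set -euo pipefail

mysql -u deploy mydb <<'SQL'
CREATE TABLE IF NOT EXISTS schema_version (
  version INT NOT NULL
);
-- seed the table with version 0 the first time it is created
INSERT INTO schema_version (version)
SELECT 0 FROM DUAL WHERE NOT EXISTS (SELECT 1 FROM schema_version);
SQL
```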
So, if the database is currently version 14, and you call the master script with desired version 17, it will know to execute the schema change scripts for versions 15, 16, and 17. If the database is currently 15 and the desired version is 12, it will know to execute the rollback scripts for versions 15, 14, and 13.
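A master script implementing that walk between versions might look roughly like this. It is only a sketch and assumes the schema_version table and the numbered .up.sql/.down.sql files from the previous snippet, with credentials again coming from ~/.my.cnf:

```bash
#!/usr/bin/env bash
# Sketch of a master migration script: ./migrate.sh <desired_version>
# Assumes the schema_version table and the migrations/NNN_*.up.sql /
# migrations/NNN_*.down.sql files shown above.
set -euo pipefail

DB="mydb"
TARGET=${1:?usage: migrate.sh <desired_version>}
CURRENT=$(mysql -N -u deploy "$DB" -e "SELECT version FROM schema_version")

set_version() { mysql -u deploy "$DB" -e "UPDATE schema_version SET version = $1"; }

if (( TARGET > CURRENT )); then
  # e.g. current 14, target 17: apply the change scripts for 15, 16 and 17
  for (( v = CURRENT + 1; v <= TARGET; v++ )); do
    up=( migrations/$(printf '%03d' "$v")_*.up.sql )
    mysql -u deploy "$DB" < "${up[0]}"
    set_version "$v"
  done
elif (( TARGET < CURRENT )); then
  # e.g. current 15, target 12: run the rollback scripts for 15, 14 and 13
  for (( v = CURRENT; v > TARGET; v-- )); do
    down=( migrations/$(printf '%03d' "$v")_*.down.sql )
    mysql -u deploy "$DB" < "${down[0]}"
    set_version "$((v - 1))"
  done
fi
```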
All of these scripts should definitely be checked into source control.