How to duplicate a MySQL database on the same server

Posted 2024-11-19 05:55:47


I have a large MySQL database, let's call it live_db, which I want to replicate on the same machine to provide a test system to play around with (test_db), including table structure and data.
At regular intervals I want to update test_db with the content of live_db, incrementally if possible.

Is there some built-in mechanism in MySQL to do that? I think master-slave replication is not what I want, since it should be possible to alter data in test_db. These changes do not have to be preserved, though.

Regards,

CGD


Comments (5)

夏末 2024-11-26 05:55:47


The mysql command line client will accept a stream of SQL statements from standard input. You can therefore pipe the output of mysqldump directly into mysql on the command line. Doing this as a cron job will regularly overwrite your test data with updated live data:

mysql --user=username --password=passwd -e 'DROP DATABASE test_db;'
mysql --user=username --password=passwd -e 'CREATE DATABASE test_db;'
mysqldump --user=username --password=passwd live_db | mysql --user=username --password=passwd test_db

Note that since your data is large, it will take a long time.
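
As a sketch of the cron setup (the script path, the schedule, and the use of a ~/.my.cnf credentials file are assumptions, not part of the answer above), the three steps can be wrapped in a small script:

#!/bin/sh
# refresh_test_db.sh - overwrite test_db with a fresh copy of live_db.
# Credentials are assumed to live in ~/.my.cnf so they don't show up in `ps` output.
mysql -e 'DROP DATABASE IF EXISTS test_db;'
mysql -e 'CREATE DATABASE test_db;'
mysqldump live_db | mysql test_db

A crontab entry could then run it nightly, for example at 03:00:

0 3 * * * /path/to/refresh_test_db.sh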

捎一片雪花 2024-11-26 05:55:47


Michael's answer above works well but does not copy events, stored procedures, or triggers.

To copy those, a few more switches are needed for mysqldump:
--events --triggers --routines

To complement an already-made copy (this adds only the events, triggers, and routines to an existing structure-and-data copy):

mysqldump --user=username --password=passwd --no-data --no-create-info --no-create-db --events --triggers --routines live_db | mysql --user=username --password=passwd test_db
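
For a fresh copy made from scratch, the same switches can simply be added to the full dump in a single pass (a sketch combining the two answers; credentials as before):

mysqldump --user=username --password=passwd --events --triggers --routines live_db | mysql --user=username --password=passwd test_db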

扭转时空 2024-11-26 05:55:47


I have been doing this for several years in different contexts, with small to medium databases (1 GB to 100 GB). The quick and dirty mysqldump works for smaller data sets; the smaller they are, the better it goes.

When you exceed 5-10 GB, depending on the MySQL load, quick and dirty does not cut it anymore.

Why mysqldump might not be enough

The problem with mysqldump is that while it dumps, the live database is either unusable, very awkward to use, or the backup will not be consistent. That only works if you have a wide enough time window in which the unavailability of the live database doesn't matter because it doesn't need to be in use anyway (for example, late at night).

The default options (the reasons why are discussed at length elsewhere) make the database next to unusable while it's being dumped, unless the usage is just reading data, and little of that. On a busy e-commerce site, you're looking at a client pile-up crash.

So you use InnoDB and the modern options (not defaults, as far as I know)

--single-transaction --skip-lock-tables

which allow the site to run, albeit slower than normal, during the dump. Depending on what the usage is, it might be noticeably slower.
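
Put together, a consistent dump that doesn't lock the tables might look like this (a sketch based on the options above; note that --single-transaction only guarantees a consistent snapshot for InnoDB tables):

mysqldump --user=username --password=passwd --single-transaction --skip-lock-tables live_db | mysql --user=username --password=passwd test_db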

While you're at it, also dump other data which might be important:

--events --triggers --routines

(Oh, and this still won't dump user permissions. For a test database, that's perhaps not so important.)

There is a workaround I've seen "advised" (!) as a "great hack", which basically disables transactional integrity, allowing the database to run at full speed while it's being dumped. It's somewhat like removing the brakes from your car to lighten it and make it run faster: yes, it will work, but it has side effects you might not notice immediately. You will almost surely notice them sooner or later, and just like brakes, it will be when you need them most, and it won't be pretty.

However, for a test database, it could still work.

Xtrabackup

If you have a "serious" database, what's the reason not to have a "serious" backup?
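
For reference, a hot backup with Percona XtraBackup might look like this (a sketch; the target directory is an assumption, and the exact invocation varies between XtraBackup versions). It takes a consistent copy of InnoDB data while the server keeps running:

xtrabackup --backup --user=username --password=passwd --target-dir=/backups/live_db
xtrabackup --prepare --target-dir=/backups/live_db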

Slave replication

Another possibility, if you have space to spare (and nowadays 20 GB is not that much), is to use an ancillary database.

You can install a second copy of MySQL Server on the same server on a different port, and have it be the slave (the server will take a performance hit, storage-speed-wise). Then you will have two identical databases (live master, live slave). The first time you will still have to run a full dump to get them in sync, with all the problems it involves.
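
A minimal sketch of what that setup might look like (the port, paths, server IDs, and replication credentials are illustrative assumptions; this uses the pre-8.0 CHANGE MASTER TO syntax to match the terminology above):

# Second instance's my.cnf (e.g. /etc/mysql-slave/my.cnf); the master's own
# my.cnf needs server-id=1 and log_bin enabled.
[mysqld]
port      = 3307
datadir   = /var/lib/mysql-slave
socket    = /var/run/mysqld/mysqld-slave.sock
server-id = 2

# On the slave, point it at the master (log file and position
# taken from SHOW MASTER STATUS on the master):
CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3306,
  MASTER_USER='repl', MASTER_PASSWORD='replpass',
  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;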

When you need to clone the test database, stop the slave replication - the live slave will now remain "frozen" in time - and back up the live slave to the test db, using MySQLbackup or just copying the data files. Once done, restart the replication.
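
The freeze-and-clone cycle might then be as simple as this (a sketch; the host and port are the assumptions from above, --host=127.0.0.1 forces a TCP connection to the second instance, and credentials are omitted as in the earlier examples):

mysql --host=127.0.0.1 --port=3307 -e 'STOP SLAVE;'
mysqldump --host=127.0.0.1 --port=3307 live_db | mysql test_db
mysql --host=127.0.0.1 --port=3307 -e 'START SLAVE;'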

The impact on the live master is negligible, and the slave can actually be used for non-update-critical selects.

送你一个梦 2024-11-26 05:55:47


If you prefer the MySQL Migration Toolkit, you can double-click the schema name in the Data Mapping step and change the target schema name.

七秒鱼° 2024-11-26 05:55:47


For all the Mac users: with Sequel Pro, all you need to do is go to Database (menu) -> Duplicate Database. Done!
