Speeding up MySQL dumps and imports

Published 2024-07-05 08:47:09


Are there any documented techniques for speeding up mySQL dumps and imports?

This would include my.cnf settings, using ramdisks, etc.

Looking only for documented techniques, preferably with benchmarks showing potential speed-up.

Comments (11)

烈酒灼喉 2024-07-12 08:47:09
  1. Get a copy of High Performance MySQL. Great book.
  2. Extended inserts in dumps.
  3. Dump with --tab format so you can use mysqlimport, which is faster than mysql < dumpfile.
  4. Import with multiple threads, one for each table.
  5. Use a different database engine if possible. Importing into a heavily transactional engine like InnoDB is awfully slow. Inserting into a non-transactional engine like MyISAM is much, much faster.
  6. Look at the table-compare script in the Maatkit toolkit and see if you can update your tables rather than dumping them and importing them. But you're probably talking about backups/restores.
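
Points 3 and 4 can be sketched with two documented tools; the database name `mydb`, credentials, and the dump directory are placeholders:

```shell
# Dump each table as a pair of files (.sql schema + .txt tab-separated data)
# instead of one big SQL stream. The directory must be writable by mysqld.
mysqldump --tab=/tmp/dump --user=root --password mydb

# Recreate the table definitions first with the ordinary client...
cat /tmp/dump/*.sql | mysql --user=root --password mydb

# ...then reload the data with mysqlimport, which uses LOAD DATA INFILE
# under the hood; --use-threads loads several tables in parallel.
mysqlimport --local --use-threads=4 --user=root --password mydb /tmp/dump/*.txt
```

mysqlimport derives the target table name from each file name, which the files produced by --tab already satisfy.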
青瓷清茶倾城歌 2024-07-12 08:47:09


http://www.maatkit.org/ has a mk-parallel-dump and mk-parallel-restore

If you’ve been wishing for multi-threaded mysqldump, wish no more. This tool dumps MySQL tables in parallel. It is a much smarter mysqldump that can either act as a wrapper for mysqldump (with sensible default behavior) or as a wrapper around SELECT INTO OUTFILE. It is designed for high-performance applications on very large data sizes, where speed matters a lot. It takes advantage of multiple CPUs and disks to dump your data much faster.

There are also various potential options in mysqldump, such as not building indexes while the dump is being imported, and instead creating them en masse on completion.
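
If mk-parallel-restore is not available, the parallel-restore idea can be roughed out in plain shell, assuming one dump file per table in a `dump/` directory (names and credentials are placeholders; `wait -n` needs bash 4.3+):

```shell
# Run up to 4 concurrent mysql clients, one per table dump file.
for f in dump/*.sql; do
  mysql --user=root --password=secret mydb < "$f" &
  # Throttle: once 4 imports are running, wait for one to finish.
  while [ "$(jobs -rp | wc -l)" -ge 4 ]; do wait -n; done
done
wait   # let the last imports drain
```

This only helps when the tables are independent (no cross-table foreign keys being loaded out of order) and the server has spare CPU and I/O capacity.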

分开我的手 2024-07-12 08:47:09


Assuming that you're using InnoDB...

I was in the situation of having a pile of existing mysqldump output files that I wanted to import in a reasonable time. The tables (one per file) were about 500MB and contained about 5,000,000 rows of data each. Using the following parameters I was able to reduce the insert time from 32 minutes to under 3 minutes.

innodb_flush_log_at_trx_commit = 2
innodb_log_file_size = 256M
innodb_flush_method = O_DIRECT

You'll also need to have a reasonably large innodb_buffer_pool_size setting.

Because my inserts were a one-off I reverted the settings afterwards. If you're going to keep using them long-term, make sure you know what they're doing.

I found the suggestion to use these settings on Cedric Nilly's blog, and the detailed explanation for each of the settings can be found in the MySQL documentation.

不…忘初心 2024-07-12 08:47:09


I guess your question also depends on where the bottleneck is:

  • If your network is a bottleneck you could also have a look at the -C/--compress flag to mysqldump.
  • If your computer runs out of memory (ie. starts swapping) you should buy more memory.

Also, have a look at the --quick flag for mysqldump (and --disable-keys if you are using MyISAM).
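
All of these are documented mysqldump flags, so a network-friendly dump might look like this (credentials and database name are placeholders):

```shell
# --compress (-C): compress client/server traffic when dumping over a network.
# --quick: stream rows one at a time instead of buffering whole tables in memory.
# --disable-keys: wrap MyISAM data in ALTER TABLE ... DISABLE/ENABLE KEYS so
#   indexes are rebuilt once after loading (included in --opt, shown for clarity).
mysqldump --compress --quick --disable-keys --user=root --password mydb > dump.sql
```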

℉服软 2024-07-12 08:47:09


If you are importing to InnoDB the single most effective thing you can do is to put

innodb_flush_log_at_trx_commit = 2

in your my.cnf, temporarily while the import is running. You can put it back to 1 if you need ACID.
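
Because innodb_flush_log_at_trx_commit is a dynamic variable, it can also be switched at runtime without editing my.cnf and restarting (requires the SUPER privilege; credentials are placeholders):

```shell
# Relax log flushing for the duration of the import...
mysql --user=root --password -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"

# ... run the import here ...

# ...then restore full ACID durability.
mysql --user=root --password -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1;"
```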

梦幻的心爱 2024-07-12 08:47:09


Make sure you are using the --opt option to mysqldump when dumping. This will use bulk insert syntax, delay key updates, etc...


If you are ONLY using MyISAM tables, you can safely copy them by stopping the server, copying the files to a stopped target server, and starting that server.

If you don't want to stop the origin server, you can follow this:

  1. Get a read lock on all tables
  2. Flush all tables
  3. Copy the files
  4. Unlock the tables

But I'm pretty sure your copy-to server needs to be stopped when you put them in place.
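
The four steps above map onto FLUSH TABLES WITH READ LOCK, which flushes and read-locks in one statement. The lock only lasts as long as the session that took it, so one sketch (paths are placeholders) is to hold the session open while a second shell does the copy:

```shell
# Session 1: flush and lock all tables, then keep the connection open
# (here by sleeping) so the lock stays in effect.
mysql --user=root --password -e "FLUSH TABLES WITH READ LOCK; DO SLEEP(600);" &

# Session 2, while the lock is held: copy the MyISAM files.
cp -a /var/lib/mysql/mydb /backup/mydb

# Ending session 1 releases the lock (UNLOCK TABLES is implicit
# when the connection closes).
kill %1
```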

寄意 2024-07-12 08:47:09


There is a method for using LVM snapshots for backup and restore that might be an interesting option for you.

Instead of doing a mysqldump, consider using LVM to take snapshots of your MySQL data directories. Using LVM snapshots allow you to have nearly real time backup capability, support for all storage engines, and incredibly fast recovery. To quote from the link below,

"Recovery time is as fast as putting data back and standard MySQL crash recovery, and it can be reduced even further."

http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/
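
A rough sketch of the snapshot approach, assuming the data directory sits on a logical volume named /dev/vg0/mysql (all volume names, sizes, and paths here are placeholders, not from the article):

```shell
# 1. Quiesce MySQL briefly (e.g. hold FLUSH TABLES WITH READ LOCK in an
#    open session) so the on-disk files are consistent.

# 2. Create the snapshot; this takes seconds regardless of data size.
lvcreate --size 4G --snapshot --name mysql_snap /dev/vg0/mysql

# 3. Release the lock, then mount the snapshot and copy the data off.
mount /dev/vg0/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ /backup/mysql/
umount /mnt/mysql_snap
lvremove -f /dev/vg0/mysql_snap
```

For InnoDB, restoring such a snapshot relies on normal crash recovery at startup, which is what the quoted recovery-time claim refers to.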

给妤﹃绝世温柔 2024-07-12 08:47:09


Turn off foreign key checks and turn on auto-commit.
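
Foreign-key checking is a session variable, so the switch can simply be prepended to the dump as it is piped in (`dump.sql` and `mydb` are placeholders):

```shell
# Disable FK checking for the session that loads the dump, then restore it.
( echo "SET foreign_key_checks = 0;"
  cat dump.sql
  echo "SET foreign_key_checks = 1;" ) | mysql --user=root --password mydb
```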

娇女薄笑 2024-07-12 08:47:09


Using extended inserts in dumps should make imports faster.
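
--extended-insert is the documented flag for this (and is included in the default --opt behavior); it packs many rows into each INSERT so the server parses far fewer statements on import:

```shell
mysqldump --extended-insert --user=root --password mydb > dump.sql
# produces multi-row statements like:
#   INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c'), ... ;
# instead of one single-row INSERT per row.
```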

断念 2024-07-12 08:47:09


mysqlhotcopy might be an alternative for you too if you only have MyISAM tables.
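
mysqlhotcopy is a Perl script shipped with the server that briefly read-locks the tables and then copies the files directly, which is why it is restricted to file-per-table engines like MyISAM. Basic usage (credentials and paths are placeholders):

```shell
# Copy all tables of mydb into /backup/mydb while holding a short read lock.
mysqlhotcopy --user=root --password=secret mydb /backup
```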

梦里人 2024-07-12 08:47:09


Use indexes, but not too many; activate the query cache; and use Sphinx for big databases. Here are some good tips: http://www.keedeo.com/media/1857/26-astuces-pour-accelerer-vos-requetes-mysql (in French).
