Higher-speed options for executing a very large (20 GB) .sql file in MySQL

Posted 2024-09-01 15:11:27

My firm was delivered a 20+ GB .sql file in response to a request for data from the gov't. I don't have many options for getting the data in a different format, so I need options for how to import it in a reasonable amount of time. I'm running it on a high-end server (Win 2008 64-bit, MySQL 5.1) using Navicat's batch execution tool. It's been running for 14 hours and shows no signs of being near completion.

Does anyone know of any higher-speed options for such a transaction? Or is this what I should expect given the large file size?

Thanks

Comments (4)

北渚 2024-09-08 15:11:27

I guess you mean it's a file produced by mysqldump as a backup of a database, so it contains mostly CREATE TABLE and INSERT statements.

(But strictly speaking, an SQL script can contain anything, such as definition and execution of long-running stored procedures, queries that result in deadlocks, etc. I'll assume this is not the case.)

Here are some things you can do to speed up restore, given that you have the backup file and can't change the type of file it is:

  1. Disable foreign key checks: SET FOREIGN_KEY_CHECKS=0 (remember to re-enable
    afterwards). Disable unique checks too: SET UNIQUE_CHECKS=0. (Tips 1-4 are
    combined into a sketch after this list.)

  2. Make sure your key_buffer_size is set as large as possible if you use MyISAM tables. The default is 8MB, and the max is 4GB. I'd try 1GB.

    These first tips come from a post by Baron Schwartz: http://lists.mysql.com/mysql/206866

  3. Make sure your innodb_buffer_pool_size is set as large as possible if you use InnoDB tables. The default is 8MB, and the max is 4GB. I'd try 1GB.

  4. Set innodb_flush_log_at_trx_commit = 2 during the restore if you use InnoDB tables.

  5. @Mark B adds a good suggestion below to disable keys during a restore. This is how you do it:

    ALTER TABLE <table-name> DISABLE KEYS;
    ...run your restore...
    ALTER TABLE <table-name> ENABLE KEYS;
    

    But that command affects only one table at a time. You'll have to issue a separate command for each table. That said, it's often the case that one table is much larger than the other tables, so you may need to disable keys only for that one large table.

    Also, if the SQL script containing your restore drops and recreates tables, this would circumvent disabling keys. You'll have to find some way to insert the commands to disable keys after the table is created and before rows are inserted. You may need to get creative with sed to preprocess the SQL script before feeding it to the mysql client.

  6. Use the Percona Server version of mysqldump, with the --innodb-optimize-keys option.
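
Taken together, tips 1-4 amount to wrapping the restore in a few session and server settings. Here is a minimal sketch of such a wrapper, assuming the dump is in a file named dump.sql (a placeholder) and that it is run from the mysql command-line client by a user with the SUPER privilege; the buffer sizes from tips 2-3 are startup options on MySQL 5.1, so they belong in my.cnf/my.ini rather than in this script:

    -- Relax durability during the load (tip 4); global scope, needs SUPER.
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;

    -- Skip constraint checking while rows are inserted (tip 1).
    SET FOREIGN_KEY_CHECKS = 0;
    SET UNIQUE_CHECKS = 0;
    SET AUTOCOMMIT = 0;   -- group the INSERTs into larger transactions

    SOURCE dump.sql;      -- mysql client command that executes the script

    COMMIT;

    -- Put the safety settings back once the restore finishes.
    SET UNIQUE_CHECKS = 1;
    SET FOREIGN_KEY_CHECKS = 1;
    SET GLOBAL innodb_flush_log_at_trx_commit = 1;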

爱本泡沫多脆弱 2024-09-08 15:11:27

There are plenty of tools around, but I would recommend the Navicat GUI for this.
In my experience it can run a 48 GB *.sql file in about 6 hours on a host with 8 GB of RAM.

Explanation (sort of) here:
[screenshot: Navicat's "Execute SQL File" dialog]
Right-click the chosen database, select "Execute SQL File", choose the file, enable "continue on error" if you want to, and finally run it.
I know the screenshot shows a MySQL DB, but this works on most popular DBMSs.

I seriously advise against "opening" a file of this size in the SQL query builder; it would choke the machine as the RAM filled up over and over again.

This also works with macOS as the host for the Navicat application, and once you are connected to a given DB server you can run the file wherever you want; so far it has worked well against RHEL, Ubuntu Server, Debian, and Windows Server.

晚雾 2024-09-08 15:11:27

Use MySQL's bulk import (LOAD DATA INFILE) for this.

无悔心 2024-09-08 15:11:27

Request just the table definitions, and the data in a .csv. Then do a bulk import.
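
For what it's worth, MySQL's bulk-import statement is LOAD DATA INFILE, which skips the overhead of parsing millions of individual INSERT statements. A minimal sketch, assuming a hypothetical table big_table created from the delivered table definitions and a made-up CSV path (adjust the delimiters to match the actual export):

    -- Bulk-load a CSV export into an existing table.
    -- The path, table name, and format options are assumptions.
    LOAD DATA INFILE 'C:/data/big_table.csv'
    INTO TABLE big_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\r\n'  -- Windows-style line endings
    IGNORE 1 LINES;             -- skip the header row, if present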
