MySQL: pause index rebuilding during a bulk INSERT without a TRANSACTION

Published 2024-11-01 01:29:56


I have a lot of data to INSERT LOW_PRIORITY into a table. As the index is rebuilt every time a row is inserted, this takes a long time. I know I could use transactions, but this is a case where I don't want the whole set to fail if just one row fails.

Is there any way to get MySQL to stop rebuilding indices on a specific table until I tell it that it can resume?

Ideally, I would like to insert 1,000 rows or so, let the index do its thing, and then insert the next 1,000 rows.

I cannot use INSERT DELAYED as my table type is InnoDB. Otherwise, INSERT DELAYED would be perfect for me.

Not that it matters, but I am using PHP/PDO to access MySQL. Any advice you could give would be appreciated. Thanks!


Answers (2)

自由如风 2024-11-08 01:29:56

ALTER TABLE tableName DISABLE KEYS;
-- perform inserts
ALTER TABLE tableName ENABLE KEYS;

This disables updates of all non-unique indexes. The disadvantage is that those indexes won't be used for SELECT queries either.

You can, however, use multi-row inserts (INSERT INTO table (...) VALUES (...), (...), (...)), which also update the indexes in batches.
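A minimal sketch of the multi-row approach, using Python's stdlib sqlite3 as a stand-in for MySQL (the asker uses PHP/PDO, but the SQL pattern and the parameter-binding idea are the same with any driver); the table and column names here are hypothetical:

```python
import sqlite3

# In-memory SQLite database standing in for MySQL; the multi-row
# INSERT pattern is identical with a MySQL driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(1, "a"), (2, "b"), (3, "c")]

# Build one multi-row INSERT: one (?, ?) group per row, so the
# index is updated once per statement rather than once per row.
placeholders = ", ".join(["(?, ?)"] * len(rows))
sql = f"INSERT INTO items (id, name) VALUES {placeholders}"
flat = [value for row in rows for value in row]  # flatten for binding

conn.execute(sql, flat)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 3
```

With a real MySQL table you would cap the batch size (e.g. around 1,000 rows per statement, as the asker suggests) to keep each statement within max_allowed_packet.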

未央 2024-11-08 01:29:56


AFAIK, for those that use InnoDB tables, if you don't want indexes to be rebuilt after each INSERT, you must use transactions.

For example, for inserting a batch of 1000 rows, use the following SQL:

SET autocommit=0;
-- insert the rows one after the other, or with multi-row inserts
COMMIT;

By disabling autocommit, a transaction will be started at the first INSERT. Then, the rows are inserted one after the other and at the end, the transaction is committed and the indexes are rebuilt.

If an error occurs during execution of one of the INSERTs, the transaction is not rolled back, but an error is reported to the client, which can choose to roll back or continue. Therefore, if you don't want the entire batch rolled back when one INSERT fails, you can log the failed INSERTs, keep inserting the remaining rows, and commit the transaction at the end.

However, take into account that wrapping the INSERTs in a transaction means you will not be able to see the inserted rows until the transaction is committed. It is possible to set the transaction isolation level for the SELECT to READ UNCOMMITTED, but in my tests the rows were not visible when the SELECT happened very close to the INSERT. See my post.
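The log-and-continue pattern described above can be sketched as follows, again using Python's stdlib sqlite3 as a stand-in for MySQL with autocommit disabled (table name and data are hypothetical; with a MySQL driver you would catch its IntegrityError equivalent):

```python
import sqlite3

# sqlite3 runs statements inside an implicit transaction until commit(),
# mirroring SET autocommit=0 on MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()

batch = [(1, "a"), (2, "b"), (2, "dup"), (3, "c")]  # (2, "dup") violates the PK
failed = []

for row in batch:
    try:
        conn.execute("INSERT INTO items (id, name) VALUES (?, ?)", row)
    except sqlite3.IntegrityError as err:
        # Log the failed INSERT and keep going instead of rolling back
        # the whole batch.
        failed.append((row, str(err)))

conn.commit()  # one commit for the entire batch

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count, len(failed))  # 3 1
```

The failing row is skipped and recorded, the three valid rows survive the single commit, and the index is maintained once at commit time rather than per row.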
