MySQL: pause index rebuild on bulk INSERT without TRANSACTION
I have a lot of data to INSERT LOW_PRIORITY into a table. As the index is rebuilt every time a row is inserted, this takes a long time. I know I could use transactions, but this is a case where I don't want the whole set to fail if just one row fails.
Is there any way to get MySQL to stop rebuilding indices on a specific table until I tell it that it can resume?
Ideally, I would like to insert 1,000 rows or so, let the index do its thing, and then insert the next 1,000 rows.
I cannot use INSERT DELAYED as my table type is InnoDB. Otherwise, INSERT DELAYED would be perfect for me.
Not that it matters, but I am using PHP/PDO to access MySQL. Any advice you could give would be appreciated. Thanks!
2 Answers
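Presumably the statement being described here is MySQL's ALTER TABLE ... DISABLE KEYS (my_table is a placeholder name; note that DISABLE KEYS only takes effect on MyISAM tables, so it would not help with the InnoDB table in the question):

    ALTER TABLE my_table DISABLE KEYS;
    -- ... run the bulk INSERTs here ...
    ALTER TABLE my_table ENABLE KEYS;   -- the non-unique indexes are rebuilt here, in one pass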
This disables updating of all non-unique indexes. The disadvantage is that those indexes won't be used for select queries either.

You can, however, use multi-row inserts (INSERT INTO table (...) VALUES (...), (...), (...)), which will also update the indexes in batches.
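For example, a single multi-row INSERT, with placeholder table and column names:

    INSERT INTO my_table (id, name) VALUES
        (1, 'alpha'),
        (2, 'beta'),
        (3, 'gamma');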
AFAIK, for those that use InnoDB tables, if you don't want indexes to be rebuilt after each INSERT, you must use transactions.

For example, for inserting a batch of 1,000 rows, use the following SQL:
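Roughly like this, where my_table and its columns are placeholders:

    SET autocommit = 0;   -- the next INSERT implicitly opens a transaction
    INSERT INTO my_table (col1, col2) VALUES (1, 'a');
    INSERT INTO my_table (col1, col2) VALUES (2, 'b');
    -- ... up to 1,000 rows ...
    COMMIT;               -- make the whole batch durable at once
    SET autocommit = 1;   -- restore the default behaviour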
By disabling autocommit, a transaction is started at the first INSERT. The rows are then inserted one after the other, and at the end the transaction is committed and the indexes are rebuilt.

If an error occurs during execution of one of the INSERTs, the transaction is not rolled back, but the error is reported to the client, which has the choice of rolling back or continuing. Therefore, if you don't want the entire batch to be rolled back when one INSERT fails, you can log the INSERTs that failed, continue inserting the remaining rows, and commit the transaction at the end.
However, take into account that wrapping the INSERTs in a transaction means you will not be able to see the inserted rows until the transaction is committed. It is possible to set the transaction isolation level for the SELECT to READ UNCOMMITTED, but as I've tested it, the rows are not visible when the SELECT happens very close to the INSERT. See my post.