Deleting a single row from a large MySQL table results in a "lock timeout"
I'm running MySQL 5.0.22 and have a really unwieldy table containing approximately 5 million rows.
Some, but not all, rows are referenced by a foreign key to another table.
All attempts to cull the unreferenced rows have failed so far, resulting in lock timeouts every time.
Copying the rows I want to keep into an alternate table also failed with a lock timeout.
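For reference, the statements that kept hitting the timeout would have looked something like the sketch below; the predicate and the alternate table name are my guesses from the schema in Edit #2, not the exact SQL that was run:

-- Assumed cull: delete rows that don't reference a customer.
DELETE FROM tunknowncustomer WHERE CUSTOMERUID IS NULL;

-- Assumed copy: move the rows worth keeping into an alternate table (hypothetical name).
INSERT INTO tunknowncustomer_keep
SELECT * FROM tunknowncustomer WHERE CUSTOMERUID IS NOT NULL;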
Suspiciously, even a statement that should finish instantaneously like the one below will also fail with "lock timeout":
DELETE FROM mytable WHERE uid_pk = 1 LIMIT 1;
...it's at this point that I've run out of ideas.
Edit: For what it's worth, I've been working through this on my dev system, so I'm the only one actually using the database at the moment, and there shouldn't be any locking going on outside of the SQL I'm running.
Any MySQL gurus out there have suggestions on how to tame this rogue table?
Edit #2: As requested, the table structure:
CREATE TABLE `tunknowncustomer` (
  `UID_PK` int(11) NOT NULL auto_increment,
  `UNKNOWNCUSTOMERGUID` varchar(36) NOT NULL,
  `CREATIONDATE` datetime NOT NULL,
  `EMAIL` varchar(100) default NULL,
  `CUSTOMERUID` int(11) default NULL,
  PRIMARY KEY (`UID_PK`),
  KEY `IUNKNOWCUST_CUID` (`CUSTOMERUID`),
  KEY `IUNKNOWCUST_UCGUID` (`UNKNOWNCUSTOMERGUID`),
  CONSTRAINT `tunknowncustomer_ibfk_1` FOREIGN KEY (`CUSTOMERUID`) REFERENCES `tcustomer` (`UID_PK`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Note, attempting to drop the FK also times out.
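For completeness, the constraint drop that also timed out would have been along these lines (the constraint name is taken from the CREATE TABLE above):

-- Even this schema change hit the lock wait timeout.
ALTER TABLE tunknowncustomer DROP FOREIGN KEY tunknowncustomer_ibfk_1;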
Comments (2)
I had the same problem with an InnoDB table. OPTIMIZE TABLE corrected it.
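A minimal sketch of that, using the table name from the question; whether it clears the timeouts will depend on what was actually holding the locks:

OPTIMIZE TABLE tunknowncustomer;
-- For InnoDB, OPTIMIZE TABLE is mapped to a full table rebuild plus an analyze step,
-- so on a 5-million-row table expect it to take time and need free disk space.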
Ok, I finally found an approach that worked to trim the unwanted rows from my large InnoDB table! Here's how I did it:
The whole effort looked somewhat like this:
...strangely enough, renaming the table was the only ALTER TABLE command that would finish right away.
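A sketch of that rename step; the _old suffix is my assumption, not the name actually used:

-- Rename the problem table out of the way; per the above, this ALTER returned immediately.
ALTER TABLE tunknowncustomer RENAME TO tunknowncustomer_old;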
...then a reboot, just in case my previous failed attempts were still blocking any new work...
...Once I got into deleting 100k rows at a time, InnoDB's execution time dropped with each successful command. I assume InnoDB starts doing read-aheads on large scans. Doing commits would reset the read-ahead data, so I spaced the COMMITs out to every 2 million rows until the job was done.
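A sketch of that batched delete, assuming autocommit is off, the renamed table name from above, and that the unwanted rows are the ones with no customer reference (the WHERE clause is my assumption; the answer doesn't show the exact predicate):

SET autocommit = 0;

-- Repeat this DELETE until it affects 0 rows;
-- issue a COMMIT roughly every 20 runs (~2 million rows).
DELETE FROM tunknowncustomer_old
 WHERE CUSTOMERUID IS NULL
 LIMIT 100000;

COMMIT;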
I wrapped up the task by copying the remaining rows into my "empty" clone table, then dropping the old (renamed) table.
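A sketch of that wrap-up, again using the assumed names from above and creating the clone here in case it doesn't already exist:

-- Recreate an empty table under the original name (CREATE TABLE ... LIKE copies
-- columns and indexes but not the foreign key), copy the survivors, drop the old table.
CREATE TABLE tunknowncustomer LIKE tunknowncustomer_old;

INSERT INTO tunknowncustomer
SELECT * FROM tunknowncustomer_old;

DROP TABLE tunknowncustomer_old;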
Not a graceful solution, and it doesn't address why deleting even a single row from a large table should fail, but at least I got the result I was looking for!