Deleting a single row from a large MySQL table causes a "lock timeout"

Posted 2024-11-05 11:17:10


I'm running MySQL 5.0.22 and have a really unwieldy table containing approximately 5 million rows.

Some, but not all rows are referenced by a foreign key to another table.

All attempts to cull the unreferenced rows have failed so far, resulting in lock-timeouts every time.

Copying the rows I want to an alternate table also failed with lock-timeout.

Suspiciously, even a statement like the one below, which should finish instantaneously, also fails with "lock timeout":

DELETE FROM mytable WHERE uid_pk = 1 LIMIT 1;

...it's at this point that I've run out of ideas.

Edit: For what it's worth, I've been working through this on my dev system, so I'm the only one actually using the database at the moment, and there shouldn't be any locking going on outside of the SQL I'm running.
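One way to double-check that, sketched here for reference (these are standard MySQL 5.0 commands, not part of the original post), is to look at the open connections, InnoDB's transaction and lock state, and the configured wait timeout:

-- list every connection and what it is currently executing
SHOW PROCESSLIST;

-- dump InnoDB's active transactions, lock waits, and latest deadlock (if any)
SHOW ENGINE INNODB STATUS\G

-- how long InnoDB waits for a row lock before giving up (default is 50 seconds)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';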

Any MySQL gurus out there have suggestions on how to tame this rogue table?

Edit #2: As requested, the table structure:

CREATE TABLE `tunknowncustomer` (
  `UID_PK` int(11) NOT NULL auto_increment,
  `UNKNOWNCUSTOMERGUID` varchar(36) NOT NULL,
  `CREATIONDATE` datetime NOT NULL,
  `EMAIL` varchar(100) default NULL,
  `CUSTOMERUID` int(11) default NULL,
  PRIMARY KEY  (`UID_PK`),
  KEY `IUNKNOWCUST_CUID` (`CUSTOMERUID`),
  KEY `IUNKNOWCUST_UCGUID` (`UNKNOWNCUSTOMERGUID`),
  CONSTRAINT `tunknowncustomer_ibfk_1` FOREIGN KEY (`CUSTOMERUID`) REFERENCES `tcustomer` (`UID_PK`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8$

Note, attempting to drop the FK also times out.
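(For reference, that drop attempt was presumably something along these lines, using the constraint name from the table definition above:)

-- drop the foreign key constraint shown in the CREATE TABLE output
ALTER TABLE tunknowncustomer DROP FOREIGN KEY tunknowncustomer_ibfk_1;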


Comments (2)

浮世清欢 2024-11-12 11:17:10


I had the same problem with an InnoDB table. OPTIMIZE TABLE corrected it.
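For reference, using the table from the question that would be something like the following; on InnoDB, OPTIMIZE TABLE is mapped to a table rebuild, which also rebuilds the indexes:

-- rebuilds the table and its indexes; on a 5-million-row table this can take a while
OPTIMIZE TABLE tunknowncustomer;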

乖乖兔^ω^ 2024-11-12 11:17:10


Ok, I finally found an approach that worked to trim the unwanted rows from my large InnoDB table! Here's how I did it:

  1. Stopped using MySQL Workbench (it has a hard-coded execution timeout of 30 seconds)
  2. Opened a command prompt
  3. Renamed the "full" table using ALTER TABLE
  4. Created an empty table using the original table name and structure
  5. Rebooted MySQL
  6. Turned OFF 'autocommit' with SET AUTOCOMMIT = 0
  7. Deleted a limited number of rows at a time, ramping up my limit after each success
  8. Did a COMMIT; in between delete statements since turning off autocommit really left me inside of one large transaction

The whole effort looked somewhat like this:

ALTER TABLE `ep411`.`tunknowncustomer` RENAME TO  `ep411`.`tunknowncustomer2`;

...strangely enough, renaming the table was the only ALTER TABLE command that would finish right away.

delimiter $

CREATE TABLE `tunknowncustomer` (
    ...
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8$

...then a reboot, just in case my previous failed attempts were blocking any new work...

SET AUTOCOMMIT = 0;

delete from tunknowncustomer2 where customeruid is null limit 1000;

delete from tunknowncustomer2 where customeruid is null limit 100000;

commit;

delete from tunknowncustomer2 where customeruid is null limit 1000000;

delete from tunknowncustomer2 where customeruid is null limit 1000000;

commit;

...Once I got into deleting 100k at a time, InnoDB's execution time dropped with each successful command. I assume InnoDB starts doing read-aheads on large scans. Doing commits would reset the read-ahead data, so I spaced out the COMMITs to every 2 million rows until the job was done.

I wrapped-up the task by copying the remaining rows into my "empty" clone table, then dropping the old (renamed) table.
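A rough sketch of that last step (the exact statements aren't in the answer; the IS NOT NULL filter is an assumption based on the deletes above):

-- copy the rows that still reference tcustomer into the fresh table
INSERT INTO tunknowncustomer SELECT * FROM tunknowncustomer2 WHERE customeruid IS NOT NULL;
COMMIT;

-- then get rid of the old, renamed table
DROP TABLE tunknowncustomer2;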

Not a graceful solution, and it doesn't address why deleting even a single row from a large table should fail, but at least I got the result I was looking for!
