Faster way to delete matching rows?

Posted 2024-07-18


I'm a relative novice when it comes to databases. We are using MySQL and I'm currently trying to speed up a SQL statement that seems to take a while to run. I looked around on SO for a similar question but didn't find one.

The goal is to remove all the rows in table A that have a matching id in table B.

I'm currently doing the following:

DELETE FROM a WHERE EXISTS (SELECT b.id FROM b WHERE b.id = a.id);

There are approximately 100K rows in table a and about 22K rows in table b. The column 'id' is the PK for both tables.

This statement takes about 3 minutes to run on my test box - Pentium D, XP SP3, 2GB ram, MySQL 5.0.67. This seems slow to me. Maybe it isn't, but I was hoping to speed things up. Is there a better/faster way to accomplish this?


EDIT:

Some additional information that might be helpful. Tables A and B have the same structure, since I did the following to create table B:

CREATE TABLE b LIKE a;

Table a (and thus table b) has a few indexes to help speed up queries that are made against it. Again, I'm a relative novice at DB work and still learning. I don't know how much of an effect, if any, this has on things. I assume that it does have an effect as the indexes have to be cleaned up too, right? I was also wondering if there were any other DB settings that might affect the speed.

Also, I'm using InnoDB.


Here is some additional info that might be helpful to you.

Table A has a structure similar to this (I've sanitized this a bit):

DROP TABLE IF EXISTS `frobozz`.`a`;
CREATE TABLE  `frobozz`.`a` (
  `id` bigint(20) unsigned NOT NULL auto_increment,
  `fk_g` varchar(30) NOT NULL,
  `h` int(10) unsigned default NULL,
  `i` longtext,
  `j` bigint(20) NOT NULL,
  `k` bigint(20) default NULL,
  `l` varchar(45) NOT NULL,
  `m` int(10) unsigned default NULL,
  `n` varchar(20) default NULL,
  `o` bigint(20) NOT NULL,
  `p` tinyint(1) NOT NULL,
  PRIMARY KEY  USING BTREE (`id`),
  KEY `idx_l` (`l`),
  KEY `idx_h` USING BTREE (`h`),
  KEY `idx_m` USING BTREE (`m`),
  KEY `idx_fk_g` USING BTREE (`fk_g`),
  KEY `fk_g_frobozz` (`id`,`fk_g`),
  CONSTRAINT `fk_g_frobozz` FOREIGN KEY (`fk_g`) REFERENCES `frotz` (`g`)
) ENGINE=InnoDB AUTO_INCREMENT=179369 DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;

I suspect that part of the issue is the number of indexes on this table.
Table B looks similar to table A, though it only contains the columns id and h.

Also, the profiling results are as follows:

starting 0.000018
checking query cache for query 0.000044
checking permissions 0.000005
Opening tables 0.000009
init 0.000019
optimizing 0.000004
executing 0.000043
end 0.000005
end 0.000002
query end 0.000003
freeing items 0.000007
logging slow query 0.000002
cleaning up 0.000002

SOLVED

Thanks to all the responses and comments. They certainly got me to think about the problem. Kudos to dotjoe for getting me to step away from the problem by asking the simple question "Do any other tables reference a.id?"

The problem was that there was a DELETE TRIGGER on table A which called a stored procedure to update two other tables, C and D. Table C had a FK back to a.id and after doing some stuff related to that id in the stored procedure, it had the statement,

DELETE FROM c WHERE c.other_id = theId;

I looked into the EXPLAIN statement and rewrote this as,

EXPLAIN SELECT * FROM c WHERE c.other_id = 12345;

So, I could see what this was doing and it gave me the following info:

id            1
select_type   SIMPLE
table         c
type          ALL
possible_keys NULL
key           NULL
key_len       NULL
ref           NULL
rows          2633
Extra         using where

This told me that it was a painful operation, and since it was going to get called 22,500 times (for the given set of data being deleted), that was the problem. Once I created an INDEX on that other_id column and reran the EXPLAIN, I got:

id            1
select_type   SIMPLE
table         c
type          ref
possible_keys Index_1
key           Index_1
key_len       8
ref           const
rows          1
Extra         

Much better, in fact really great.

I added that Index_1 and my delete times are in line with the times reported by mattkemp. This was a really subtle error on my part due to shoe-horning some additional functionality at the last minute. It turned out that most of the suggested alternative DELETE/SELECT statements, as Daniel stated, ended up taking essentially the same amount of time and as soulmerge mentioned, the statement was pretty much the best I was going to be able to construct based on what I needed to do. Once I provided an index for this other table C, my DELETEs were fast.
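The before-and-after effect of adding that index can be reproduced in miniature. The sketch below uses Python's sqlite3 module purely for illustration (SQLite's EXPLAIN QUERY PLAN output differs from MySQL's EXPLAIN, and the table here is a made-up stand-in for table C), but the full-scan-versus-index-lookup lesson is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE c (id INTEGER PRIMARY KEY, other_id INTEGER)")
conn.executemany("INSERT INTO c (other_id) VALUES (?)",
                 [(i % 100,) for i in range(1000)])

def plan(sql):
    # Column 3 of EXPLAIN QUERY PLAN output is the human-readable detail.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM c WHERE other_id = 42")
conn.execute("CREATE INDEX idx_other_id ON c (other_id)")
after = plan("SELECT * FROM c WHERE other_id = 42")

print(before)  # reports a scan of the whole table
print(after)   # reports a search using idx_other_id
```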

Postmortem:
Two lessons learned came out of this exercise. First, it is clear that I didn't leverage the power of the EXPLAIN statement to get a better idea of the impact of my SQL queries. That's a rookie mistake, so I'm not going to beat myself up about that one. I'll learn from that mistake. Second, the offending code was the result of a 'get it done quick' mentality and inadequate design/testing led to this problem not showing up sooner. Had I generated several sizable test data sets to use as test input for this new functionality, I'd have not wasted my time nor yours. My testing on the DB side was lacking the depth that my application side has in place. Now I've got the opportunity to improve that.

Reference: EXPLAIN Statement

Comments (14)

椵侞 2024-07-25 07:27:23


Deleting data from InnoDB is the most expensive operation you can request of it. As you already discovered the query itself is not the problem - most of them will be optimized to the same execution plan anyway.

While it may be hard to understand why DELETE, of all operations, is the slowest, there is a rather simple explanation. InnoDB is a transactional storage engine. That means that if your query were aborted halfway through, all records would still be in place as if nothing had happened. Once it completes, they are all gone in the same instant. During the DELETE, other clients connecting to the server will see the records until your DELETE completes.

To achieve this, InnoDB uses a technique called MVCC (Multi Version Concurrency Control). What it basically does is to give each connection a snapshot view of the whole database as it was when the first statement of the transaction started. To achieve this, every record in InnoDB internally can have multiple values - one for each snapshot. This is also why COUNTing on InnoDB takes some time - it depends on the snapshot state you see at that time.

For your DELETE transaction, each and every record that is identified according to your query conditions, gets marked for deletion. As other clients might be accessing the data at the same time, it cannot immediately remove them from the table, because they have to see their respective snapshot to guarantee the atomicity of the deletion.

Once all records have been marked for deletion, the transaction is successfully committed. And even then they cannot be immediately removed from the actual data pages, before all other transactions that worked with a snapshot value before your DELETE transaction, have ended as well.

So in fact your 3 minutes are not really that slow, considering the fact that all records have to be modified in order to prepare them for removal in a transaction safe way. Probably you will "hear" your hard disk working while the statement runs. This is caused by accessing all the rows.
To improve performance you can try to increase InnoDB buffer pool size for your server and try to limit other access to the database while you DELETE, thereby also reducing the number of historic versions InnoDB has to maintain per record.
With the additional memory InnoDB might be able to read your table (mostly) into memory and avoid some disk seeking time.
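As a concrete starting point for the buffer pool suggestion, a configuration fragment might look like the following. The value shown is purely illustrative and has to be sized to your own machine; on a 2GB box like the poster's, a few hundred MB may be a reasonable share for MySQL 5.0:

```ini
# my.cnf -- illustrative value only, size to your machine
[mysqld]
innodb_buffer_pool_size = 512M
```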

霓裳挽歌倾城醉 2024-07-25 07:27:23


Try this:

DELETE a
FROM a
INNER JOIN b
 on a.id = b.id

Subqueries tend to be slower than joins because they are run for each record in the outer query.

小嗲 2024-07-25 07:27:23


This is what I always do, when I have to operate with super large data (here: a sample test table with 150000 rows):

drop table if exists employees_bak;
create table employees_bak like employees;
insert into employees_bak 
    select * from employees
    where emp_no > 100000;

rename table employees to employees_todelete;
rename table employees_bak to employees;
drop table employees_todelete;

In this case the SQL filters 50000 rows into the backup table.
The query cascade takes only 5 seconds on my slow machine.
You can replace the INSERT INTO ... SELECT with your own filter query.

That is the trick for performing mass deletions on big databases! ;=)
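The copy-rename-drop trick can be sketched end to end. The example below uses Python's sqlite3 module so it is self-contained (SQLite needs two ALTER TABLE ... RENAME statements where MySQL's RENAME TABLE can swap both names at once); the employees table and the emp_no cutoff mirror the answer's example, just scaled down:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (emp_no INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(i, "emp%d" % i) for i in range(1, 151)])

# 1) Copy only the rows we want to KEEP into the backup table.
conn.execute("CREATE TABLE employees_bak (emp_no INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO employees_bak SELECT * FROM employees WHERE emp_no > 100")

# 2) Swap the tables, then throw the old data away in one cheap DROP.
conn.execute("ALTER TABLE employees RENAME TO employees_todelete")
conn.execute("ALTER TABLE employees_bak RENAME TO employees")
conn.execute("DROP TABLE employees_todelete")

kept = conn.execute("SELECT COUNT(*) FROM employees").fetchone()[0]
print(kept)  # 50 of the original 150 rows remain
```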

时光瘦了 2024-07-25 07:27:23


Your time of three minutes seems really slow. My guess is that the id column is not being indexed properly. If you could provide the exact table definition you're using that would be helpful.

I created a simple Python script to produce test data and ran multiple different versions of the delete query against the same data set. Here are my table definitions:

drop table if exists a;
create table a
 (id bigint unsigned  not null primary key,
  data varchar(255) not null) engine=InnoDB;

drop table if exists b;
create table b like a;

I then inserted 100k rows into a and 25k rows into b (22.5k of which were also in a). Here are the results of the various delete commands. I dropped and repopulated the tables between runs, by the way.

mysql> DELETE FROM a WHERE EXISTS (SELECT b.id FROM b WHERE a.id=b.id);
Query OK, 22500 rows affected (1.14 sec)

mysql> DELETE FROM a USING a LEFT JOIN b ON a.id=b.id WHERE b.id IS NOT NULL;
Query OK, 22500 rows affected (0.81 sec)

mysql> DELETE a FROM a INNER JOIN b on a.id=b.id;
Query OK, 22500 rows affected (0.97 sec)

mysql> DELETE QUICK a.* FROM a,b WHERE a.id=b.id;
Query OK, 22500 rows affected (0.81 sec)

All the tests were run on an Intel Core2 quad-core 2.5GHz with 2GB RAM, Ubuntu 8.10 and MySQL 5.0. Note that the execution of a single SQL statement is still single-threaded.


Update:

I updated my tests to use itsmatt's schema. I modified it slightly by removing the auto-increment (I'm generating synthetic data) and the character set encoding (it wasn't working - I didn't dig into it).

Here's my new table definitions:

drop table if exists a;
drop table if exists b;
drop table if exists c;

create table c (id varchar(30) not null primary key) engine=InnoDB;

create table a (
  id bigint(20) unsigned not null primary key,
  c_id varchar(30) not null,
  h int(10) unsigned default null,
  i longtext,
  j bigint(20) not null,
  k bigint(20) default null,
  l varchar(45) not null,
  m int(10) unsigned default null,
  n varchar(20) default null,
  o bigint(20) not null,
  p tinyint(1) not null,
  key l_idx (l),
  key h_idx (h),
  key m_idx (m),
  key c_id_idx (id, c_id),
  key c_id_fk (c_id),
  constraint c_id_fk foreign key (c_id) references c(id)
) engine=InnoDB row_format=dynamic;

create table b like a;

I then reran the same tests with 100k rows in a and 25k rows in b (and repopulating between runs).

mysql> DELETE FROM a WHERE EXISTS (SELECT b.id FROM b WHERE a.id=b.id);
Query OK, 22500 rows affected (11.90 sec)

mysql> DELETE FROM a USING a LEFT JOIN b ON a.id=b.id WHERE b.id IS NOT NULL;
Query OK, 22500 rows affected (11.48 sec)

mysql> DELETE a FROM a INNER JOIN b on a.id=b.id;
Query OK, 22500 rows affected (12.21 sec)

mysql> DELETE QUICK a.* FROM a,b WHERE a.id=b.id;
Query OK, 22500 rows affected (12.33 sec)

As you can see this is quite a bit slower than before, probably due to the multiple indexes. However, it is nowhere near the three minute mark.

Something else you might want to look at is moving the longtext field to the end of the schema. I seem to remember that MySQL performs better if all the size-restricted fields come first and text, blob, etc. are at the end.

梦太阳 2024-07-25 07:27:23


You're doing your subquery on 'b' for every row in 'a'.

Try:

DELETE FROM a USING a LEFT JOIN b ON a.id = b.id WHERE b.id IS NOT NULL;
再浓的妆也掩不了殇 2024-07-25 07:27:23


Try this out:

DELETE QUICK A.* FROM A,B WHERE A.ID=B.ID

It is much faster than normal queries.

See the syntax reference: http://dev.mysql.com/doc/refman/5.0/en/delete.html

稀香 2024-07-25 07:27:23


I know this question has been pretty much solved due to OP's indexing omissions but I would like to offer this additional advice, which is valid for a more generic case of this problem.

I have personally dealt with having to delete many rows from one table that exist in another and in my experience it's best to do the following, especially if you expect lots of rows to be deleted. This technique most importantly will improve replication slave lag, as the longer each single mutator query runs, the worse the lag would be (replication is single threaded).

So, here it is: do a SELECT first, as a separate query, remembering the IDs returned in your script/application, then continue on deleting in batches (say, 50,000 rows at a time).
This will achieve the following:

  • each one of the delete statements will not lock the table for too long, thus not letting replication lag to get out of control. It is especially important if you rely on your replication to provide you relatively up-to-date data. The benefit of using batches is that if you find that each DELETE query still takes too long, you can adjust it to be smaller without touching any DB structures.
  • another benefit of using a separate SELECT is that the SELECT itself might take a long time to run, especially if it can't for whatever reason use the best DB indexes. If the SELECT is inner to a DELETE, when the whole statement migrates to the slaves, it will have to do the SELECT all over again, potentially lagging the slaves because it has to do the long select all over again. Slave lag, again, suffers badly. If you use a separate SELECT query, this problem goes away, as all you're passing is a list of IDs.

Let me know if there's a fault in my logic somewhere.

For more discussion on replication lag and ways to fight it, similar to this one, see MySQL Slave Lag (Delay) Explained And 7 Ways To Battle It

P.S. One thing to be careful about is, of course, potential edits to the table between the times the SELECT finishes and DELETEs start. I will let you handle such details by using transactions and/or logic pertinent to your application.
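The select-first, delete-in-batches pattern can be sketched as follows, with Python's sqlite3 standing in for a real MySQL connection. The batch size of 3 is only so the loop runs a few times on toy data; in production you would use something like the 50,000 suggested above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO a VALUES (?)", [(i,) for i in range(10)])

# 1) One SELECT up front; all that ever travels afterwards is a list of ids.
doomed = [row[0] for row in conn.execute("SELECT id FROM a WHERE id % 2 = 0")]

# 2) Short DELETEs in batches, so no single statement runs (and locks) for long.
BATCH = 3  # toy size; the answer suggests e.g. 50,000 in production
for i in range(0, len(doomed), BATCH):
    chunk = doomed[i:i + BATCH]
    placeholders = ", ".join("?" * len(chunk))
    conn.execute("DELETE FROM a WHERE id IN (%s)" % placeholders, chunk)
    conn.commit()  # one small transaction per batch

remaining = [row[0] for row in conn.execute("SELECT id FROM a ORDER BY id")]
print(remaining)  # only the odd ids survive
```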

鸩远一方 2024-07-25 07:27:23
DELETE FROM a WHERE id IN (SELECT id FROM b)
悲念泪 2024-07-25 07:27:23


Maybe you should rebuild the indices before running such a huge query. Well, you should rebuild them periodically anyway.

REPAIR TABLE a QUICK;
REPAIR TABLE b QUICK;

and then run any of the above queries (i.e.)

DELETE FROM a WHERE id IN (SELECT id FROM b)
臻嫒无言 2024-07-25 07:27:23


The query itself is already in an optimal form; updating the indexes is what causes the whole operation to take that long. You could disable the keys on that table before the operation, which should speed things up. You can turn them back on later if you don't need them immediately.

Another approach would be adding a deleted flag column to your table and adjusting your other queries so they take that value into account. The fastest boolean type in MySQL is CHAR(0) NULL (true = '', false = NULL). Marking the rows would be a fast operation, and you can delete the flagged values afterwards.

The same thoughts expressed in sql statements:

ALTER TABLE a ADD COLUMN deleted CHAR(0) NULL DEFAULT NULL;

-- The following query should be faster than the delete statement:
UPDATE a INNER JOIN b ON a.id = b.id SET a.deleted = '';

-- This is the catch, you need to alter the rest
-- of your queries to take the new column into account:
SELECT * FROM a WHERE deleted IS NULL;

-- You can then issue the following queries in a cronjob
-- to clean up the tables:
DELETE FROM a WHERE deleted IS NOT NULL;

If that, too, is not what you want, you can have a look at what the MySQL docs have to say about the speed of DELETE statements.
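A runnable miniature of the flag-column approach, using Python's sqlite3 (SQLite has neither CHAR(0) nor multi-table UPDATE, so a nullable TEXT column and an IN-subquery stand in for the MySQL statements above; the pattern is unchanged):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY, deleted TEXT DEFAULT NULL);
    CREATE TABLE b (id INTEGER PRIMARY KEY);
    INSERT INTO a (id) VALUES (1), (2), (3), (4);
    INSERT INTO b (id) VALUES (2), (4);
""")

# Fast soft delete: mark instead of removing ('' = deleted, NULL = live).
conn.execute("UPDATE a SET deleted = '' WHERE id IN (SELECT id FROM b)")

# The catch: other queries must now filter on the flag.
live = [row[0] for row in conn.execute("SELECT id FROM a WHERE deleted IS NULL")]
print(live)  # the rows not matched in b

# A cron job can purge the flagged rows later, off the hot path.
conn.execute("DELETE FROM a WHERE deleted IS NOT NULL")
```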

遗忘曾经 2024-07-25 07:27:23


BTW, after posting the above on my blog, Baron Schwartz from Percona brought to my attention that his maatkit already has a tool just for this purpose - mk-archiver. http://www.maatkit.org/doc/mk-archiver.html.

It is most likely your best tool for the job.

拥有 2024-07-25 07:27:23


Obviously the SELECT query that builds the foundation of your DELETE operation is quite fast so I'd think that either the foreign key constraint or the indexes are the reasons for your extremely slow query.

Try

SET foreign_key_checks = 0;
/* ... your query ... */
SET foreign_key_checks = 1;

This disables foreign key checks. Unfortunately, you cannot disable key updates (at least, I don't know how) on an InnoDB table. With a MyISAM table you could do something like

ALTER TABLE a DISABLE KEYS
/* ... your query ... */
ALTER TABLE a ENABLE KEYS 

I actually did not test if these settings would affect the query duration. But it's worth a try.

忘你却要生生世世 2024-07-25 07:27:23


Connect to the database in a terminal and execute the commands below, looking at each result time; you'll find that the times to delete 10, 100, 1000, 10000, and 100000 records do not grow proportionally.

  DELETE FROM #{$table_name} WHERE id < 10;
  DELETE FROM #{$table_name} WHERE id < 100;
  DELETE FROM #{$table_name} WHERE id < 1000;
  DELETE FROM #{$table_name} WHERE id < 10000;
  DELETE FROM #{$table_name} WHERE id < 100000;

Deleting 100 thousand records takes less than ten times as long as deleting 10 thousand.
So, besides finding a way to delete records faster, there are some indirect methods.

1. We can rename table_name to table_name_bak, then select the records we want to keep from table_name_bak back into a new table_name.

2. To delete 10000 records, we can delete 1000 records 10 times. Here is an example Ruby script that does this.

#!/usr/bin/env ruby
require 'mysql2'


$client = Mysql2::Client.new(
  :as => :array,
  :host => '10.0.0.250',
  :username => 'mysql',
  :password => '123456',
  :database => 'test'
)


$ids = (1..1000000).to_a
$table_name = "test"

until $ids.empty?
  ids = $ids.shift(1000).join(", ")
  puts "delete =================="
  $client.query("
                DELETE FROM #{$table_name}
                WHERE id IN ( #{ids} )
                ")
end
说不完的你爱 2024-07-25 07:27:23


The basic technique for deleting multiple rows from a single MySQL table through the id field:

DELETE FROM tbl_name WHERE id >= 100 AND id <= 200;

This query deletes the rows whose id falls between 100 and 200 from the given table.
