Deadlocks in MySQL



We're very frustratingly getting deadlocks in MySQL. It isn't because of exceeding a lock timeout as the deadlocks happen instantly when they do happen. Here's the SQL code that is executing on 2 separate threads (with 2 separate connections from the connection pool) that produces a deadlock:

UPDATE Sequences SET Counter = LAST_INSERT_ID(Counter + 1) WHERE Sequence IS NULL

Sequences table has 2 columns: Sequence and Counter

The LAST_INSERT_ID allows us to retrieve this updated counter value as per MySQL's recommendation. That works perfectly for us, but we get these deadlocks! Why are we getting them, and how can we avoid them?

Thanks so much for any help with this.

EDIT: this is all in a transaction (required since I'm using Hibernate) and AUTO_INCREMENT doesn't make sense here. I should've been more clear. The Sequences table holds many sequences (in our case about 100 million of them). I need to increment a counter and retrieve that value. AUTO_INCREMENT plays no role in all of this, this has nothing to do with Ids or PRIMARY KEYs.
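For reference, a minimal sketch of the pattern described above, assuming a schema along these lines (the table definition and column types are guesses based on the two columns mentioned; the UPDATE and the follow-up SELECT are the documented LAST_INSERT_ID(expr) idiom):

-- Assumed shape of the Sequences table (column types are assumptions)
CREATE TABLE Sequences (
    Sequence VARCHAR(64) NULL,         -- sequence name; NULL for the row targeted below
    Counter  BIGINT NOT NULL DEFAULT 0
) ENGINE=InnoDB;

-- Bump the counter and tag the new value for this connection ...
UPDATE Sequences SET Counter = LAST_INSERT_ID(Counter + 1) WHERE Sequence IS NULL;

-- ... then read it back on the same connection
SELECT LAST_INSERT_ID();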


落花浅忆 2024-09-10 14:22:03


Wrap your SQL statements in a transaction. If you aren't using a transaction you will get a race condition on LAST_INSERT_ID.

But really, you should make the counter field AUTO_INCREMENT, so you let MySQL handle this.

Your third solution is to use LOCK TABLES to lock the Sequences table so that no other process can access it concurrently. This is probably the slowest solution unless you are using InnoDB.
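A rough sketch of the LOCK TABLES variant mentioned above (table and column names are the ones from the question; note that LOCK TABLES implicitly commits any open transaction, so it does not combine well with a Hibernate-managed transaction):

-- Serialize access with an exclusive table lock so only one client updates at a time
LOCK TABLES Sequences WRITE;

UPDATE Sequences SET Counter = LAST_INSERT_ID(Counter + 1) WHERE Sequence IS NULL;
SELECT LAST_INSERT_ID();

UNLOCK TABLES;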

尬尬 2024-09-10 14:22:03


Deadlocks are a normal part of any transactional database, and can occur at any time. Generally, you are supposed to write your application code to handle them, as there is no surefire way to guarantee that you will never get a deadlock. That being said, there are situations that increase the likelihood of deadlocks occurring, such as the use of large transactions, and there are things you can do to mitigate their occurrence.

First thing, you should read this manual page to get a better understanding of how you can avoid them.

Second, if all you're doing is updating a counter, you should really, really, really be using an AUTO_INCREMENT column for Counter rather than relying on a "select then update" process, which as you have seen is a race condition that can produce deadlocks. Essentially, the AUTO_INCREMENT property of your table column will act as a counter for you.

Finally, I'm going to assume that you have that update statement inside a transaction, as this would produce frequent deadlocks. If you want to see it in action, try the experiment listed here. That's exactly what's happening with your code... two threads are attempting to update the same records at the same time before one of them is committed. Instant deadlock.

Your best solution is to figure out how to do it without a transaction, and AUTO_INCREMENT will let you do that.
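If an AUTO_INCREMENT generator did fit the use case (the asker notes above that it doesn't for their many-sequences table), a minimal sketch with a hypothetical generator table might look like:

-- Hypothetical single-sequence generator: MySQL hands out the numbers itself
CREATE TABLE SequenceGenerator (
    id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB;

-- Reserve the next value: insert a row, then read the generated id
INSERT INTO SequenceGenerator (id) VALUES (NULL);
SELECT LAST_INSERT_ID();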

℉服软 2024-09-10 14:22:03


No other SQL involved ? Seems a bit unlikely to me.

The 'where sequence is null' probably causes a full table scan, causing read locks to be acquired on every row/page/....

This becomes a problem if (your particular engine does not use MVCC and) there was an INSERT that preceded your update within the same transaction. That INSERT would have acquired an exclusive lock on some resource (row/page/...), which will cause any other thread's attempt to acquire a read lock on it to wait. So two connections can first do their insert, causing each of them to hold an exclusive lock on some small portion of the table, and then they both try to do your update, each requiring a read lock on the entire table.
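If the full scan is indeed the culprit, one mitigation consistent with this answer would be an index that lets the UPDATE find (and lock) only the matching rows; the index name below is made up:

-- Hypothetical index so the WHERE clause can use an index lookup instead of
-- scanning, and locking, the whole table
CREATE INDEX idx_sequences_sequence ON Sequences (Sequence);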

寒江雪… 2024-09-10 14:22:03


I managed to do this using a MyISAM table for the sequences.

I then have a function called getNextCounter that does the following:

  • performs a SELECT sequence_value FROM sequences where sequence_name = 'test';
  • performs the update: UPDATE sequences SET sequence_value = LAST_INSERT_ID(last_retrieved_value + 1) WHERE sequence_name = 'test' and sequence_value = last retrieved value;
  • repeat in a loop until both queries are successful, then retrieve the last insert id.

As it is a MyISAM table it won't be part of your transaction, so the operation won't cause any deadlocks.
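A sketch of the two statements inside that retry loop, using a MySQL user variable to carry the value read in step 1 (names follow the answer; the loop itself lives in application code such as the getNextCounter function described above):

-- Step 1: read the current value for this sequence
SELECT sequence_value INTO @val FROM sequences WHERE sequence_name = 'test';

-- Step 2: optimistic update; affects 0 rows if another client got there first,
-- in which case the application re-reads and retries
UPDATE sequences
   SET sequence_value = LAST_INSERT_ID(@val + 1)
 WHERE sequence_name = 'test'
   AND sequence_value = @val;

-- Step 3: once the UPDATE reports 1 affected row, fetch the reserved value
SELECT LAST_INSERT_ID();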
