Multithreading: avoiding and handling database deadlocks

Published 2024-08-13 20:14:16


I am looking for a good strategy for dealing with database deadlocks from within a Java 6 application; several parallel threads could, potentially, write into the same table at the same time. The database (Ingres RDBMS) will randomly kill one of the sessions if it detects a deadlock.

What would be an acceptable technique to deal with the deadlock situation, given the following requirements?

  • the total elapsed time should be kept as small as reasonably possible
  • killing a session will incur a significant (measurable) rollback
  • the threads have no way to communicate with each other, i.e. the strategy should be autonomous

So far, the strategy I came up with is something along these lines:

short attempts = 0;
boolean success = false;
long delayMs = 0;

Random random = new Random();
do {
    try {
        // insert loads of records in table 'x'
        success = true;
    } catch (ConcurrencyFailureException e) {
        attempts++;
        success = false;
        delayMs = 1000 * attempts + random.nextInt(1000 * attempts);

        try {
            Thread.sleep(delayMs);
        } catch (InterruptedException ie) {
        }
    }
} while (!success);

Can it be improved in any way, e.g. by waiting a fixed (magic number) amount of seconds?
Is there a different strategy that will produce better results?

Note: Several database level techniques will be used to ensure deadlocks are, in practice, very rare. Also, the application will attempt to avoid scheduling threads that write into the same table at the same time. The situation above will be just a “worst case scenario”.

Note: The table in which records are inserted is organised as a heap partitioned table and has no indexes; each thread will insert records in its own partition.


千と千尋 2024-08-20 20:14:16


A commonly used approach is some form of exponential back-off. Rather than your 1000*attempts+random approach, make the delay an exponential function of the number of attempts. This ensures minimal latency in the first one or two attempts, where it might have just been bad luck that you deadlocked, but gives you much bigger delays later, when it is clear that the connection really is congested.
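As a rough illustration, the back-off could look like the sketch below. The exception type mirrors the one in the question; the attempt cap and delay constants are arbitrary illustrative choices, not anything prescribed by the answer:

```java
import java.util.concurrent.ThreadLocalRandom;

public class BackoffRetry {
    /** Hypothetical stand-in for the ConcurrencyFailureException in the question. */
    static class ConcurrencyFailureException extends RuntimeException {}

    static final int MAX_ATTEMPTS = 6;      // illustrative cap, not from the answer
    static final long BASE_DELAY_MS = 50;   // small first delay
    static final long MAX_DELAY_MS = 5_000; // upper bound on any single wait

    /** Runs the action, retrying on deadlock with exponential back-off plus jitter. */
    static void runWithRetry(Runnable insertBatch) throws InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                insertBatch.run(); // e.g. insert loads of records in table 'x'
                return;
            } catch (ConcurrencyFailureException e) {
                if (attempt >= MAX_ATTEMPTS) {
                    throw e; // give up: repeated deadlocks suggest real congestion
                }
                // delay grows as base * 2^(attempt-1), capped; half of it is
                // fixed, the other half is random jitter to spread the threads out
                long exp = Math.min(MAX_DELAY_MS, BASE_DELAY_MS << (attempt - 1));
                long delay = exp / 2 + ThreadLocalRandom.current().nextLong(exp / 2 + 1);
                Thread.sleep(delay);
            }
        }
    }
}
```

The jitter matters: without it, threads that deadlocked at the same moment would all retry at the same moment and collide again.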

Of course, another approach would be to try to arrange your database accesses so that deadlocks are less likely to occur. But without knowing what your queries do (and how, and when they're executed), it's impossible to say if that can be done.

成熟稳重的好男人 2024-08-20 20:14:16


That's the way we did it. Loop and retry the transaction until it finishes.

We didn't mess with random delays.

Also, we did the commit inside the try block and the rollback in the exception handler.

When you have multiple lockable resources and multiple concurrent transactions, deadlock is unavoidable. It's a logical consequence of contention for locks.

If you avoid contention for locks (i.e., pessimistic table-level locking) then you also tend to prevent concurrency. If you can define transactions which don't contend for locks, you can avoid deadlock. Concurrent access to the same table, however, is pretty much the definition of deadlock.

When loading, inserts (especially into a HEAP table) can (often) proceed in parallel without many contention issues. If you delay building the indices, then there are no other updates going on during the insert.

So, by dropping the indexes, changing the organization to a heap, loading with multiple concurrent processes (or threads; it's usually faster to have multiple processes), and then building your indices (and possibly reorganizing the table), you may be able to avoid deadlocks.

When doing updates or deletes, not much helps.
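The load-then-index sequence described above can be sketched with an ExecutorService: run one loader per partition, wait for all of them, and only then build the indexes. The Runnable bodies are placeholders for the real JDBC/SQL work, and the names are illustrative:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelLoad {
    /**
     * Runs one loader per partition in parallel, then builds indexes only after
     * every loader has finished, so the load itself never contends on index pages.
     */
    static void loadThenIndex(List<Runnable> partitionLoaders, Runnable buildIndexes)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(partitionLoaders.size());
        for (Runnable loader : partitionLoaders) {
            pool.submit(loader); // each thread inserts into its own partition
        }
        pool.shutdown();
        // block until every loader is done before touching the indexes
        if (!pool.awaitTermination(1, TimeUnit.HOURS)) {
            throw new IllegalStateException("loaders did not finish in time");
        }
        buildIndexes.run(); // e.g. issue CREATE INDEX / MODIFY ... statements here
    }
}
```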

贵在坚持 2024-08-20 20:14:16

If you don't need to have concurrent access to the database, a simple solution might be to remove it and use a task processing queue to update the database instead, serialising access to the database via the queue. I realise this will introduce an asynchronous element to your application, and so would not be suitable for most user-initiated applications or online web-apps, but it might be worth considering for a batch/offline type application (I realise this is probably not the answer you're looking for, though).
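A minimal sketch of such a queue: producer threads submit batches, and a single consumer thread is the only one that ever writes to the database, so deadlocks between application threads cannot occur. The class name, capacity, and Consumer callback are illustrative assumptions:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

public class SerializedWriter<T> {
    private final BlockingQueue<T> queue = new ArrayBlockingQueue<>(1024);
    private final Thread writer;

    /** Starts the single writer thread that applies 'write' to each queued batch. */
    public SerializedWriter(Consumer<T> write) {
        writer = new Thread(() -> {
            try {
                while (true) {
                    write.accept(queue.take()); // only this thread touches the DB
                }
            } catch (InterruptedException e) {
                // interrupt is the shutdown signal; fall through and exit
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    /** Called by producer threads; blocks when the queue is full (back-pressure). */
    public void submit(T batch) throws InterruptedException {
        queue.put(batch);
    }

    public void shutdown() {
        writer.interrupt();
    }
}
```

The bounded queue gives natural back-pressure: if the database falls behind, producers block instead of piling up unbounded work in memory.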

三人与歌 2024-08-20 20:14:16


With a database like Ingres you will always get some deadlocks, so you have to assume that any insert, update or delete will fail and have a retry strategy in place (as in your example).
You should design your database so that contention is minimised and deadlocks only happen rarely. If you are continually getting transactions failing even after several retries, then this is a sign that you'll have to do some major database redesign (or move to a system like Oracle where it is usually possible to design applications to avoid deadlocks by suitable use of row-level locking).

怀中猫帐中妖 2024-08-20 20:14:16


How about this?

short attempts = 0;
boolean success = false;
long delayMs = 0;

Random random = new Random();
do {
    try {
        // serialises the inserts, but only across threads within this JVM
        synchronized (ClassName.class) {
            // insert loads of records in table 'x'
        }
        success = true;
    } catch (ConcurrencyFailureException e) {
        attempts++;
        success = false;
        delayMs = 1000 * attempts + random.nextInt(1000 * attempts);

        try {
            Thread.sleep(delayMs);
        } catch (InterruptedException ie) {
        }
    }
} while (!success);