ORA-00060: deadlock detected while waiting for resource
I have a series of scripts running in parallel as nohup jobs on an AIX server hosting Oracle 10g. These scripts were written by somebody else and are meant to be executed concurrently. All of the scripts perform updates on a table. I am getting the error:

ORA-00060: deadlock detected while waiting for resource

When I googled this, I found:
http://www.dba-oracle.com/t_deadly_perpetual_embrace_locks.htm

Even though the scripts perform updates on the same table concurrently, they update different records of the table, determined by the WHERE clause, with no overlap of records between them.

So could this have caused the error?
Will this error happen regardless of where in the table the updates are performed?
Should I avoid concurrent updates on a table at all times?

Strangely, I also found in the nohup.out log, after the error quoted above:

PL/SQL successfully completed

Does this mean that Oracle has recovered from the deadlock and completed the updates successfully, or should I rerun those scripts serially?
Comments (4)
I was recently struggling with a similar problem. It turned out that the database was missing indexes on foreign keys. That caused Oracle to lock many more records than required, which quickly led to a deadlock during high concurrency.

Here is an excellent article with plenty of detail and suggestions on how to fix a deadlock:
http://www.oratechinfo.co.uk/deadlocks.html#unindex_fk
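As a starting point, a simplified query along these lines lists foreign-key constraints whose columns are not covered by a matching index (it uses the standard Oracle data dictionary views for the current schema; the linked article has more complete variants that handle composite indexes properly):

```sql
-- List FK constraints with no index on the constrained columns
-- (simplified: checks each column at its position independently)
SELECT c.table_name, c.constraint_name, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
WHERE  c.constraint_type = 'R'        -- 'R' = referential (foreign key)
AND NOT EXISTS (
        SELECT 1
        FROM   user_ind_columns ic
        WHERE  ic.table_name      = cc.table_name
        AND    ic.column_name     = cc.column_name
        AND    ic.column_position = cc.position);
```

Any table/column pair this returns is a candidate for a new index on the child table.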
You can get deadlocks on more than just row locks, e.g. see this. The scripts may be competing for other resources, such as index blocks.

I've gotten around this in the past by engineering the parallelism in such a way that different instances work on portions of the workload that are less likely to affect blocks close to each other; for example, for an update of a large table, instead of setting up the parallel slaves using something like MOD(n,10), I'd use TRUNC(n/10), which meant that each slave worked on a contiguous set of data. There are, of course, much better ways of splitting up a job for parallelism, e.g. DBMS_PARALLEL_EXECUTE.
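To make the contrast concrete, here is a sketch of the two WHERE-clause styles; the table, column, and bind names (big_table, id, :slave_no, :chunk_size) are hypothetical, not from the answer:

```sql
-- Interleaved split: slave :slave_no updates rows scattered across
-- the whole table, so slaves frequently touch the same blocks
UPDATE big_table
SET    processed = 'Y'
WHERE  MOD(id, 10) = :slave_no;

-- Contiguous split: slave :slave_no updates one consecutive range
-- of ids, so slaves rarely share blocks
UPDATE big_table
SET    processed = 'Y'
WHERE  TRUNC(id / :chunk_size) = :slave_no;
```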
Not sure why you're getting "PL/SQL successfully completed"; perhaps your scripts are handling the exception?
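For reference, a handler like this sketch would trap the deadlock and let the run end with a success message even though one statement failed (names are hypothetical; ORA-00060 has no predefined PL/SQL exception, hence the pragma):

```sql
DECLARE
  deadlock_detected EXCEPTION;
  PRAGMA EXCEPTION_INIT(deadlock_detected, -60);  -- ORA-00060
BEGIN
  UPDATE big_table SET processed = 'Y' WHERE slave_no = 3;  -- hypothetical update
  COMMIT;
EXCEPTION
  WHEN deadlock_detected THEN
    ROLLBACK;  -- Oracle rolled back only the failing statement;
               -- the handler decides whether to retry or give up
END;
/
```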
I ran into this issue as well. I don't know the technical details of what was actually happening. However, in my situation, the root cause was that cascading deletes were set up in the Oracle database, and my JPA/Hibernate code was also trying to issue cascading delete calls. So my advice is to make sure you know exactly what is happening.
I was testing a function that had multiple UPDATE statements within IF-ELSE blocks. I was testing all possible paths, so each time before running the function again I reset the tables to their previous values with 'manual' UPDATE statements. I noticed that the issue would happen just after those UPDATE statements; I added a COMMIT; after the UPDATE statements I used to reset the tables, and that solved the problem. So, caution: the problem was not the function itself...
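The fix this answer describes looks like the following sketch, where orders, status, and process_order are hypothetical names standing in for the answer's tables and function:

```sql
-- Reset the table to its pre-test state
UPDATE orders SET status = 'NEW' WHERE order_id = 42;
COMMIT;   -- without this, the reset rows stay locked by the session,
          -- which is what led to the ORA-00060 the answer describes

-- Re-run the routine under test
BEGIN
  process_order(42);
END;
/
```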