ORA-30926: unable to get a stable set of rows in the source tables
I am getting
ORA-30926: unable to get a stable set of rows in the source tables
in the following query:
MERGE INTO table_1 a
USING
(SELECT a.ROWID row_id, 'Y'
FROM table_1 a ,table_2 b ,table_3 c
WHERE a.mbr = c.mbr
AND b.head = c.head
AND b.type_of_action <> '6') src
ON ( a.ROWID = src.row_id )
WHEN MATCHED THEN UPDATE SET in_correct = 'Y';
I've checked that table_1 has data, and I've also run the inner query (src) by itself; it returns data too.
Why does this error occur and how can it be resolved?
8 Answers
This is usually caused by duplicates in the query specified in the USING clause. It probably means that table_1 is a parent table and the same ROWID is returned several times.
You could quickly solve the problem by using DISTINCT in your query (in fact, if 'Y' is a constant value you don't even need to put it in the query).
Assuming your query is otherwise correct (I don't know your tables), you could do something like this:
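A sketch of that rewrite against the query from the question, with DISTINCT added and the constant 'Y' moved into the SET clause:

MERGE INTO table_1 a
USING
   (SELECT DISTINCT a.ROWID row_id
      FROM table_1 a, table_2 b, table_3 c
     WHERE a.mbr = c.mbr
       AND b.head = c.head
       AND b.type_of_action <> '6') src
ON (a.ROWID = src.row_id)
WHEN MATCHED THEN UPDATE SET in_correct = 'Y';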
You're probably trying to update the same row of the target table multiple times. I just encountered the very same problem in a merge statement I developed. Make sure your update does not touch the same record more than once in the execution of the merge.
A further clarification to the use of DISTINCT to resolve error ORA-30926 in the general case:
You need to ensure that the set of data specified by the USING() clause has no duplicate values of the join columns, i.e. the columns in the ON() clause.
In OP's example where the USING clause only selects a key, it was sufficient to add DISTINCT to the USING clause. However, in the general case the USING clause may select a combination of key columns to match on and attribute columns to be used in the UPDATE ... SET clause. Therefore in the general case, adding DISTINCT to the USING clause will still allow different update rows for the same keys, in which case you will still get the ORA-30926 error.
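A hypothetical illustration of that point, with invented names (accounts as the target, staging as the source): the two source rows below are distinct as whole rows, yet they share the same key, so the MERGE still fails:

-- staging holds (1, 'OPEN') and (1, 'CLOSED'): distinct rows, duplicate key
MERGE INTO accounts t
USING (SELECT DISTINCT acct_id, status   -- DISTINCT over key + attribute columns
         FROM staging) src
ON (t.acct_id = src.acct_id)             -- both source rows match the same target row
WHEN MATCHED THEN UPDATE SET t.status = src.status;
-- ORA-30926: two different status values compete for the same acct_id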
This is an elaboration of DCookie's answer and point 3.1 in Tagar's answer, which from my experience may not be immediately obvious.
1) Identify the failing statement, e.g. by setting a trace event for the error, and watch for .trc files in UDUMP when it occurs.
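A plausible form of that trace setting, assuming the usual errorstack event for this error (the exact level and scope are assumptions):

ALTER SESSION SET EVENTS '30926 trace name errorstack level 3';
-- or, for the whole instance:
ALTER SYSTEM SET EVENTS '30926 trace name errorstack level 3';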
2) Having found the SQL statement, check if it is correct (perhaps using explain plan or tkprof to check the query execution plan) and analyze or compute statistics on the tables concerned if this has not recently been done. Rebuilding (or dropping/recreating) indexes may help too.
3.1) Is the SQL statement a MERGE?
Evaluate the data returned by the USING clause to ensure that there are no duplicate values in the join columns. Modify the merge statement to include a deterministic WHERE clause (see the ROW_NUMBER() sketch after this list).
3.2) Is this an UPDATE statement via a view?
If so, try populating the view result into a table and try updating the table directly.
3.3) Is there a trigger on the table? Try disabling it to see if it still fails.
3.4) Does the statement contain a non-mergeable view in an 'IN-Subquery'? This can result in duplicate rows being returned if the query has a "FOR UPDATE" clause. See Bug 2681037
3.5) Does the table have unused columns? Dropping these may prevent the error.
4) If modifying the SQL does not cure the error, the issue may be with the table, especially if there are chained rows.
4.1) Run the ‘ANALYZE TABLE VALIDATE STRUCTURE CASCADE’ statement on all tables used in the SQL to see if there are any corruptions in the table or its indexes.
4.2) Check for, and eliminate, any CHAINED or migrated ROWS on the table. There are ways to minimize this, such as the correct setting of PCTFREE.
Use Note 122020.1 - Row Chaining and Migration
4.3) If the table is additionally Index Organized, see:
Note 102932.1 - Monitoring Chained Rows on IOTs
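For step 3.1, a common way to get a deterministic source is to keep exactly one row per join key, e.g. with ROW_NUMBER(). A sketch with invented names (accounts, staging, and the updated_at tie-breaker column are all assumptions):

MERGE INTO accounts t
USING (SELECT acct_id, status
         FROM (SELECT acct_id, status,
                      ROW_NUMBER() OVER (PARTITION BY acct_id
                                         ORDER BY updated_at DESC) rn
                 FROM staging)
        WHERE rn = 1) src                 -- exactly one source row per key
ON (t.acct_id = src.acct_id)
WHEN MATCHED THEN UPDATE SET t.status = src.status;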
Had the error today on a 12c and none of the existing answers fit (no duplicates, no non-deterministic expressions in the WHERE clause). My case was related to that other possible cause of the error, according to Oracle's message text (emphasis below):
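The message text, as reported by oerr ora 30926 (the emphasis presumably falls on the "large dml activity" cause):

ORA-30926: unable to get a stable set of rows in the source tables
Cause: A stable set of rows could not be got because of large dml activity or a non-deterministic where clause.
Action: Remove any non-deterministic where clauses and reissue the dml.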
The merge was part of a larger batch and was executed on a live database with many concurrent users. There was no need to change the statement. I just committed the transaction before the merge, then ran the merge separately, and committed again. So the solution was found in the suggested action of the message.
This error occurred for me because of duplicate records (about 16K of them).
I tried it with unique source rows and it worked.
But when I tried the merge again without the uniqueness fix, the same problem occurred.
The second time it was due to the commit:
if a commit is not issued after the merge, the same error is shown.
Without deduplication, the query will work if a commit is given after each merge operation.
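A sketch of that sequencing with invented names (my_target and my_source are assumptions), committing after each merge as described:

MERGE INTO my_target t
USING my_source s
ON (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.val = s.val;

COMMIT;  -- commit after each merge, before the next one runs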
As someone explained earlier, your MERGE statement probably tries to update the same row more than once, and that does not work (it could cause ambiguity).
Here is one simple example: a MERGE that tries to mark some products as found when they match the given search patterns:
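A sketch of such an example, with invented tables and data chosen to match the description below:

CREATE TABLE products (id NUMBER, name VARCHAR2(30), found CHAR(1));
CREATE TABLE patterns (pattern VARCHAR2(30));

INSERT INTO products VALUES (1, 'Basic paper', NULL);
INSERT INTO products VALUES (2, 'Basic thing', NULL);
INSERT INTO products VALUES (3, 'Super glue',  NULL);
INSERT INTO products VALUES (4, 'Assorted',    NULL);

MERGE INTO products p
USING patterns t
ON (p.name LIKE t.pattern)
WHEN MATCHED THEN UPDATE SET p.found = 'Y';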
If the patterns table contains the Basic% and Super% patterns, then the MERGE works and the first three products will be updated (found). But if the patterns table contains the Basic% and %thing search patterns, then the MERGE does NOT work, because it will try to update the second product twice, and this causes the problem. MERGE does not work if some records should be updated more than once. Perhaps you ask: why not just update it twice!? Here the first and the second update would write the same value, but only by accident. Now look at this scenario:
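Continuing the invented example, each pattern now carries a code to write into the matched rows:

DROP TABLE patterns;
CREATE TABLE patterns (pattern VARCHAR2(30), code CHAR(1));
INSERT INTO patterns VALUES ('Basic%', 'B');
INSERT INTO patterns VALUES ('%thing', 'T');

MERGE INTO products p
USING patterns t
ON (p.name LIKE t.pattern)
WHEN MATCHED THEN UPDATE SET p.found = t.code;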
Now the first product's name matches the Basic% pattern and will be updated with code B, but the second product matches both patterns and cannot be updated with both codes B and T at the same time (ambiguity)! That's why the DB engine complains. Don't blame it! It knows what it is doing! ;-)
I was not able to resolve this after several hours. Eventually I just did a select with the two tables joined, created an extract, and wrote individual SQL update statements for the 500 rows in the table. Ugly, but it beats spending hours trying to get one query to work.
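A sketch of that workaround using the tables from the question: generate one UPDATE per joined row, then run the generated script (the extract/spooling mechanics are omitted):

SELECT 'UPDATE table_1 SET in_correct = ''Y'' WHERE ROWID = '''
       || a.ROWID || ''';' AS stmt
  FROM table_1 a, table_2 b, table_3 c
 WHERE a.mbr = c.mbr
   AND b.head = c.head
   AND b.type_of_action <> '6';

Duplicate ROWIDs simply generate the same UPDATE twice, which, unlike a MERGE, is harmless when the statements run one by one.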