Identical Oracle database setups: only one of them throws an exception
Edit: see the end of this question for what caused the error and how I found out.
I get a very strange exception from Hibernate when I run an app that does batch inserts of data into an Oracle database. The error comes from the Oracle database, ORA-00001, which
" means that an attempt has been made to
insert a record with a duplicate
(unique) key. This error will also be
generated if an existing record is
updated to generate a duplicate
(unique) key."
The error is weird because I have created the same table (exactly the same definition) on another machine, where I do NOT get the error when I use it through my app. And all the data gets inserted into the database, so nothing is actually rejected.
There has to be something different between the two setups, but the only thing I can see that is different is the banner output that I get when issuing
select * from v$version where banner like 'Oracle%';
The database that gives me trouble: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
The one that works: Oracle Database 10g Release 10.2.0.3.0 - 64bit Production
Table definitions, input, and the app I wrote are the same for both. The table involved is basically a four column table with a composite id (serviceid, date, value1, value2) - nothing fancy.
Any ideas on what can be wrong? I have started out clean several times, dropping both tables to start on equal grounds, but I still get the error from the database.
Some more of the output:
Caused by: java.sql.BatchUpdateException: ORA-00001: unique constraint (STATISTICS.PRIMARY_KEY_CONSTRAINT) violated
at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:367)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:8728)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
How I found out what caused the problem
Thanks to APC and ik_zelf I was able to pinpoint the root cause of this error. It turns out the Quartz scheduler was wrongly configured for the production database (where the error turned up).
For the job running against the non-failing oracle server I had <cronTriggerExpression>0/5 * * * * ?</cronTriggerExpression>
which ran the batch job every five seconds. I figured that once a minute was sufficient for the other Oracle server, and set the Quartz scheduler up with * */1 * * * ?. This turned out to be wrong: in a Quartz cron expression the first field is seconds, so instead of running every minute, this ran every second!
Each job took approximately 1.5-2 seconds, so two or more jobs were running concurrently, causing simultaneous inserts on the server. So instead of inserting 529 elements, I was getting anywhere from 1000 to 2000 inserts. Changing the cron trigger expression to match the other one, running every five seconds, fixed the problem.
To find out what was wrong I had to set a property to true in hibernate.cfg.xml and disable the primary key constraint on the table.
-- To catch exceptions:
-- to find the offending rows, run the following query:
-- SELECT * FROM my_table, exceptions WHERE my_table.rowid = exceptions.row_id;
create table exceptions(row_id rowid,
owner varchar2(30),
table_name varchar2(30),
constraint varchar2(30));
-- This table was set up
CREATE TABLE MY_TABLE
(
LOGDATE DATE NOT NULL,
SERVICEID VARCHAR2(255 CHAR) NOT NULL,
PROP_A NUMBER(10,0),
PROP_B NUMBER(10,0),
CONSTRAINT PK_CONSTRAINT PRIMARY KEY (LOGDATE, SERVICEID)
);
-- Removed the constraint to see what was inserted twice or more
alter table my_table
disable constraint PK_CONSTRAINT;
-- Enable this later on to find rows that offend the constraints
alter table my_table
enable constraint PK_CONSTRAINT
exceptions into exceptions;
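With the constraint disabled, the duplicated keys can also be found directly with a GROUP BY on the key columns of the MY_TABLE definition above; a sketch:

```sql
-- Rows whose composite key appears more than once
SELECT logdate, serviceid, COUNT(*) AS copies
FROM my_table
GROUP BY logdate, serviceid
HAVING COUNT(*) > 1
ORDER BY copies DESC;
```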
You have a unique compound constraint. ORA-00001 means that you have two or more rows which have duplicate values in ServiceID, Date, Value1 and/or Value2. You say the input is the same for both databases. So either the input is not actually identical, or some of the key values are generated automatically rather than taken from the input.
The more likely explanation is the second one: one or more of your key columns is populated by an external source or default value (e.g. code table for ServiceId or SYSDATE for the date column). In your failing database this automatic population is failing to provide a unique value. There can be any number of reasons why this might be so, depending on what mechanism(s) you're using. Remember that in a unique compound key NULL entries count. That is, you can have any number of records (NULL,NULL,NULL,NULL) but only one for (42,NULL,NULL,NULL).
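A minimal sketch of the NULL behaviour described above, using a throwaway table (names are made up for the demonstration):

```sql
CREATE TABLE uniq_demo (a NUMBER, b NUMBER, CONSTRAINT uq_demo UNIQUE (a, b));
INSERT INTO uniq_demo VALUES (NULL, NULL);  -- succeeds
INSERT INTO uniq_demo VALUES (NULL, NULL);  -- also succeeds: all-NULL keys are not compared
INSERT INTO uniq_demo VALUES (42, NULL);    -- succeeds
INSERT INTO uniq_demo VALUES (42, NULL);    -- fails with ORA-00001: partly-NULL keys do count
```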
It is hard for us to guess what the actual problem might be, and almost as hard for you (although you do have the advantage of being the code's author, which ought to grant you some insight). What you need is some trace statements. My preferred solution would be to use Bulk DML Exception Handling but then I am a PL/SQL fan. Hibernate allows you to hook in some logging to your programs: I suggest you switch it on. Code is a heck of a lot easier to debug when it has decent instrumentation.
As a last resort, disable the constraint before running the batch insert. Afterwards re-enable it like this:
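The statement itself was lost in formatting here; it is presumably the same ENABLE ... EXCEPTIONS INTO form shown in the question, something like:

```sql
-- Names taken from the question's schema; the exceptions table name is the answer's
ALTER TABLE my_table
  ENABLE CONSTRAINT pk_constraint
  EXCEPTIONS INTO my_exceptions;
```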
This will fail if you have duplicate rows but crucially the MY_EXCEPTIONS table will list all the rows which clash. That at least will give you some clue as to the source of the duplication. If you don't already have an exceptions table you will have to run a script:
$ORACLE_HOME/rdbms/admin/utlexcptn.sql
(you may need a DBA to gain access to this directory).
tl;dr
Insight requires information: instrument your code.
The one that has problems is an EE database and the other looks like an SE database. I expect that the first is on quicker hardware. If that is the case, and your date column is filled using SYSDATE, it could very well be that the time resolution is not enough and you get duplicate date values. If the other columns of your data are also not unique, you get ORA-00001.
It's a long shot but at first sight I would look into this direction.
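The resolution point above can be seen directly: Oracle's DATE type only resolves to whole seconds, so two inserts landing within the same second with the same serviceid collide on a (date, serviceid) key. A quick check:

```sql
-- DATE truncates to whole seconds; TIMESTAMP keeps fractional seconds
SELECT TO_CHAR(SYSDATE,      'YYYY-MM-DD HH24:MI:SS')     AS date_resolution,
       TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS.FF6') AS timestamp_resolution
FROM dual;
```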
Can you use an exception table to identify the data? See Reporting Constraint Exceptions
My guess would be the service id. Whatever service_id hibernate is using for the 'fresh' insert has already been used.
Possibly the table is empty in one database but populated in the other.
I'm betting though that the service_id is sequence generated and the sequence number is out of sync with the data content. So you have the same 1000 rows in the table, but querying the sequence's next value in one database gives a lower number than in the other. I see this a lot where the sequence has been created (e.g. out of source control) and data has been imported into the table from another database.
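One way to check for that skew (table, column, and sequence names are hypothetical):

```sql
-- Compare the highest id already in the table with where the sequence will continue
SELECT MAX(service_id) AS max_in_table FROM my_table;
SELECT my_table_seq.NEXTVAL AS next_from_seq FROM dual;
-- If next_from_seq <= max_in_table, the next sequence-driven insert raises ORA-00001
```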