java.sql.SQLException: [BEA][Oracle JDBC Driver][Oracle]ORA-01438: value larger than specified precision allows for this column
I'm getting this error message in production:
java.sql.SQLException: [BEA][Oracle JDBC Driver][Oracle]ORA-01438: value larger than specified precision allows for this column
Unfortunately this comes from a purchased application, and the support process is not exactly fast.
This happens when data is being copied from one table to another. Both tables are supposed to have the same column types and lengths. So far I have reviewed some of them by doing the following:
select distinct length(column_name) from source_table
Then comparing the values with the length of column_name in the target table, but it's taking me a lot of time.
Is there a way to perform this check that doesn't result in this error?
I want to identify which column contains the data whose length goes beyond the limit of the target column.
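One way to avoid checking column by column is to compare the two table definitions in the data dictionary. A minimal sketch, assuming both tables live in the current schema and using SOURCE_TABLE / TARGET_TABLE as placeholder names (substitute your real ones):

```sql
-- List columns whose definitions differ between the two tables.
-- Since ORA-01438 is raised for NUMBER values, DATA_PRECISION and
-- DATA_SCALE are the fields most likely to reveal the culprit.
select s.column_name,
       s.data_type,      s.data_length,     s.data_precision,     s.data_scale,
       t.data_type  tgt_type,  t.data_length tgt_length,
       t.data_precision tgt_precision, t.data_scale tgt_scale
  from user_tab_columns s
  join user_tab_columns t
    on t.column_name = s.column_name
 where s.table_name = 'SOURCE_TABLE'
   and t.table_name = 'TARGET_TABLE'
   and (   nvl(s.data_precision, -1) <> nvl(t.data_precision, -1)
        or nvl(s.data_scale, -1)     <> nvl(t.data_scale, -1)
        or s.data_length             <> t.data_length);
```

Any row returned points at a column where the target is declared tighter (or looser) than the source, without touching the table data at all.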
I'm working with:
- Oracle9i Enterprise Edition Release 9.2.0.7.0 - Production
- With the Partitioning, OLAP and Oracle Data Mining options
- JServer Release 9.2.0.7.0 - Production
A brute force method for debugging this:
You could create a script that runs a FOR loop and inserts row by row from Table A to Table B, outputting the row ID or some pertinent data to the console. Once you identify the bad row, you could attempt to update column by column of an existing row with the column data from the bad row until you find your culprit.
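The loop described above could be sketched in PL/SQL along these lines. SOURCE_TABLE, TARGET_TABLE and the ID key column are placeholder names, and you'd run it with SET SERVEROUTPUT ON so the messages reach the console:

```sql
-- Brute-force probe: copy row by row, reporting every row that
-- raises ORA-01438, then roll everything back.
begin
  for r in (select id from source_table) loop
    begin
      insert into target_table
        select * from source_table where id = r.id;
    exception
      when others then
        if sqlcode = -1438 then
          dbms_output.put_line('ORA-01438 on row id=' || r.id);
        else
          raise;  -- any other error is unexpected here
        end if;
    end;
  end loop;
  rollback;  -- this was only a probe; discard the copied rows
end;
/
```

With the offending row IDs in hand, you can then try the column-by-column update the answer suggests to pin down the exact column.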