Out of memory (allocLargeObjectOrArray) in ResultSet
I'm using JDBC to get a large amount of data. The call completes successfully, but when resultSet.next()
is called, I get the following error:
java.lang.OutOfMemoryError: allocLargeObjectOrArray - Object size: 15414016, Num elements: 7706998
I've attempted to increase the JVM memory size, but this does not fix the problem. I'm not sure this problem can even be addressed, because I'm not using JDBC to access a database; rather, the system is accessing a BEA AquaLogic service through JDBC.
Has anyone run into this error?
5 Answers
Beware that, until the first resultSet.next() call, the results may not have been read from the database yet, or may still sit in some caching structure somewhere.
You should try to limit your SELECT to return a sane number of results and, if you need all the data, repeat the call until there are no more results left.
Increasing the JVM memory size won't help unless you can be sure that there is an absolute limit on the amount of data which will be returned by your JDBC call.
Furthermore, accessing any service through JDBC essentially boils down to using JDBC :)
Another (unlikely) possibility could be that there is a bug in the JDBC driver you're using. Try a different implementation if it is possible and check if the problem persists.
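The batching approach above can be sketched as follows. This is only a sketch under assumptions: the JDBC URL, table name, and column names are placeholders, and the exact LIMIT/OFFSET pagination syntax depends on what the AquaLogic JDBC layer supports.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BatchedFetch {
    // Hypothetical connection URL and batch size; replace with your own.
    static final String URL = "jdbc:yourdriver://host/db";
    static final int BATCH_SIZE = 500;

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(URL)) {
            int offset = 0;
            int rows;
            do {
                // Fetch one page at a time instead of the whole result set.
                // LIMIT/OFFSET syntax varies by backend.
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, payload FROM big_table LIMIT ? OFFSET ?")) {
                    ps.setInt(1, BATCH_SIZE);
                    ps.setInt(2, offset);
                    rows = 0;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            process(rs.getString("payload"));
                            rows++;
                        }
                    }
                    offset += rows;
                }
            } while (hasMorePages(rows)); // a short page means we're done
        }
    }

    // A full page suggests more rows may remain; a short page means the end.
    static boolean hasMorePages(int rowsRead) {
        return rowsRead == BATCH_SIZE;
    }

    static void process(String payload) {
        // Handle one row at a time rather than holding all rows in memory.
    }
}
```

The key point is that each iteration holds at most BATCH_SIZE rows in memory, so the total result size no longer has to fit in the heap at once.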
首先,弄清楚您是否真的需要一次在内存中获取那么多数据。 RDBMS 擅长聚合/排序等大型数据集,如果可能的话,您应该尝试利用这一点。
如果没有(并且出于某种原因,您确实确实需要工作内存中那么多数据)...并且增加 JVM 的内存参数并不能提高足够的标准...查看内存中的分布式缓存解决方案,例如Coherence (COTS) 或 TerraCotta(开源)。
First-- figure out if you really need to get that much data in memory at once. RDBMS's are good at aggregating/sorting/etc large data sets, and you should try to take advantage of that if possible.
If not (and you really, really do need that much data in working memory for some reason)... and bumping up the JVM's memory args doesn't raise the bar enough... look into an in-memory distributed caching solution like Coherence (COTS) or TerraCotta (open source).
您可以尝试在语句中设置 setFetchSize(int rows) 方法。
但setFetchRows只是一个提示,这意味着它可能不会被实现。
You can try setting the setFetchSize(int rows) method on your statement.
But setFetchSize is only a hint, which means the driver may not honor it.
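A minimal sketch of applying the fetch-size hint; the connection URL and query are placeholders, and whether rows are actually streamed in chunks of this size depends entirely on the driver:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeHint {
    // Hypothetical fetch size; tune for your row size and heap.
    static final int FETCH_SIZE = 100;

    public static void main(String[] args) throws Exception {
        // Placeholder URL; use your actual data source.
        try (Connection conn = DriverManager.getConnection("jdbc:yourdriver://host/db");
             Statement stmt = conn.createStatement()) {
            // Ask the driver to stream rows in chunks of FETCH_SIZE instead
            // of materializing the whole result set at once. This is only
            // a hint -- drivers are free to ignore it.
            stmt.setFetchSize(FETCH_SIZE);
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                while (rs.next()) {
                    // process one row at a time
                }
            }
        }
    }
}
```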
Try increasing the memory size to 1.2 GB, e.g. -Xmx1200m, or something just under the physical memory of your machine. You may find it is reading more data at once than you think.
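To confirm the heap limit the JVM actually received (rather than assuming the flag took effect), you can check what the runtime reports:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Reports the maximum heap the JVM will attempt to use:
        // the value set by -Xmx (or -mx), or the default if unset.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
    }
}
```

If the printed value is far below what you passed on the command line, the flag is not reaching the JVM that actually runs the fetch.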
How many rows are you returning from the database? Like kosi2801, I would suggest fetching only a subset of the data: start with a reasonable number, then increase it to find the threshold.