How do I fix the "Requested array size exceeds VM limit" error in Java?

Posted 2024-10-29 00:29:31


Is there a log option that would let Tomcat log the offending query instead of just throwing this?

SEVERE: java.lang.OutOfMemoryError: Requested array size exceeds VM limit

(I tried setting the log level to FULL, but it only captures the above.)

That is not enough information to debug any further.
Alternatively, can this be fixed by allocating more memory, e.g. by tweaking the following?

-Xms1024M -Xmx4096M -XX:MaxPermSize=256M

Update

-Xms6G -Xmx6G -XX:MaxPermSize=1G -XX:PermSize=512M

(The above seems to work better; I'll keep monitoring.)
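For reference, here is a minimal Java snippet (a hypothetical demo, not from the original post) that reproduces this exact error on HotSpot. Note that no amount of -Xmx makes it go away, because the limit is on array length, not heap size:

public class ArrayLimitDemo {
    public static void main(String[] args) {
        // On HotSpot, array lengths of Integer.MAX_VALUE and MAX_VALUE - 1
        // exceed the VM's internal limit, so this line throws
        // "java.lang.OutOfMemoryError: Requested array size exceeds VM limit"
        // regardless of how large the heap is.
        int[] huge = new int[Integer.MAX_VALUE];
        System.out.println(huge.length); // never reached
    }
}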


Comments (6)

月亮是我掰弯的 2024-11-05 00:29:31


I suspect you might be using sorts on a large index. That's one thing I definitely know can require a large array size with Lucene. Either way, you might want to try using a 64-bit JVM with these options:

-Xmx6G -XX:MaxPermSize=128M -XX:+UseCompressedOops

The last option will reduce 64-bit memory pointers to 32-bit (as long as the heap is under 32GB). This typically reduces the memory overhead by about 40%, so it can help stretch your memory significantly.

Update: Most likely you don't need such a large permanent generation size, certainly not 1G. You're probably fine with 128M, and you'll get a specific error if you go over with Java 6. Since you're limited to 8G in your server, you might be able to get away with 7G for the heap with a smaller perm gen. Be careful about not going into swap; that can seriously slow things down for Java.

I noticed you didn't mention -XX:+UseCompressedOops in your update. That can make a huge difference if you haven't tried it yet. You might be able to squeeze a little more space out by reducing the size of eden to give the tenured generation more room. Beyond that I think you'll simply need more memory or fewer sort fields.
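Putting this answer's suggestions together, the JVM options might look something like the line below. This is a sketch only: the sizes are illustrative for an 8G server, and the NewSize values for shrinking eden are assumptions to tune against your own GC behavior:

-Xms6G -Xmx6G -XX:MaxPermSize=128M -XX:+UseCompressedOops -XX:NewSize=512M -XX:MaxNewSize=512M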

许你一世情深 2024-11-05 00:29:31


You get this exception because you are trying to create an array that is larger than the maximum contiguous block of memory in your Java VM's heap.

https://plumbr.eu/outofmemoryerror/requested-array-size-exceeds-vm-limit

What is the solution?

The java.lang.OutOfMemoryError: Requested array size exceeds VM limit can appear as a result of either of the following situations:

Your arrays grow too big and end up with a size between the platform limit and Integer.MAX_VALUE

You deliberately try to allocate arrays larger than 2^31-1 elements to experiment with the limits.

In the first case, check your code base to see whether you really need arrays that large. Maybe you could reduce the size of the arrays and be done with it. Or divide the array into smaller chunks and load the data you need to work with in batches that fit within your platform limit.

In the second case – remember that Java arrays are indexed by int. So you cannot go beyond 2^31-1 elements in your arrays when using the standard data structures within the platform. In fact, in this case you are already blocked by the compiler announcing “error: integer number too large” during compilation.
But if you really work with truly large data sets, you need to rethink your options. You can load the data you need to work with in smaller batches and still use standard Java tools, or you might go beyond the standard utilities. One way to achieve this is to look into the sun.misc.Unsafe class. This allows you to allocate memory directly like you would in C.
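A minimal sketch of that batching idea, where the loader and consumer are hypothetical stand-ins for your own I/O and processing:

public class BatchProcessor {
    // Instead of one array with close to Integer.MAX_VALUE elements,
    // reuse a fixed-size buffer and stream the data through it in chunks.
    static void processInBatches(long totalCount, int batchSize) {
        double[] buffer = new double[batchSize]; // stays far below the VM limit
        for (long offset = 0; offset < totalCount; offset += batchSize) {
            int n = (int) Math.min(batchSize, totalCount - offset);
            load(buffer, offset, n);  // hypothetical: fill the buffer from your source
            process(buffer, n);       // hypothetical: consume this chunk
        }
    }

    static void load(double[] buffer, long offset, int n) { /* read n items starting at offset */ }
    static void process(double[] buffer, int n) { /* handle n items */ }

    public static void main(String[] args) {
        processInBatches(10_000_000_000L, 1 << 20); // 10 billion items in 1M-element chunks
    }
}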

沉睡月亮 2024-11-05 00:29:31


If you want to find out what is causing the OutOfMemoryError, you can add

-XX:+HeapDumpOnOutOfMemoryError 

to your Java opts.

The next time you run out of memory, you will get a heap dump file that can be analyzed with "jhat", located in the JDK's bin directory. Jhat will show you which objects exist in your heap and how much memory they consume.
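For example (the dump path and PID are illustrative; the JVM names the file java_pid&lt;pid&gt;.hprof, and jhat serves its report on http://localhost:7000 by default):

JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat"
jhat -J-Xmx4G /var/log/tomcat/java_pid12345.hprof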

妖妓 2024-11-05 00:29:31


Upgrading Solr to a newer version seems to have sorted out this problem; the newer version likely has better heap memory management.

离鸿 2024-11-05 00:29:31


I use this in catalina.sh:

JAVA_OPTS="-Dsolr.solr.home=/etc/tomcat6/solr -Djava.awt.headless=true -server -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+DisableExplicitGC"

I never had memory problems on Tomcat/Solr with 30M small documents. I did have problems with the SolrJ indexing client, though. I had to use -Xms8G -Xmx8G for the Java client, and add documents in chunks of 250K.
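A sketch of that chunked-add pattern in SolrJ. This uses the newer HttpSolrClient API rather than whatever client the original setup had, and the URL, core name, document fields, and counts are all placeholders:

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ChunkedIndexer {
    public static void main(String[] args) throws Exception {
        // Placeholder Solr URL and core name.
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            List<SolrInputDocument> batch = new ArrayList<>();
            for (long i = 0; i < 30_000_000L; i++) {   // ~30M small documents
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Long.toString(i));
                batch.add(doc);
                if (batch.size() == 250_000) {         // flush every 250K docs
                    client.add(batch);
                    client.commit();
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {                    // flush the remainder
                client.add(batch);
                client.commit();
            }
        }
    }
}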

聚集的泪 2024-11-05 00:29:31


Out of memory! Check whether an array index is going out of bounds, or whether a loop is swallowing up system resources.


  1. java.lang.OutOfMemoryError: Java heap space
    The JVM throws this when 98% of the time is being spent in garbage collection while less than 2% of the heap is recovered.
    The JVM heap settings determine how much memory the JVM may use while a Java program runs. The JVM sets the heap size automatically at startup: the initial size (-Xms) is 1/64 of physical memory and the maximum size (-Xmx) is 1/4 of physical memory. These can be tuned with options such as -Xmn, -Xms and -Xmx.

  2. Requested array size exceeds VM limit: the requested array is larger than the heap, for example requesting a 512M array against a 256M heap.
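To check which heap defaults your own JVM actually picked, you can print the final flag values (a HotSpot feature; the grep filter is just a convenience):

java -XX:+PrintFlagsFinal -version | grep -i heapsize

This prints InitialHeapSize and MaxHeapSize, among others, as the JVM resolved them for your machine.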
