java.lang.OutOfMemoryError: GC overhead limit exceeded


I am getting this error in a program that creates several (hundreds of thousands) HashMap objects with a few (15-20) text entries each. These Strings have all to be collected (without breaking up into smaller amounts) before being submitted to a database.

According to Sun, the error happens "if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown."

Apparently, one could use the command line to pass arguments to the JVM for

  • Increasing the heap size, via "-Xmx1024m" (or more), or
  • Disabling the error check altogether, via "-XX:-UseGCOverheadLimit".

The first approach works fine, the second ends up in another java.lang.OutOfMemoryError, this time about the heap.

So, question: is there any programmatic alternative to this, for the particular use case (i.e., several small HashMap objects)? If I use the HashMap clear() method, for instance, the problem goes away, but so do the data stored in the HashMap! :-)

The issue is also discussed in a related topic on StackOverflow.

Comments (16)

浮光之海 2024-11-11 08:36:42

You're essentially running out of memory to run the process smoothly. Options that come to mind:

  1. Specify more memory like you mentioned, try something in between like -Xmx512m first
  2. Work with smaller batches of HashMap objects to process at once if possible
  3. If you have a lot of duplicate strings, use String.intern() on them before putting them into the HashMap
  4. Use the HashMap(int initialCapacity, float loadFactor) constructor to tune for your case (points 3 and 4 are sketched below)
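
A minimal sketch of points 3 and 4, assuming String keys and values; the initial capacity of 32, the class name, and the raw-pair input format are only illustrative:

import java.util.HashMap;
import java.util.Map;

public class InternedMapExample {

    // Builds one of the many small maps, interning the strings first so that
    // duplicate values across maps share a single copy in the string pool.
    static Map<String, String> buildRecord(String[][] rawPairs) {
        // Pre-sized so that ~20 entries never trigger a resize (point 4):
        // capacity 32 * load factor 0.75 = threshold of 24 entries.
        Map<String, String> map = new HashMap<>(32, 0.75f);
        for (String[] pair : rawPairs) {
            // intern() is only worthwhile if the same strings recur often (point 3).
            map.put(pair[0].intern(), pair[1].intern());
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, String> record = buildRecord(new String[][] {
                { "status", "ACTIVE" }, { "country", "DE" }
        });
        System.out.println(record);
    }
}
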
家住魔仙堡 2024-11-11 08:36:42

The following worked for me. Just add the following snippet:

dexOptions {
        javaMaxHeapSize "4g"
}

To your build.gradle:

android {
    compileSdkVersion 23
    buildToolsVersion '23.0.1'

    defaultConfig {
        applicationId "yourpackage"
        minSdkVersion 14
        targetSdkVersion 23
        versionCode 1
        versionName "1.0"

        multiDexEnabled true
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }

    packagingOptions {

    }

    dexOptions {
        javaMaxHeapSize "4g"
    }
}
酒与心事 2024-11-11 08:36:42

@takrl: The default setting for this option is:

java -XX:-UseConcMarkSweepGC

which means this option is not active by default. So when you say you used the option
"+XX:UseConcMarkSweepGC"
I assume you were using this syntax:

java -XX:+UseConcMarkSweepGC

which means you were explicitly activating this option.
See this document for the correct syntax and default settings of the Java HotSpot VM options.

逆流 2024-11-11 08:36:42

For the record, we had the same problem today. We fixed it by using this option:

-XX:-UseConcMarkSweepGC

Apparently, this modified the strategy used for garbage collection, which made the issue disappear.

半寸时光 2024-11-11 08:36:42

Ummm... you'll either need to:

  1. Completely rethink your algorithm & data-structures, such that it doesn't need all these little HashMaps.

  2. Create a facade which allows you to page those HashMaps in and out of memory as required. A simple LRU cache might be just the ticket (see the sketch after this list).

  3. Up the memory available to the JVM. If necessary, even purchasing more RAM might be the quickest, CHEAPEST solution, if you have the management of the machine that hosts this beast. Having said that: I'm generally not a fan of the "throw more hardware at it" solutions, especially if an alternative algorithmic solution can be thought up within a reasonable timeframe. If you keep throwing more hardware at every one of these problems you soon run into the law of diminishing returns.
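
A minimal sketch of the LRU facade from point 2, built on java.util.LinkedHashMap; the capacity value and the eviction hook are placeholders for whatever spill-to-disk or spill-to-database logic fits your case:

import java.util.LinkedHashMap;
import java.util.Map;

// Keeps only the most recently used HashMaps in memory and evicts the
// least recently used one once 'capacity' is exceeded.
class LruMapCache extends LinkedHashMap<String, Map<String, String>> {
    private final int capacity;

    LruMapCache(int capacity) {
        super(16, 0.75f, true);   // accessOrder = true -> LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Map<String, String>> eldest) {
        // Persist 'eldest' here (to disk or the database) before it is dropped.
        return size() > capacity;
    }
}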

What are you actually trying to do anyway? I suspect there's a better approach to your actual problem.

橘味果▽酱 2024-11-11 08:36:42

Use an alternative HashMap implementation (Trove). The standard Java HashMap has more than 12x memory overhead.
You can read the details here.
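
A sketch under the assumption that Trove 3.x is on the classpath; gnu.trove.map.hash.THashMap implements java.util.Map, so in most code it is a drop-in replacement:

import gnu.trove.map.hash.THashMap;

import java.util.Map;

public class TroveMapExample {
    public static void main(String[] args) {
        // Open-addressing layout, so no per-entry Node objects as in java.util.HashMap.
        Map<String, String> map = new THashMap<>();
        map.put("status", "ACTIVE");
        System.out.println(map.get("status"));
    }
}
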

梦明 2024-11-11 08:36:42

Don't store the whole structure in memory while waiting to get to the end.

Write intermediate results to a temporary table in the database instead of hashmaps - functionally, a database table is the equivalent of a hashmap, i.e. both support keyed access to data, but the table is not memory bound, so use an indexed table here rather than the hashmaps.

If done correctly, your algorithm should not even notice the change - correctly here means to use a class to represent the table, even giving it a put(key, value) and a get(key) method just like a hashmap.

When the intermediate table is complete, generate the required sql statement(s) from it instead of from memory.
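
A rough sketch of such a facade over plain JDBC; the table name intermediate_results, its two columns, and the surrounding connection handling are assumptions, and the INSERT would need to become a MERGE/UPSERT on databases where keys can repeat:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Assumes a table like:
//   CREATE TABLE intermediate_results (k VARCHAR(255) PRIMARY KEY, v TEXT)
class DbBackedMap {
    private final Connection conn;

    DbBackedMap(Connection conn) {
        this.conn = conn;
    }

    void put(String key, String value) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO intermediate_results (k, v) VALUES (?, ?)")) {
            ps.setString(1, key);
            ps.setString(2, value);
            ps.executeUpdate();
        }
    }

    String get(String key) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT v FROM intermediate_results WHERE k = ?")) {
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}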

花期渐远 2024-11-11 08:36:42

The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection. In particular, if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.

嘿咻 2024-11-11 08:36:42

If you're creating hundreds of thousands of hash maps, you're probably using far more than you actually need; unless you're working with large files or graphics, storing simple data shouldn't overflow the Java memory limit.

You should try and rethink your algorithm. In this case, I would offer more help on that subject, but I can't give any information until you provide more about the context of the problem.

南风起 2024-11-11 08:36:42

If you have Java 8 and you can use the G1 garbage collector, then run your application with:

 -XX:+UseG1GC -XX:+UseStringDeduplication

This tells G1 to find duplicate Strings and keep only one copy of the underlying character data in memory; the others become references to that copy.

This is useful when you have a lot of repeated strings. This solution may or may not work, depending on the application.

More info on:
https://blog.codecentric.de/en/2014/08/string-deduplication-new-feature-java-8-update-20-2/
http://java-performance.info/java-string-deduplication/

笨笨の傻瓜 2024-11-11 08:36:42

Fix memory leaks in your application with the help of profiling tools like Eclipse MAT or VisualVM.

With JDK 1.7.x or later versions, use G1GC, which budgets 10% of run time for garbage collection, unlike the 2% in other GC algorithms.

Apart from setting heap memory with -Xms1g -Xmx2g, try:

-XX:+UseG1GC
-XX:G1HeapRegionSize=n
-XX:MaxGCPauseMillis=m
-XX:ParallelGCThreads=n
-XX:ConcGCThreads=n

Have a look at the Oracle article on fine-tuning these parameters.
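
For illustration only, a complete invocation with example values substituted for n and m might look like the following; the numbers and the jar name are placeholders, not recommendations:

java -Xms1g -Xmx2g -XX:+UseG1GC -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200 \
     -XX:ParallelGCThreads=4 -XX:ConcGCThreads=2 -jar myapp.jar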

Some questions related to G1GC on SE:

Java 7 (JDK 7) garbage collection and documentation on G1

Java G1 garbage collection in production

Aggressive garbage collector strategy

断肠人 2024-11-11 08:36:42

For this, use the code below in your app's gradle file, inside the android closure.

dexOptions {
    javaMaxHeapSize "4g"
}

梦情居士 2024-11-11 08:36:42

In case of the error:

"Internal compiler error: java.lang.OutOfMemoryError: GC overhead limit exceeded at java.lang.AbstractStringBuilder"

increase the Java heap space to 2 GB, i.e. -Xmx2g.

乞讨 2024-11-11 08:36:42

You need to increase the memory size in JDeveloper; go to setDomainEnv.cmd.

set WLS_HOME=%WL_HOME%\server
set XMS_SUN_64BIT=256
set XMS_SUN_32BIT=256
set XMX_SUN_64BIT=3072
set XMX_SUN_32BIT=3072
set XMS_JROCKIT_64BIT=256
set XMS_JROCKIT_32BIT=256
set XMX_JROCKIT_64BIT=1024
set XMX_JROCKIT_32BIT=1024

if "%JAVA_VENDOR%"=="Sun" (
    set WLS_MEM_ARGS_64BIT=-Xms256m -Xmx512m
    set WLS_MEM_ARGS_32BIT=-Xms256m -Xmx512m
) else (
    set WLS_MEM_ARGS_64BIT=-Xms512m -Xmx512m
    set WLS_MEM_ARGS_32BIT=-Xms512m -Xmx512m
)
and

set MEM_PERM_SIZE_64BIT=-XX:PermSize=256m
set MEM_PERM_SIZE_32BIT=-XX:PermSize=256m

if "%JAVA_USE_64BIT%"=="true" (
    set MEM_PERM_SIZE=%MEM_PERM_SIZE_64BIT%

) else (
    set MEM_PERM_SIZE=%MEM_PERM_SIZE_32BIT%
)

set MEM_MAX_PERM_SIZE_64BIT=-XX:MaxPermSize=1024m
set MEM_MAX_PERM_SIZE_32BIT=-XX:MaxPermSize=1024m
李不 2024-11-11 08:36:42

In my case, increasing the memory using the -Xmx option was the solution.

I was reading a 10 GB file in Java, and each time I got the same error. It happened when the value in the RES column of the top command reached the value set in the -Xmx option. After increasing the memory with the -Xmx option, everything went fine.

There was another point as well. When I set JAVA_OPTS or CATALINA_OPTS in my user account and increased the amount of memory again, I got the same error. I then printed the values of those environment variables in my code, and they were different from what I had set. The reason was that the Tomcat process was running as root, and since I was not a sudoer, I asked the admin to increase the memory in Tomcat's catalina.sh (see the example below).
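
As an aside, the usual place to put such settings for Tomcat (if you do have the rights) is a bin/setenv.sh file, which catalina.sh picks up automatically; the values below are examples only:

# $CATALINA_BASE/bin/setenv.sh -- example values only
CATALINA_OPTS="-Xms512m -Xmx2g"
export CATALINA_OPTS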

你怎么这么可爱啊 2024-11-11 08:36:42

This helped me to get rid of this error. This option disables explicit garbage collection (i.e. calls to System.gc()):

-XX:+DisableExplicitGC
