Why does a 64 bit JVM throw an out of memory error before reaching -Xmx?
I am wrestling with large memory requirements for a java app.
In order to address more memory I have switched to a 64 bit JVM and am using a large -Xmx. However, when -Xmx is above 2GB the app seems to run out of memory earlier than expected. When running with an -Xmx of 2400M and looking at the GC info from -verbosegc I get...
[Full GC 2058514K->2058429K(2065024K), 0.6449874 secs]
...and then it throws an out of memory exception. I would expect it to increase the heap above 2065024K before running out of memory.
In a trivial example, I have a test program that allocates memory in a loop and prints information from Runtime.getRuntime().maxMemory() and Runtime.getRuntime().totalMemory() until it eventually runs out of memory.
Running this over a range of -Xmx values, it appears that Runtime.getRuntime().maxMemory() reports about 10% less than -Xmx, and that total memory will not grow beyond 90% of Runtime.getRuntime().maxMemory().
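As a sanity check of the relationship between those values, here is a minimal sketch (the class name `MemReport` is mine) that queries the same Runtime methods and verifies the invariants they should satisfy on any heap size:

```java
public class MemReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // upper bound the JVM derives from -Xmx
        long total = rt.totalMemory(); // heap committed so far
        long free = rt.freeMemory();   // unused portion of the committed heap
        // The committed heap can never exceed the reported maximum,
        // and free memory is always a subset of the committed heap.
        System.out.println(total <= max);
        System.out.println(free <= total);
        System.out.println((total - free) >= 0); // "used" memory is non-negative
    }
}
```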
I am using the following 64bit jvm:
java version "1.6.0_26" Java(TM) SE Runtime Environment (build 1.6.0_26-b03) Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
Here is the code:
import java.util.ArrayList;

public class XmxTester {

    private static String xmxStr;
    private long maxMem;
    private long usedMem;
    private long totalMemAllocated;
    private long freeMem;
    private ArrayList<byte[]> list;

    public static void main(String[] args) {
        xmxStr = args[0];
        XmxTester xmxtester = new XmxTester();
    }

    public XmxTester() {
        byte[] mem = new byte[1024 * 1024 * 50];
        list = new ArrayList<byte[]>();
        while (true) {
            printMemory();
            eatMemory();
        }
    }

    private void eatMemory() {
        // Allocate 1MB at a time; on failure, print the last readings as CSV.
        byte[] mem = null;
        try {
            mem = new byte[1024 * 1024];
        } catch (Throwable e) {
            System.out.println(xmxStr + "," + ConvertMB(maxMem) + ","
                    + ConvertMB(totalMemAllocated) + "," + ConvertMB(usedMem)
                    + "," + ConvertMB(freeMem));
            System.exit(0);
        }
        list.add(mem);
    }

    private void printMemory() {
        maxMem = Runtime.getRuntime().maxMemory();
        freeMem = Runtime.getRuntime().freeMemory();
        totalMemAllocated = Runtime.getRuntime().totalMemory();
        usedMem = totalMemAllocated - freeMem;
    }

    double ConvertMB(long bytes) {
        int CONVERSION_VALUE = 1024;
        return Math.round(bytes / Math.pow(CONVERSION_VALUE, 2));
    }
}
I use this batch file to run it over multiple -Xmx settings. It includes references to a 32 bit JVM because I wanted a comparison with a 32 bit JVM; obviously the 32 bit invocation fails as soon as -Xmx is larger than about 1500M.
@echo off
set java64=<location of 64bit JVM>
set java32=<location of 32bit JVM>
set xmxval=64
:start
SET /a xmxval = %xmxval% + 64
%java64% -Xmx%xmxval%m -XX:+UseCompressedOops -XX:+DisableExplicitGC XmxTester %xmxval%
%java32% -Xms28m -Xmx%xmxval%m XmxTester %xmxval%
if %xmxval% GEQ 4500 goto end
goto start
:end
pause
This spits out a CSV which, when put into Excel, looks like this (apologies for my poor formatting here).
32 bit
XMX    max mem   total mem   used mem   free mem   % of xmx used before OOM exception
128    127       127         125        2          98.4%
192    191       191         189        1          99.0%
256    254       254         252        2          99.2%
320    318       318         316        1          99.4%
384    381       381         379        2          99.5%
448    445       445         443        1          99.6%
512    508       508         506        2          99.6%
576    572       572         570        1          99.7%
640    635       635         633        2          99.7%
704    699       699         697        1          99.7%
768    762       762         760        2          99.7%
832    826       826         824        1          99.8%
896    889       889         887        2          99.8%
960    953       953         952        0          99.9%
1024   1016      1016        1014       2          99.8%
1088   1080      1080        1079       1          99.9%
1152   1143      1143        1141       2          99.8%
1216   1207      1207        1205       2          99.8%
1280   1270      1270        1268       2          99.8%
1344   1334      1334        1332       2          99.9%
64 bit
XMX    max mem   total mem   used mem   free mem   % of xmx used before OOM exception
128    122       122         116        6          90.6%
192    187       187         180        6          93.8%
256    238       238         232        6          90.6%
320    285       281         275        6          85.9%
384    365       365         359        6          93.5%
448    409       409         402        6          89.7%
512    455       451         445        6          86.9%
576    512       496         489        7          84.9%
640    595       595         565        30         88.3%
704    659       659         629        30         89.3%
768    683       682         676        6          88.0%
832    740       728         722        6          86.8%
896    797       772         766        6          85.5%
960    853       832         825        6          85.9%
1024   910       867         860        7          84.0%
1088   967       916         909        6          83.5%
1152   1060      1060        1013       47         87.9%
1216   1115      1115        1068       47         87.8%
1280   1143      1143        1137       6          88.8%
1344   1195      1174        1167       7          86.8%
1408   1252      1226        1220       6          86.6%
1472   1309      1265        1259       6          85.5%
1536   1365      1317        1261       56         82.1%
1600   1422      1325        1318       7          82.4%
1664   1479      1392        1386       6          83.3%
1728   1536      1422        1415       7          81.9%
1792   1593      1455        1448       6          80.8%
1856   1650      1579        1573       6          84.8%
1920   1707      1565        1558       7          81.1%
1984   1764      1715        1649       66         83.1%
2048   1821      1773        1708       65         83.4%
2112   1877      1776        1769       7          83.8%
2176   1934      1842        1776       66         81.6%
2240   1991      1899        1833       65         81.8%
2304   2048      1876        1870       6          81.2%
2368   2105      1961        1955       6          82.6%
2432   2162      2006        2000       6          82.2%
Why does it happen?
Basically, there are two strategies that the JVM / GC can use to decide when to give up and throw an OOME.
It can keep going and going until there is simply not enough memory after garbage collection to allocate the next object.
It can keep going until the JVM is spending more than a given percentage of time running the garbage collector.
The first approach has the problem that for a typical application the JVM will spend a larger and larger percentage of its time running the GC, in an ultimately futile effort to complete the task.
The second approach has the problem that it might give up too soon.
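To my understanding, on HotSpot the two policies surface as different OutOfMemoryError messages: plain heap exhaustion typically reports "Java heap space", while hitting the GC-time threshold reports "GC overhead limit exceeded". A minimal sketch (the class name is mine) that provokes the first kind by requesting far more memory than any default heap will supply:

```java
public class OomeKind {
    public static void main(String[] args) {
        try {
            // Deliberately request ~64 GB in total, so the allocation path
            // gives up against the heap limit rather than grinding in GC.
            long[][] hold = new long[4][];
            for (int i = 0; i < hold.length; i++) {
                hold[i] = new long[Integer.MAX_VALUE - 8]; // ~16 GB each
            }
            System.out.println(hold.length); // realistically never reached
        } catch (OutOfMemoryError e) {
            // Typically "Java heap space" here; a GC-overhead death would
            // instead carry the message "GC overhead limit exceeded".
            System.out.println("OutOfMemoryError: " + e.getMessage());
        }
    }
}
```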
The actual behaviour of the GC in this area is governed by JVM options (-XX:...). Apparently, the default behaviour differs between 32 and 64 bit JVMs. This kind of makes sense, because (intuitively) the "out of memory death spiral" effect for a 64 bit JVM will last longer and be more pronounced.
My advice would be to leave this issue alone. Unless you really need to fill every last byte of memory with stuff it is better for the JVM to die early and avoid wasting lots of time. You can then restart it with more memory and get the job done.
Clearly, your benchmark is atypical. Most real programs simply don't try to grab all of the heap. It is possible that your application is atypical too. But it is also possible that your application is suffering from a memory leak. If that is the case, you should be investigating the leak rather than trying to figure out why you can't use all of memory.
It is honoring it! -Xmx is the upper limit on the heap size, not the criterion for deciding when to give up.
It is returning the max memory that it has used, not the max memory it is allowed to use.
See above.
I presume that it is because the JVM has hit the "too much time spent garbage collecting" threshold.
Not in general. The fudge factor depends on your application. For instance, an application with a larger rate of object churn (i.e. more objects created and discarded per unit of useful work) is likely to die with an OOME sooner.
IMO, the solution is to simply add an extra 20% (or more) on top of what you are currently adding. Assuming that you have enough physical memory, giving the JVM a larger heap is going to reduce overall GC overheads and make your application run faster.
Other tricks you could try are setting -Xmx and -Xms to the same value and adjusting the tuning parameter that sets the maximum "time spent garbage collecting" ratio.
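For reference, on this generation of HotSpot the knobs behind that policy are, as far as I know, -XX:+UseGCOverheadLimit (on by default with the parallel collector), -XX:GCTimeLimit (default 98, the "percentage of time in GC" threshold) and -XX:GCHeapFreeLimit (default 2, the minimum percentage of heap a collection must free). A sketch of a launch line combining both suggestions against the tester above; the exact defaults may vary by collector and release:

```
java -Xms2432m -Xmx2432m -XX:+UseGCOverheadLimit -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 XmxTester 2432
```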