Java used heap vs. size of allocated objects
I have one probably dumb question. I am currently testing the CSP solvers choco and jacop. When I profile the app (graph colouring, about 3000 nodes), I don't fully understand the results.
The used heap space declared by the profiler is about 1GB of memory. The sum of all objects created is less than 100MB. Where are the other 900MB of RAM?
I think that method calls (the solvers probably use massive backtracking) are allocated on the stack, so the problem should not be there. When I reduce the maximum memory using the -Xmx parameter, the app fails with an exception:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
So it seems that the rest isn't unused, uncollected memory (in that case the GC would deallocate it rather than fail).
Thanks for your help.
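For reference, a minimal sketch (assuming a standard HotSpot JVM; the class name HeapCheck is mine) of how to query the JVM's own view of the heap, to compare against the profiler's numbers. Note that "used" heap counts live objects plus garbage that has not been collected yet, which can be far larger than the sum of reachable object sizes:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapCheck {
        public static void main(String[] args) {
            // The JVM's own view of the heap: "used" includes live objects
            // as well as garbage that has not been collected yet.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.println("used heap:      " + heap.getUsed() / (1024 * 1024) + " MB");
            System.out.println("committed heap: " + heap.getCommitted() / (1024 * 1024) + " MB");
            System.out.println("max heap:       " + heap.getMax() / (1024 * 1024) + " MB");
        }
    }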
Comments (2)
Can you get a map of the heap? Most likely it's fragmented, so those 100MB of objects are spread out across the entire memory space. The memory needed is a function both of the allocated objects and of how fast they're being allocated and then de-referenced. That error means the memory area is too small for the workload: the garbage collector is consuming a lot of CPU managing it, and it went beyond the allowed threshold.
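A minimal sketch of one way to get such a map, assuming a HotSpot JVM (the class name HeapDumper is illustrative; jmap -dump:live,format=b,file=heap.hprof <pid> does the same from the command line). The resulting .hprof file can then be opened in a heap analyzer such as Java VisualVM to see per-class counts and sizes:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            // HotSpot-specific MXBean exposing diagnostic operations such as heap dumps.
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // Write a binary .hprof dump; live=true restricts it to reachable objects.
            bean.dumpHeap("heap.hprof", true);
        }
    }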
Amir Afghani was probably correct in his comment. The classes (objects) in NetBeans 6.9.1 are probably somehow filtered (or the profiler is bogus?), because when I performed a heap dump from Java VisualVM and analyzed it, it showed me very different numbers (which in sum matched the used heap).
Thanks for your replies.