What's the memory overhead of analyzing a heap with jhat?
jhat is a great tool for analyzing Java heap dumps, but for large heaps it's easy to waste a lot of time. Give jhat too small a runtime heap, and it may take 15 minutes just to fail with an OutOfMemoryError.
What I'd like to know is: is there a rule of thumb for how much -Xmx heap I should give jhat, based on the size of the heap dump file? Only considering binary heap dumps for now.
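For reference, a binary heap dump of the kind in question is typically produced with jmap; the file name and pid below are illustrative:

    # Write a binary-format heap dump of a running JVM (pid 1234 is illustrative)
    jmap -dump:format=b,file=heap.hprof 1234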
Some very limited experimentation indicates that it's at least 3-4 times the size of the heap dump. I was able to analyze a three-and-change gigabyte heap file with -J-mx12G.
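The full invocation looked roughly like this; -J passes the flag through to jhat's own JVM, and the file name is illustrative:

    # Give jhat's JVM a 12 GB max heap before it loads the dump
    jhat -J-mx12G heap.hprof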
Does anyone else have more conclusive experimental data, or an understanding of how jhat represents heap objects at runtime?
data points:
- this thread indicates a 5x overhead, but my experimentation on late-model jhats (1.6.0_26) indicates it's not quite that bad
- this thread indicates a ~10x overhead
- a colleague backs up the 10x theory: a 2.5 GB heap file fails with -J-mx23G
- yet another colleague got a 6.7 GB dump to work with a 30 GB heap, for a 4.4x overhead.
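Taken together these reports span roughly 4x to 10x, so a cautious starting point would be to budget about 5x the dump file size and retry higher if jhat still runs out of memory. A small shell sketch of that heuristic (the 5x multiplier is an assumption drawn from the reports above, not a documented figure; assumes GNU coreutils):

    # Hypothetical helper: suggest a jhat heap size from the dump size.
    # The 5x multiplier is a guess based on the data points above.
    DUMP=heap.hprof
    SIZE_GB=$(du -BG "$DUMP" | cut -f1 | tr -d 'G')   # dump size, rounded up to whole GB
    echo "try: jhat -J-mx$((SIZE_GB * 5))G $DUMP"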