Better way to identify objects that are not being garbage collected?
In a nutshell
I have a program that is gradually using more and more memory over time. I am using jmap and jhat to try and diagnose it but am still not quite there.
Background
The program is a long-running server backed by an hbase datastore providing a thrift service to a bunch of other stuff. However, after running for a few days, it will eventually hit the allocated heap limit, and thrash back and forth with nearly all time spent in garbage collection. It would seem references are getting kept to a lot of data somewhere
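A common cause of this symptom in a long-running server is an unintentionally long-lived collection (a cache, listener list, or registration map) that keeps a strong reference to every result it has ever seen, so nothing it holds ever becomes eligible for collection. A minimal, hypothetical sketch of the pattern (the CACHE field and names are illustrative, not taken from the actual server):

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // A long-lived static collection pins everything added to it, even after
    // all other references are gone -- a classic source of gradual heap growth.
    static final List<byte[]> CACHE = new ArrayList<>();

    static boolean stillReachable() {
        byte[] result = new byte[1024];
        CACHE.add(result);                               // "forgotten" registration
        WeakReference<byte[]> ref = new WeakReference<>(result);
        result = null;                                   // drop the local reference
        System.gc();                                     // a strong ref remains via CACHE,
        return ref.get() != null;                        // so the weak ref is never cleared
    }

    public static void main(String[] args) {
        System.out.println(stillReachable());            // prints "true"
    }
}
```

If this is what's happening, the retained objects are genuinely live from the GC's point of view, and the fix is evicting or bounding the collection, not tuning the collector.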
What I've done so far
After fiddling about with jstat and jconsole, I ended up taking heap dumps of the running process with jmap and running them through jhat, but the numbers simply don't add up to anywhere near the memory utilisation
jmap -F -dump:live,format=b,file=heap.dump 12765
jmap -F -dump:format=b,file=heap.all 12765
Some stuff off the top of the histogram
Class Instance Count Total Size
class [B 7493 228042570
class java.util.HashMap$Entry 2833152 79328256
class [Ljava.util.HashMap$Entry; 541 33647856
class [Ljava.lang.Object; 303698 29106440
class java.lang.Long 2851889 22815112
class org.apache.hadoop.hbase.KeyValue 303593 13358092
class org.apache.hadoop.hbase.client.Result 303592 9714944
class [I 14074 9146580
class java.util.LinkedList$Entry 303841 7292184
class [Lorg.apache.hadoop.hbase.KeyValue; 303592 7286208
class org.apache.hadoop.hbase.io.ImmutableBytesWritable 305097 4881552
class java.util.ArrayList 302633 4842128
class [Lorg.apache.hadoop.hbase.client.Result; 297 2433488
class [C 5391 320190
While the totals here don't add up to it, at the point the heap dump was taken the process was using over 1 GB of memory.
The immediately apparent culprit seems to be that I'm leaving HBase Result and KeyValue entries all over the place. Trying to trace up the references, I eventually hit:
Object at 0x2aab091e46d0
instance of org.apache.hadoop.hbase.ipc.HBaseClient$Call@0x2aab091e46d0 (53 bytes)
Class:
class org.apache.hadoop.hbase.ipc.HBaseClient$Call
Instance data members:
done (Z) : true
error (L) : <null>
id (I) : 57316
param (L) : org.apache.hadoop.hbase.ipc.HBaseRPC$Invocation@0x2aab091e4678 (48 bytes)
this$0 (L) : org.apache.hadoop.hbase.ipc.HBaseClient@0x2aaabfb78f30 (86 bytes)
value (L) : org.apache.hadoop.hbase.io.HbaseObjectWritable@0x2aab092e31c0 (40 bytes)
References to this object:
Other Queries
Reference Chains from Rootset
Exclude weak refs
Include weak refs
Objects reachable from here
Help needed:
There seem to be no references to this final HBaseClient$Call object (or any of the others like it, each of which holds a thousand or so KeyValues with all their internal data). Shouldn't it be getting GCed? Am I just misunderstanding how the GC works, or the extent to which jhat will verify references? If so, what further can I do to track down my "missing" memory? What other steps can I take to track this down?
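One way to narrow this down is to diff the two dumps you already take: the `live` variant forces a full GC first, so it contains only reachable objects, and anything present only in the full dump is garbage awaiting collection rather than a leak. The same kind of dump can also be triggered programmatically from inside the server via the HotSpot diagnostic MBean, which avoids the `-F` forced attach entirely. A sketch (the HeapDumper class name is my own; this requires a HotSpot JVM):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    /**
     * Writes an hprof heap dump to the given path (recent JDKs require the
     * file name to end in .hprof). With liveOnly=true the JVM runs a full GC
     * first, so the dump contains only strongly reachable objects -- diffing
     * it against a liveOnly=false dump separates real leaks from garbage that
     * simply hasn't been collected yet.
     */
    public static void dump(String path, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, liveOnly);                  // fails if the file already exists
    }

    public static void main(String[] args) throws Exception {
        dump("heap-live.hprof", true);
    }
}
```

A dump produced this way (or with plain `jmap -dump:live,...` without `-F`) is also less likely to contain the inconsistent reference information that a forced dump of a running process can produce.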
Comments (3)
Check this Java Memory Monitoring article
This may help you overcome your issue:
http://java.sun.com/developer/technicalArticles/J2SE/monitoring/
You can try JProfiler 10 days for free. With JProfiler you will solve this issue in a few minutes.
I recommend first trying JVisualVM, which is distributed with recent JDKs.
Also, if you can justify the cost, I have found JProfiler to be an excellent tool over the years.