Memory leak with Hibernate Search
Greetings,
We have recently been facing a memory leak in one of our applications.
Development environment: Lucene 2.4.0, Hibernate Search 3.2.0, Hibernate 3.5.0, Spring 2.5, and Ehcache 1.4.1.
The problem is that memory usage in the old generation gradually increases over time. Eventually the JVM runs out of memory; the JVM stats show the old-generation occupancy reaching its maximum capacity. As a result, I have to restart the web application to release the memory.
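For reference, a minimal sketch of how the old-generation occupancy could be polled from inside the JVM rather than from external jvm stats; the pool-name matching ("Old" / "Tenured") is an assumption about the collector in use:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class OldGenMonitor {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                // Old-generation heap pools are named e.g. "PS Old Gen", "CMS Old Gen"
                // or "Tenured Gen" depending on the garbage collector in use.
                if (pool.getType() == MemoryType.HEAP
                        && (pool.getName().contains("Old") || pool.getName().contains("Tenured"))) {
                    MemoryUsage usage = pool.getUsage();
                    System.out.printf("%s: used=%,d bytes, max=%,d bytes%n",
                            pool.getName(), usage.getUsed(), usage.getMax());
                }
            }
            Thread.sleep(60000L); // sample once per minute
        }
    }
}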
I generated a heap dump from the application and examined it with Eclipse Memory Analyzer (MAT). I see this:
123,726 instances of "org.apache.lucene.index.TermInfosReader$ThreadResources", loaded by "org.apache.catalina.loader.WebappClassLoader @ 0x7f5d71ffe3c8" occupy 3,139,449,272 (79.54%) bytes. These instances are referenced from one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]", loaded by "<system class loader>"
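For context, a heap dump like the one analyzed above can be captured with jmap or, programmatically, through the HotSpotDiagnosticMXBean, as in the minimal sketch below (assuming a HotSpot JVM; the output path is only an example):

import java.io.IOException;
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws IOException {
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Write a binary .hprof dump of live objects only; the resulting file
        // can be opened in Eclipse Memory Analyzer (MAT).
        diag.dumpHeap("/tmp/heap.hprof", true);
    }
}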
Can you give me some advice, please?
Thanks