Why is New Relic taking up so much Tomcat memory?
Recently we started using New Relic to monitor our production web app hosted on a Tomcat 7.0.6 server, but we have observed that the memory footprint of this Tomcat keeps growing; within a week it eats up all the memory of the server (an AWS High-Memory Double Extra Large instance) and becomes unresponsive, and the only way to get it back is to restart it.
We pass -Xms and -Xmx arguments when starting Tomcat, but within a few hours the memory usage of the Tomcat process crosses the -Xmx value and keeps increasing until all the server memory is exhausted. Here is the process command:
/usr/java/jdk1.6.0_24//bin/java
-Djava.util.logging.config.file=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/conf/logging.properties
-Xms8192m
-Xmx8192m
-javaagent:/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/newrelic/newrelic.jar
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Duser.timezone=Asia/Calcutta
-Djava.endorsed.dirs=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/endorsed
-classpath /xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/bin/bootstrap.jar:/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/bin/tomcat-juli.jar
-Dcatalina.base=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6
-Dcatalina.home=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6
-Djava.io.tmpdir=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/temp org.apache.catalina.startup.Bootstrap start
Ideally I would expect this process not to use more than 8 GB of memory, but within hours it goes above 10 GB, within a few days it goes above 20 GB, and everything else on the server suffers because of it (I use 'top' to see memory usage). How is this possible?
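One way to narrow this down (a rough diagnostic sketch, not part of the original report; <pid> stands for the Tomcat process id) is to compare the JVM's own heap usage with the overall process size, since -Xmx only bounds the Java heap and says nothing about native allocations:

    # heap usage and capacity as reported by the JVM, sampled every 5 seconds
    jstat -gc <pid> 5000
    # resident set size of the whole process, in kilobytes
    ps -o rss= -p <pid>

If jstat shows the heap staying within the 8 GB limit while the RSS keeps climbing, the growth is in non-heap (native) memory, which top reports but -Xmx does not control.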
3 Answers
There's an issue which affects any Sun/Oracle JVM and manifests as unbounded growth in non-heap (native) memory. There is a workaround in place for New Relic Java agent versions 2.16+: add a shutdown delay to class transformation in your newrelic.yml file, in the common section.
From the changelog
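The changelog excerpt is not reproduced here, but as a rough sketch of what the workaround described above might look like in newrelic.yml (the key names, the value, and its unit are my assumptions and should be checked against the 2.16+ agent documentation, not something confirmed by this answer):

    common:
      # hypothetical sketch: shut down the class transformer after a delay
      # to work around the JVM issue described above
      class_transformer:
        shutdown_delay: 3600  # assumed to be in seconds; confirm in the agent docs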
I am sharing some more information on the incident reported above. The memory leak is not in the Java heap. The application never hits any OutOfMemory error (8 GB is the Java heap max limit we have set). However, the virtual and resident memory keep increasing until the RAM runs out.
We have confirmed that this leak happens when the New Relic agent is used.
Version: New Relic Agent v2.1.2
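To see where the growth is happening outside the heap (again a hypothetical diagnostic, not part of the report above; <pid> is the Tomcat process id), one could periodically snapshot the process's memory mappings and confirm that the heap itself stays bounded:

    # total resident memory across all mappings; repeat and compare over time
    pmap -x <pid> | tail -n 1
    # heap configuration and current usage as the JVM sees it (JDK 6 tool)
    jmap -heap <pid>

A steadily growing pmap total alongside a stable jmap heap summary is consistent with the native (non-heap) leak described here.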
Sorry for the trouble. We (New Relic) are investigating the problem, but the first suggestion is to please try the latest 2.2.1 version of the Java agent, which made substantial changes to the way we instrument classes.
I will follow up here when we have more information.