Memory analysis: how to detect which application/package is consuming too much memory

Published on 2024-09-03 19:07:30


I have a situation here at work where we run a Java EE server with several applications deployed on it. Lately, we've been having frequent OutOfMemoryExceptions. We suspect some of the apps might be behaving badly, maybe leaking, or something.

The problem is, we can't really tell which one. We have run some memory profilers (like YourKit), and they're pretty good at telling what classes use the most memory. But they don't show relationships between classes, so that leaves us with a situation like this: We see that there are, say, lots of Strings and int arrays and HashMap entries, but we can't really tell which application or package they come from.

Is there a way of knowing where these objects come from, so we can try to pinpoint the packages (or apps) that are allocating the most memory?

Comments (3)

翻身的咸鱼 2024-09-10 19:07:30


There are several things that one could do in this situation:

  • Configure the Java EE application server to produce a heap dump on OOME. This feature is available via a JVM parameter since the 1.5 days. Once a dump has been obtained, it can be analyzed offline, using tools like Eclipse MAT. The important part is figuring out the dominator tree. (See the sketch of the relevant flags after this list.)
  • Perform memory profiling on a test server; NetBeans is good at this. This is bound to take more time than the first approach when it comes to analyzing the root cause, since the exact conditions of the memory allocation failure must be present. If you do have automated integration/functional tests, then deducing the root cause will be easier. The trick is to take periodic heap dumps and analyze the classes that are contributing to the increase in heap consumption. There might not necessarily be a leak - it could be a case of insufficient heap size.
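
For reference, the JVM flags behind the first bullet are typically these (a minimal sketch; /var/dumps and the JAVA_OPTS variable name are just examples, most app servers expose some equivalent in their startup scripts):

JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps"

With those flags set, the JVM writes a java_pid&lt;pid&gt;.hprof file into that directory the first time an OutOfMemoryError is thrown, and you can then open it in Eclipse MAT and look at the dominator tree.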
忆沫 2024-09-10 19:07:30


A quick thought is that you could probably do some reflection, if you don't mind some performance trade-off...

深海夜未眠 2024-09-10 19:07:30


What I have found helpful is:

jmap -J-d64 -histo $PID

(remove the -J-d64 option for 32-bit arch)

This will output something like this:

num     #instances         #bytes  class name
----------------------------------------------
1:       4040792     6446686072  [B
2:       3420444     1614800480  [C
3:       3365261      701539904  [I
4:       7109024      227488768  java.lang.ThreadLocal$ThreadLocalMap$Entry
5:       6659946      159838704  java.util.concurrent.locks.ReentrantReadWriteLock$Sync$HoldCounter

And then from there you can try to further diagnose the problem, doing diffs and whatnot to compare successive snapshots.
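
A minimal sketch of that snapshot-diffing workflow (the interval and file names are just placeholders):

jmap -histo $PID > histo-1.txt
sleep 600          # let the suspected leak grow for a while
jmap -histo $PID > histo-2.txt
diff histo-1.txt histo-2.txt

Classes whose instance counts and byte totals keep climbing from one snapshot to the next are the prime suspects, and their package names usually point you toward the offending application.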

This will only pause the VM for a brief time, even for big heaps, so you can safely do this in production (during off-peak hours, hopefully :) )
