Heap size unexpectedly large compared to the percentage used

Posted 2024-10-25 19:32:34

Have a memory allocation question I'd like your help with. We've analysed some of our services in top and we note that they have a RES value of about 1.8GB, which as far as I understand things means they're holding on to 1.8GB of memory at that time. Which would be fine if we'd just started them (they essentially read from a cache, do processing, and push off to another cache) but seeing as we still see this after CPU-intensive processing is completed, we're wondering if it means something isn't being GC'ed as we expected.

We run the program with the following parameters: -Xms256m -Xmx3096m, which as I understand it means an initial heap size of 256 MB and a maximum heap size of 3096 MB.

Now what I'd expect to see is the heap grow as needed initially, and then shrink as needed as the memory becomes deallocated (though this could be my first mistake). What we actually see with jvisualvm is the following:

  • 3 mins in: used heap is 1GB, heap
    size is 2GB
  • 5 mins in: we've done processing, so
    used heap drops dramatically to near
    enough zilch, heap size however only
    drops to about 1.5GB
  • 7 mins ->: small bits of real time
    processing periodically, used heap
    only ever between 100-200MB or so,
    heap size however remaining constant
    at about 1.7GB.
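
(For what it's worth, a little watcher along the lines of the sketch below could cross-check the jvisualvm figures from inside the process, using the standard MemoryMXBean API; "committed" corresponds to the heap size above and "used" to the used heap.)

    // Rough cross-check of the jvisualvm figures from inside the JVM:
    // "used" is the memory occupied by objects, "committed" is the heap
    // the JVM has actually reserved (a large part of what top shows as RES),
    // and "max" is the -Xmx ceiling.
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapWatcher {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(10_000);
            }
        }
    }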

My question would be, why hasn't my heap shrunk as I perhaps expected it to? Isn't this robbing other processes on the Linux box of valuable memory, and if so, how could I fix it? We do see out-of-memory errors on it sometimes, and since these processes are allocated the most 'unexpected' amount of memory, I thought it best to start with them.

Cheers,
Dave.

(Please excuse a possible lack of understanding of JVM memory tuning!)

4 Answers

尛丟丟 2024-11-01 19:32:34

You might want to see this answer about tuning heap expansion and shrinking. By default the JVM is not too aggressive about shrinking the heap. Furthermore, if the heap has enough free space for a long period of time it won't trigger a GC, which I believe is the only time it considers shrinking it.

Ideally you configure the maximum to a value that gives your application enough headroom under full load, yet would still be acceptable for the OS if it were all in use all the time. It's not uncommon to set the minimum equal to the maximum for predictability and potentially better performance (I don't have anything to reference for that offhand).
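
As a rough illustration of that last suggestion (the jar name is just a placeholder), the launch line could look like:

    # Hypothetical launch line: pinning the heap by setting -Xms equal to -Xmx,
    # trading a constant memory footprint for predictability.
    java -Xms2048m -Xmx2048m -jar cache-processor.jar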

留一抹残留的笑 2024-11-01 19:32:34

I don't have a complete answer, but a similar question has come up before. From the earlier discussion, you should investigate -XX:MaxHeapFreeRatio= as the tuning parameter to force heap release back to the operating system. There's documentation here, and I believe the default value allows a very large amount of unused heap to remain owned by the JVM.
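
For illustration only (exact defaults and behaviour depend on the collector and JVM version), the flags would sit alongside the existing ones, something like:

    # Hypothetical example: cap how much free (committed but unused) heap the
    # GC may keep around, so it gives memory back to the OS sooner. The jar
    # name is a placeholder.
    java -Xms256m -Xmx3096m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30 -jar cache-processor.jar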

私野 2024-11-01 19:32:34

Well, the GC does not always run when you think it does, and it does not always collect everything that is eligible. It may also only start to collect objects from the old gen space when it nearly runs out of heap space, since collecting from the old gen normally involves a stop-the-world collection, which the GC tries to avoid until it really needs to do it.
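
A quick way to see that in action (purely as a diagnostic, not a fix) is to request a full collection explicitly and compare the numbers the JVM reports before and after; note that System.gc() is only a hint and may be ignored:

    // Diagnostic sketch: print committed vs. used heap, ask for a full GC
    // (System.gc() is only a hint), then print the figures again.
    public class HeapSnapshot {
        public static void main(String[] args) {
            print("before gc");
            System.gc();
            print("after gc");
        }

        private static void print(String label) {
            Runtime rt = Runtime.getRuntime();
            long committed = rt.totalMemory();         // heap currently reserved by the JVM
            long used = committed - rt.freeMemory();   // portion actually occupied by objects
            System.out.printf("%s: used=%dMB committed=%dMB max=%dMB%n",
                    label, used >> 20, committed >> 20, rt.maxMemory() >> 20);
        }
    }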

说谎友 2024-11-01 19:32:34

Maybe you could try some profiling, with TPTP, visualvm or JProbe (commercial, but trial available), to find out exactly what happens.

Another thing to look out for is file handles; I don't have the details, but a colleague of mine ran into a heap saturation problem a couple of years ago that was caused by a process opening many files: each time it opened one, a 4 KB buffer was allocated in the native heap and only freed at the end of its processing. I hope these rather vague indications may help...
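
If open files do turn out to be involved, the usual safeguard is to close each stream as soon as its processing is done, for example with try-with-resources (Java 7+); the file name below is just a placeholder:

    // Sketch: close the reader promptly so any per-file buffers are released
    // as soon as the file has been processed, even if an exception is thrown.
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ProcessFile {
        public static void main(String[] args) throws IOException {
            try (BufferedReader reader =
                    Files.newBufferedReader(Paths.get("input.dat"), StandardCharsets.UTF_8)) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // ... process the line ...
                }
            } // reader (and its buffers) are closed here
        }
    }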
