Why does the Sun JVM continue to consume ever more RSS memory even when the heap, etc sizes are stable?
Over the past year I've made huge improvements in my application's Java heap usage--a solid 66% reduction. In pursuit of that, I've been monitoring various metrics, such as Java heap size, CPU, and Java non-heap, via SNMP.
Recently, I've been monitoring how much real memory (RSS, resident set) is used by the JVM, and am somewhat surprised. The real memory consumed by the JVM seems totally independent of my application's heap size, non-heap, eden space, thread count, etc.
Heap Size as measured by Java SNMP
Java Heap Used Graph http://lanai.dietpizza.ch/images/jvm-heap-used.png
Real Memory in KB. (E.g., 1 million KB ≈ 1 GB)
Java RSS Graph http://lanai.dietpizza.ch/images/jvm-rss.png
(The three dips in the heap graph correspond to application updates/restarts.)
This is a problem for me because all that extra memory the JVM is consuming is 'stealing' memory that could be used by the OS for file caching. In fact, once the RSS value reaches ~2.5-3GB, I start to see slower response times and higher CPU utilization from my application, mostly due to IO wait. At some point, paging to the swap partition kicks in. This is all very undesirable.
So, my questions:
- Why is this happening? What is going on "under the hood"?
- What can I do to keep the JVM's real memory consumption in check?
The gory details:
- RHEL4 64-bit (Linux - 2.6.9-78.0.5.ELsmp #1 SMP Wed Sep 24 ... 2008 x86_64 ... GNU/Linux)
- Java 6 (build 1.6.0_07-b06)
- Tomcat 6
- Application (on-demand HTTP video streaming)
- High I/O via java.nio FileChannels
- Hundreds to low thousands of threads
- Low database use
- Spring, Hibernate
Relevant JVM parameters:
-Xms128m
-Xmx640m
-XX:+UseConcMarkSweepGC
-XX:+AlwaysActAsServerClassMachine
-XX:+CMSIncrementalMode
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintGCApplicationStoppedTime
-XX:+CMSLoopWarn
-XX:+HeapDumpOnOutOfMemoryError
How I measure RSS:
ps x -o command,rss | grep java | grep latest | cut -b 17-
This goes into a text file and is read into an RRD database by the monitoring system at regular intervals. Note that ps outputs kilobytes.
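(A slightly more targeted way to grab the same number is sketched below; the pgrep pattern is an assumption based on the grep filters above, so adjust it to match your actual command line.)

```sh
# Prints the RSS (in KB, same as the ps pipeline above) of the JVM whose
# command line matches both "java" and "latest"; -o rss= suppresses the header.
ps -o rss= -p "$(pgrep -f 'java.*latest')"
```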
The Problem & Solutions:
While in the end it was ATorras's answer that proved ultimately correct, it was kdgregory who guided me to the correct diagnostic path with the use of pmap. (Go vote up both their answers!) Here is what was happening:
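(For anyone retracing the diagnosis: the exact pmap invocation isn't quoted here. A check along the following lines shows the mapping counts referenced in the list below; the .jrb extension is my guess at the JRobin database file names, so substitute whatever yours are called.)

```sh
# Total number of mappings in the JVM's address space, and how many of them
# are memory-mapped JRobin database files.
PID=$(pgrep -f 'java.*latest')
pmap -x "$PID" | wc -l
pmap -x "$PID" | grep -c '\.jrb'
```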
Things I know for sure:
- My application records and displays data with JRobin 1.4, something I coded into my app over three years ago.
- The busiest instance of the application currently creates:
  - over 1000 new JRobin database files (at about 1.3MB each) within an hour of starting up
  - ~100+ more each day after start-up
- The app updates these JRobin database objects once every 15 seconds, if there is something to write.
- In the default configuration, JRobin:
  - uses a java.nio-based file access back-end. This back-end maps MappedByteBuffers to the files themselves (a minimal sketch of this pattern appears just below this list).
  - once every five minutes, a JRobin daemon thread calls MappedByteBuffer.force() on every underlying JRobin database MBB.
- pmap listed:
  - 6500 mappings
  - 5500 of which were 1.3MB JRobin database files, which works out to ~7.1GB
That last point was my "Eureka!" moment.
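The pattern described in the list above boils down to something like the following sketch. This is not JRobin's actual code, and the file name and size are invented, but it shows why thousands of mapped ~1.3MB files plus a periodic force() keep so many pages resident.

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedDbSketch {
    public static void main(String[] args) throws Exception {
        // One mapping per database file; the real application ended up with
        // ~5500 of these at ~1.3MB each (~7.1GB of mapped address space).
        RandomAccessFile raf = new RandomAccessFile("example.jrb", "rw");
        FileChannel channel = raf.getChannel();
        MappedByteBuffer mbb = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1300000);

        // An update dirties mapped pages...
        mbb.putLong(0, System.currentTimeMillis());

        // ...and the five-minute daemon's force() flushes them to disk,
        // touching the mapping again and keeping its pages counted in RSS.
        mbb.force();

        channel.close();
        raf.close();
    }
}
```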
My corrective actions:
- Consider updating to the latest JRobinLite 1.5.2, which is apparently better.
- Implement proper resource handling on JRobin databases. At the moment, once my application creates a database, it never dumps it after the database is no longer actively used.
- Experiment with moving the MappedByteBuffer.force() to database update events, and not a periodic timer. Will the problem magically go away?
- Immediately, change the JRobin back-end to the java.io implementation--a one-line change (a hedged sketch of such a change follows the graph below). This will be slower, but it is possibly not an issue. Here is a graph showing the immediate impact of this change.
Java RSS memory used graph http://lanai.dietpizza.ch/images/stackoverflow-rss-problem-fixed.png
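For what it's worth, a hedged sketch of what that one-line back-end change might look like. RrdBackendFactory.setDefaultFactory and the "FILE" factory name are my assumptions about the JRobin API rather than something taken from this post, so check the javadoc of the JRobin version you actually run.

```java
import org.jrobin.core.RrdBackendFactory;

public class UseFileBackend {
    public static void main(String[] args) throws Exception {
        // Assumed API: select the java.io-based back-end ("FILE") instead of
        // the memory-mapped "NIO" default, before any RrdDb is opened, so no
        // MappedByteBuffer is created per database file.
        RrdBackendFactory.setDefaultFactory("FILE");
        // ... create/open RrdDb instances as usual from here on ...
    }
}
```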
Questions that I may or may not have time to figure out:
- What is going on inside the JVM with MappedByteBuffer.force()? If nothing has changed, does it still write the entire file? Part of the file? Does it load it first?
- Is there a certain amount of the MBB always in RSS at all times? (RSS was roughly half the total allocated MBB sizes. Coincidence? I suspect not.)
- If I move the MappedByteBuffer.force() to database update events, and not a periodic timer, will the problem magically go away?
- Why was the RSS slope so regular? It does not correlate to any of the application load metrics.
4 Answers:
Just an idea: NIO buffers are placed outside the JVM.
EDIT:
As of 2016, it's worth considering @Lari Hotari's comment [ Why does the Sun JVM continue to consume ever more RSS memory even when the heap, etc sizes are stable? ], because back in 2009, RHEL4 had glibc < 2.10 (~2.3).
Regards.
RSS represents pages that are actively in use -- for Java, it's primarily the live objects in the heap, and the internal data structures in the JVM. There's not much that you can do to reduce its size except use fewer objects or do less processing.
In your case, I don't think it's an issue. The graph appears to show 3 meg consumed, not 3 gig as you write in the text. That's really small, and is unlikely to be causing paging.
So what else is happening in your system? Is it a situation where you have lots of Tomcat servers, each consuming 3M of RSS? You're throwing in a lot of GC flags, do they indicate the process is spending most of its time in GC? Do you have a database running on the same machine?
Edit in response to comments
Regarding the 3M RSS size - yeah, that seemed too low for a Tomcat process (I checked my box, and have one at 89M that hasn't been active for a while). However, I don't necessarily expect it to be > heap size, and I certainly don't expect it to be almost 5 times heap size (you use -Xmx640) -- it should at worst be heap size + some per-app constant.
Which causes me to suspect your numbers. So, rather than a graph over time, please run the following to get a snapshot (replace 7429 by whatever process ID you're using):
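An invocation along these lines (my reconstruction, not a quote from the answer) prints the RSS, SZ, and VSZ fields discussed below:

```sh
# Snapshot of the memory fields for one process (replace 7429 with your PID):
# rss = resident set size; sz and vsz are explained in the edit below.
ps -p 7429 -o pid,rss,sz,vsz,pcpu,cmd
```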
(Edit by Stu so we can have formatted results to the above request for ps info:)
Edit to explain these numbers for posterity
RSS, as noted, is the resident set size: the pages in physical memory. SZ holds the number of pages writable by the process (the commit charge); the manpage describes this value as "very rough". VSZ holds the size of the virtual memory map for the process: writable pages plus shared pages.
Normally, VSZ is slightly > SZ, and very much > RSS. This output indicates a very unusual situation.
Elaboration on why the only solution is to reduce objects
RSS represents the number of pages resident in RAM -- the pages that are actively accessed. With Java, the garbage collector will periodically walk the entire object graph. If this object graph occupies most of the heap space, then the collector will touch every page in the heap, requiring all of those pages to become memory-resident. The GC is very good about compacting the heap after each major collection, so if you're running with a partially filled heap, most of the pages should not need to be in RAM.
And some other options
I noticed that you mentioned having hundreds to low thousands of threads. The stacks for these threads will also add to the RSS, although it shouldn't be much. Assuming that the threads have a shallow call depth (typical for app-server handler threads), each should only consume a page or two of physical memory, even though there's a half-meg commit charge for each.
The JVM uses more memory than just the heap. For example, Java methods, thread stacks, and native handles are allocated in memory separate from the heap, as are the JVM's internal data structures.
In your case, possible causes of troubles may be: NIO (already mentioned), JNI (already mentioned), excessive threads creation.
About JNI, you wrote that the application wasn't using JNI but... What type of JDBC driver are you using? Could it be a type 2, and leaking? It's very unlikely though as you said database usage was low.
About excessive thread creation: each thread gets its own stack, which may be quite large. The stack size actually depends on the VM, OS, and architecture; e.g., for JRockit it's 256K on Linux x64 (I didn't find the equivalent figure for Sun's VM in Sun's documentation). This directly impacts thread memory (thread memory = thread stack size * number of threads). If you create and destroy lots of threads, the memory is probably not reused.
To be honest, hundreds to low thousands of threads seems enormous to me. That said, if you really need that many threads, the thread stack size can be configured via the -Xss option. This may reduce memory consumption, but I don't think it will solve the whole problem. I tend to think that there is a leak somewhere when I look at the real memory graph.
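To make that stack-size arithmetic concrete, here is an illustrative launch line. The 256k value and the jar name are made-up examples rather than a recommendation, and the 1MB figure is only the typical 64-bit default.

```sh
# thread memory ~= thread stack size x number of threads
#   2000 threads x ~1MB (typical 64-bit default) ~= ~2GB of stack address space
#   2000 threads x 256KB (-Xss256k)              ~= ~500MB
# Only the stack pages a thread actually touches count toward RSS.
java -Xss256k -Xms128m -Xmx640m -XX:+UseConcMarkSweepGC -jar app.jar
```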
The current garbage collector in Java is well known for not releasing allocated memory, even though the memory is no longer required. It's quite strange, however, that your RSS size increases to >3GB although your heap size is limited to 640MB. Are you using any native code in your application, or do you have the native performance optimization pack for Tomcat enabled? In that case, you may of course have a native memory leak in your code or in Tomcat.
With Java 6u14, Sun introduced the new "Garbage-First" garbage collector, which is able to release memory back to the operating system if it's not required anymore. It's still categorized as experimental and not enabled by default, but if it is a feasible option for you, I would try to upgrade to the newest Java 6 release and enable the new garbage collector with the command line arguments "-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC". It might solve your problem.