What is the optimal freeheap to totalheap ratio? At what values of this ratio should I consider increasing the heap size / decreasing the heap size?
The ideal momentary ratio is 1. Ideally, your JVM would consume exactly the memory it required, no more and no less. That's a very hard target to reach ;)
The problem (as TNilsson points out) is that your application's memory requirements change over time as it does work, so you want it to have enough space not to cause constant collection/compaction more often than you can tolerate, and you want it to consume little enough space that you don't have to buy more RAM.
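If you want to watch that momentary ratio yourself, a minimal sketch using the standard `java.lang.Runtime` API (the class name `HeapRatio` is just a placeholder) might look like this:

```java
// Sketch: sampling the momentary free-to-total heap ratio with the
// standard java.lang.Runtime API. A value near 0 shortly after a full
// collection suggests the heap is undersized; a value that stays near 1
// suggests it is oversized.
public class HeapRatio {
    public static double freeToTotalRatio() {
        Runtime rt = Runtime.getRuntime();
        // freeMemory() and totalMemory() report the current heap, not -Xmx
        return (double) rt.freeMemory() / (double) rt.totalMemory();
    }

    public static void main(String[] args) {
        System.out.printf("free/total heap ratio: %.2f%n", freeToTotalRatio());
    }
}
```

Note that a single sample is nearly meaningless; the ratio swings with every collection cycle, so you would sample it over time (or just read GC logs instead).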
There is no single easy answer, let me give you two examples:
Example 1 - Your program allocates 100M worth of memory at startup, and then does not allocate any memory whatsoever for the rest of its run.
In this case, you clearly want to have a heap size of 100M (Well, perhaps 101 or something, but you get the point...) to avoid wasting space.
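For this static-footprint case, one common way to pin the heap is to set the initial and maximum sizes to the same value so the JVM neither wastes space nor pauses to resize (`MyApp` is a placeholder class name):

```shell
# Sketch: fix the heap at ~100M for a program with a static footprint.
# -Xms = initial heap, -Xmx = maximum heap; equal values pin the size.
java -Xms100m -Xmx100m MyApp
```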
Example 2 - Your program allocates 10M of memory per second. None of the data is persisted longer than 1 second. (e.g. you are doing a calculation that requires a lot of temporary data, and will return a single integer when you are done...)
Knowing the exact numbers is perhaps not so realistic, but it's an example.
Since you have 10M of "live" data, you will have to have at least a 10M heap. Other than that, you need to check how your garbage collector works. Simplified, the time a GC takes to complete is O(live set), that is, the amount of "dead" data does not really enter into it. With a constant live set size, your GC time is constant no matter your heap size. This leads to: larger heap -> better throughput.
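The allocation pattern in Example 2 can be sketched as follows (sizes scaled down; the class and method names are illustrative):

```java
// Sketch of the Example 2 pattern: lots of short-lived temporary data,
// a single small result that survives. The live set stays tiny, so each
// GC pause is short regardless of heap size, while a larger heap simply
// means the collector has to run less often.
public class TempDataCalc {
    static int calculate() {
        long sum = 0;
        for (int i = 0; i < 1_000; i++) {
            // ~10KB of garbage per iteration, dead as soon as the loop advances
            byte[] temp = new byte[10 * 1024];
            sum += temp.length;
        }
        return (int) (sum / 1_000);  // only this single integer survives
    }

    public static void main(String[] args) {
        System.out.println("result: " + calculate());
    }
}
```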
(Now, to really mess things up you add stuff like compaction of the heap and the image becomes even less clear...)
Conclusion
It's a simplified version of the matter, but the short answer is - It depends.
This probably depends on the rate that you allocate new objects. Garbage collection involves a lot of work tracing references from live objects. I have just been dealing with a situation where there was plenty of free memory (say 500 MB used, 500 MB free) but so much array allocation was happening that the JVM would spend 95% of its time doing GC. So don't forget about the runtime memory behaviour.
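One way to detect that "plenty of free heap, yet drowning in GC" situation programmatically is the standard `GarbageCollectorMXBean` API, which reports cumulative time spent in collections. A minimal sketch (the class name is a placeholder):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: total wall-clock milliseconds this JVM has spent in GC,
// summed across all collectors. Comparing this against process uptime
// reveals GC overhead even when freeMemory() looks healthy.
public class GcOverhead {
    static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();  // -1 if undefined for this collector
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        // churn some arrays so at least one collection is likely
        for (int i = 0; i < 100_000; i++) {
            byte[] garbage = new byte[1024];
        }
        System.out.println("ms spent in GC so far: " + totalGcMillis());
    }
}
```

Alternatively, running with `-verbose:gc` gives the same picture in the logs without any code changes.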
All those performance tuning articles that say something like "object allocation is really fast in Java" without mentioning that some allocations cause 1 second of GC time make me laugh.