Is it good to set the maximum and minimum JVM heap sizes to the same value?

Comments (6)

月朦胧 2024-12-03 23:34:51

Update: This answer was originally written in 2014 and is obsolete.

Peter's answer is correct in that -Xms is allocated at startup and the heap will grow up to -Xmx (max heap size), but it's a little misleading in how he has worded his answer. (Sorry Peter, I know you know this stuff cold.)

Setting ms == mx effectively turns off this behavior. While this used to be a good idea in older JVMs, it is no longer the case. Growing and shrinking the heap allows the JVM to adapt to increases in memory pressure, yet reduce pause times by shrinking the heap when memory pressure drops. Sometimes this behavior doesn't give you the performance benefits you'd expect, and in those cases it's best to set mx == ms.
An OOME is thrown when more than 98% of total time is spent collecting and the collections cannot recover more than 2% of the heap. If you are not at the max heap size then the JVM will simply grow the heap so that you stay clear of those thresholds. You cannot have an OutOfMemoryError on startup unless your heap hits the max heap size and meets the other conditions that define an OutOfMemoryError.
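
As a hedged illustration (the class name and heap sizes below are my own, not from the original answer): keep every allocation reachable so each collection recovers almost nothing, and the heap first grows to -Xmx, then the collector gives up with an OutOfMemoryError rather than thrashing forever. On the parallel collector the 98%/2% thresholds correspond to the standard HotSpot flags -XX:GCTimeLimit and -XX:GCHeapFreeLimit.

import java.util.ArrayList;
import java.util.List;

// Run with a small max heap, e.g.:  java -Xms16m -Xmx64m OomeDemo
// Everything stays reachable, so GC frees almost nothing; the JVM grows the heap
// until it hits -Xmx and then throws OutOfMemoryError (often "GC overhead limit
// exceeded" on the parallel collector) instead of collecting forever.
public class OomeDemo {
    public static void main(String[] args) {
        List<long[]> retained = new ArrayList<>();
        while (true) {
            retained.add(new long[1024]);   // ~8 KB per iteration, never released
        }
    }
}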

For the comments that have come in since I posted: I don't know what the JMonitor blog entry is showing, but this is from the PSYoung collector.

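// PSYoungGen sizing: the desired new-gen size is eden + survivors, clamped between min_gen_size() and gen_size_limit()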
size_t desired_size = MAX2(MIN2(eden_plus_survivors, gen_size_limit()),
                           min_gen_size());

I could do more digging, but I'd bet I'd find code that serves the same purpose in the ParNew, PSOldGen, and CMS Tenured implementations. In fact, it's unlikely that CMS would be able to return memory unless there has been a Concurrent Mode Failure. In the case of a CMF the serial collector will run, and that should include a compaction, after which the top of the heap would most likely be clean and therefore eligible to be deallocated.

随梦而飞# 2024-12-03 23:34:51

The main reason to set -Xms is if you need a certain heap size on startup. (It prevents OutOfMemoryErrors from happening on startup.) As mentioned above, if you need the startup heap to match the max heap, that is when you would match them. Otherwise you don't really need it; it just asks the application to take up more memory than it may ultimately need. Watching your memory use over time (profiling) while load testing and using your application should give you a good feel for what you need to set them to. But it isn't the worst thing to set them to the same value at startup. For a lot of our apps, I actually start out with something like 128, 256, or 512 MB for the min (startup) and one gigabyte for the max (this is for non-application-server applications).
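
A minimal sketch of that kind of observation (class name is mine; it uses the standard java.lang.management API): run it inside the application, or adapt it into a background thread, to log how the committed heap moves between the -Xms starting size and -Xmx while you load test.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Launch with e.g.:  java -Xms256m -Xmx1g HeapWatcher
// "init" reflects -Xms, "max" reflects -Xmx, and "committed" is what the JVM
// currently holds; watching it over time shows whether the app ever needs the max.
public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("init=%dMB used=%dMB committed=%dMB max=%dMB%n",
                    heap.getInit() >> 20, heap.getUsed() >> 20,
                    heap.getCommitted() >> 20, heap.getMax() >> 20);
            Thread.sleep(5_000);
        }
    }
}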

I just found this question on Stack Overflow, which may also be helpful: side-effect-for-increasing-maxpermsize-and-max-heap-size. Worth a look.

心凉 2024-12-03 23:34:51

AFAIK, setting both to the same size does away with the additional step of heap resizing, which might be in your favour if you pretty much know how much heap you are going to use. Also, having a large heap size reduces GC invocations to the point that they happen very few times. In my current project (risk analysis of trades), our risk engines have both Xmx and Xms set to the same value, which is pretty large (around 8 GiB). This ensures that even after an entire day of invoking the engines, almost no GC takes place.
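
A quick way to verify that kind of claim (a sketch only; the class name is hypothetical) is to poll the garbage collector MXBeans at the end of the day and see how many collections actually ran:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Call this from inside the engine (or expose it over JMX) to confirm that a
// process started with -Xms8g -Xmx8g really does run close to zero collections.
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}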

Also, I found an interesting discussion here.

ゃ懵逼小萝莉 2024-12-03 23:34:51

Updated answer as of 2023, valid from JVM 7- to JVM 20+.

You should always set Xms=Xmx for server applications that use multiple GB of memory.

1) It's necessary to avoid heap resizing. Resizing is a very slow and very intensive operation that should be avoided.

A resize requires reallocating a large amount of memory and moving all existing Java objects. It can take a while, and the application is frozen during that time. Having a small Xms and a large Xmx will lead to multiple heap resizes. Some JVM versions and garbage collectors may not move all objects or pause all threads at once, but the principle remains.

Consider a typical ElasticSearch database with a 30 GB heap. Starting the heap at 1 GB and reaching the full size will require a dozen resize operations. The larger resizes will take entire seconds (freezes) to shuffle millions of objects across gigabytes of memory. Bonus points: the ElasticSearch cluster monitors when a node is unresponsive for N seconds and drops it.

It's critical to set Xms=Xmx. Most server software has a relatively large heap and is sensitive to pauses (latency).
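
A small sketch of the resize behaviour described above (class name and sizes are my own): retain memory gradually and print the committed heap whenever it changes. Started with a small -Xms, the committed size steps upward in a series of resizes; started with -Xms equal to -Xmx, it stays flat from the first line.

import java.util.ArrayList;
import java.util.List;

// Compare:  java -Xms64m -Xmx2g GrowthSteps   vs.   java -Xms2g -Xmx2g GrowthSteps
public class GrowthSteps {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        long lastCommitted = 0;
        for (int i = 0; i < 1500; i++) {
            retained.add(new byte[1024 * 1024]);              // keep ~1 MB live per iteration
            long committed = Runtime.getRuntime().totalMemory();
            if (committed != lastCommitted) {                 // the committed heap was just resized
                System.out.printf("after %4d MB live: committed heap = %d MB%n",
                        i + 1, committed >> 20);
                lastCommitted = committed;
            }
        }
    }
}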

2) The JVM might never grow the heap.

The JVM will decide whether to do a full GC or to resize the heap when there is memory pressure. It will free objects if possible rather than grow the heap. In my experience the JVM tries to limit the heap size (this may vary with the JVM).

It makes sense to minimize the heap size in a system where memory is limited and there are other applications to run (e.g. a typical desktop).
It doesn't make sense to minimize the heap size in a system with dedicated resources (e.g. a typical server).

Consider a web server: each incoming request will churn through some kB or MB of heap, and the web server will run a GC every now and then to reclaim memory:

  • The web server could run with 128 MB of heap (let's say 50 MB baseline constantly used) and run a slow GC every 78 small requests
  • Or the web server could run with 1024 MB of heap and process a thousand requests between GCs. HINT: this is more stable and responsive

You might expect the JVM to grow the heap in the first scenario, but it doesn't have to. The application can run with little memory. The JVM simply needs to collect unused objects more frequently because there is less room.
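
A rough sketch of that trade-off (the per-request churn and heap sizes are my own assumptions): run the same synthetic workload with a small fixed heap and again with a large one, then compare how many collections each run needed.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Compare:  java -Xms128m -Xmx128m RequestChurn   vs.   java -Xms1024m -Xmx1024m RequestChurn
public class RequestChurn {
    public static void main(String[] args) {
        byte[] baseline = new byte[50 * 1024 * 1024];         // ~50 MB of long-lived data
        for (int request = 0; request < 10_000; request++) {
            byte[] scratch = new byte[1024 * 1024];           // ~1 MB of short-lived garbage per "request"
            scratch[0] = 1;
        }
        long collections = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            collections += gc.getCollectionCount();
        }
        System.out.printf("GC runs: %d (baseline still live: %d MB)%n",
                collections, baseline.length >> 20);
    }
}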

3) A variable heap size is at odds with caching and buffering.

Server software can do heavy caching, buffering, and queuing to optimize performance and reduce latency.
All these patterns expect memory to be available upfront.

Usually they need to be configured with a fixed size or a percentage of the heap.

Consider a logstash relay configured to use 20% of memory for buffering incoming log messages by default.

What happens when the heap size is variable? Maybe logstash has 10 MB of buffer, maybe logstash has 1 GB? Is the buffer adjusted when the heap is resized?

It doesn't make any sense to use a variable heap size here; Xms should be the same as Xmx.
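
To make the ambiguity concrete, here is a minimal sketch (hypothetical class; the 20% figure just mirrors the example above). A component that sizes its buffer as a percentage has to pick a base: Runtime.maxMemory() reflects -Xmx while totalMemory() reflects what is committed right now, and with Xms != Xmx those two can differ by an order of magnitude.

// Compare the output with -Xms64m -Xmx2g versus -Xms2g -Xmx2g.
public class PercentageBuffer {
    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();       // upper bound, from -Xmx
        long committed = Runtime.getRuntime().totalMemory();   // what the JVM holds right now
        long bufferBytes = (long) (maxHeap * 0.20);            // "20% of memory" for buffering
        System.out.printf("max=%d MB, committed=%d MB, 20%% buffer=%d MB%n",
                maxHeap >> 20, committed >> 20, bufferBytes >> 20);
    }
}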

走过海棠暮 2024-12-03 23:34:51

Definitely yes for a server app. What's the point of having so much memory but not using it?
(No, it doesn't save electricity if you don't use a memory cell.)

The JVM loves memory. For a given app, the more memory the JVM has, the less GC it performs. The best part is that more objects will die young and fewer will be tenured.

Especially during server startup, the load is even higher than normal. It's brain-dead to give the server a small amount of memory to work with at this stage.

绝對不後悔。 2024-12-03 23:34:51

From what I see here at http://java-monitor.com/forum/showthread.php?t=427,
the JVM under test begins with the Xms setting, but WILL deallocate memory it doesn't need, and it will take it up to the Xmx mark when it needs it.

Unless you need a chunk of memory dedicated to a big memory consumer initially, there's not much point in setting a high Xms=Xmx. It looks like deallocation and allocation occur even with Xms=Xmx.
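
A hedged sketch of the behaviour described in that thread (class name and sizes are mine; whether memory is actually returned depends on the collector and on the standard HotSpot options -XX:MinHeapFreeRatio / -XX:MaxHeapFreeRatio):

// Run with e.g.:  java -Xms64m -Xmx1g ShrinkDemo
public class ShrinkDemo {
    public static void main(String[] args) throws InterruptedException {
        System.out.printf("committed before: %d MB%n", Runtime.getRuntime().totalMemory() >> 20);
        byte[] big = new byte[256 * 1024 * 1024];              // force the heap to grow toward -Xmx
        System.out.printf("committed peak:   %d MB (holding %d MB)%n",
                Runtime.getRuntime().totalMemory() >> 20, big.length >> 20);
        big = null;                                            // drop the reference
        System.gc();                                           // a hint only, not a guarantee
        Thread.sleep(1_000);
        System.out.printf("committed after:  %d MB%n", Runtime.getRuntime().totalMemory() >> 20);
    }
}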
