What's the difference between ParallelGC and ParallelOldGC?

Asked 2024-11-14 06:12:56


I have some questions about the GC algorithms:
First, when we use parameters such as UseSerialGC, UseParallelGC, UseParallelOldGC and so on, we specify a GC algorithm. Each of them can do GC in all generations, is that right?

For example, if I use java -XX:+UseSerialGC, all generations will use the serial GC algorithm.

Second, can I use ParallelGC in the old generation and SerialGC in the young generation?

Lastly, as the title asks, what's the difference between ParallelGC and ParallelOldGC?
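A minimal sketch for checking this empirically follows; the class name, buffer sizes, and loop counts are arbitrary, and the flags shown (-XX:+PrintGCDetails, -XX:+PrintCommandLineFlags, Java 8 syntax) are standard HotSpot options that reveal which collectors the VM actually uses.

    // GCChurn.java: a throwaway allocation loop so that young and full
    // collections actually happen while the GC log is being watched.
    import java.util.ArrayList;
    import java.util.List;

    public class GCChurn {
        public static void main(String[] args) {
            List<byte[]> survivors = new ArrayList<>();
            for (int i = 0; i < 200_000; i++) {
                byte[] temp = new byte[16 * 1024];   // mostly short-lived garbage
                if (i % 1_000 == 0) {
                    survivors.add(temp);             // a few objects survive and get promoted
                }
            }
            System.gc();                             // force one full (major) collection
            System.out.println("kept " + survivors.size() + " buffers");
        }
    }

    Example runs (Java 8 flag syntax):
      java -XX:+UseSerialGC   -XX:+PrintGCDetails -Xmx256m GCChurn
      java -XX:+UseParallelGC -XX:+PrintGCDetails -Xmx256m GCChurn
      java -XX:+PrintCommandLineFlags -version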


Comments (4)

江心雾 2024-11-21 06:12:56


Take a look at the HotSpot VM Options:

-XX:+UseParallelGC = Use parallel garbage collection for scavenges. (Introduced in 1.4.1).

-XX:+UseParallelOldGC = Use parallel garbage collection for the full collections. Enabling this option automatically sets -XX:+UseParallelGC. (Introduced in 5.0 update 6.)

where Scavenges = Young generation GC.
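A quick way to see that implication on your own JVM (sketch only; -XX:+PrintCommandLineFlags and -XX:+PrintFlagsFinal are standard HotSpot options that print the flag values the VM settles on, and the grep filter assumes a Unix-like shell):

    java -XX:+UseParallelOldGC -XX:+PrintCommandLineFlags -version
    java -XX:+UseParallelOldGC -XX:+PrintFlagsFinal -version | grep UseParallel

The second command should report both UseParallelGC and UseParallelOldGC as true, confirming that enabling the old-generation option pulls in the young-generation one as well.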

情徒 2024-11-21 06:12:56


Well, after a lot of searching and research, what I have come to understand is as follows:

-XX:+UseParallelGC - This enables the GC to use multiple threads in the young generation, but the old/tenured generation is still collected with a serial mark-and-compact algorithm.

-XX:+UseParallelOldGC - This enables the GC to use a parallel mark-and-compact algorithm in the old/tenured generation.

Let's understand why:

The algorithm and the memory arrangement that work in the young generation (such as mark-and-copy with its swap spaces) do not work for the old generation, for several reasons:

Low mortality - In the old generation, the "mortality rate" is significantly lower than in the young generation. In a typical Java application most objects die quickly and few live longer. The objects that survive in the young generation and get promoted to the old generation tend to live much longer, which leads to a much lower mortality rate in the old generation compared to the young generation.

Significant size - The old generation is significantly larger than the young generation. Because the young generation clears up quickly, relatively little space is needed for the many short-lived objects (small young generation). In the old generation, objects accumulate over time. Therefore, there must be much more space in the old generation than in the young generation (big old generation); the sizing example after these points shows how this split is typically controlled.

Little allocation - In the old generation, less allocation happens than in the young generation. This is because objects appear in the old generation only when the garbage collector promotes surviving objects from the young generation. The young generation, on the other hand, receives all the objects that the application creates with new, i.e. the majority of all allocations.
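As a purely illustrative example of that size difference, the split between the generations can be adjusted with standard HotSpot sizing flags; the heap sizes and the MyApp class name below are placeholders:

    java -Xmx512m -Xmn128m MyApp            (fixed 128 MB young generation, the rest goes to the old generation)
    java -Xmx512m -XX:NewRatio=2 MyApp      (old generation sized to roughly twice the young generation)

NewRatio is the ratio of the old generation to the young generation, so typical server-class defaults already give the old generation the larger share.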

Taking these differences into account, an algorithm was chosen for the young generation that finishes garbage collection as quickly as possible, because it has to run often due to the high mortality rate [point (1)]. In addition, the algorithm must allow very efficient memory allocation [point (3)], because a lot is allocated in the young generation. The mark-and-copy algorithm used on the young generation has these properties.

On the other hand, this algorithm does not make sense for the old generation. The situation there is different: the garbage collector has to deal with many objects in the old generation [point (2)], and most of them are still alive; only a small part has become unreachable and can be released [point (1)]. If the garbage collector were to copy all the surviving objects on every garbage collection, just as mark-and-copy does, it would spend a lot of time copying them without gaining much.

Therefore, a mark-and-sweep algorithm is used on the old generation: nothing is copied, the unreachable objects are simply released. Since this algorithm leads to fragmentation of the heap, a variation of mark-and-sweep was additionally considered in which, following the sweep phase, a compaction is performed to reduce the fragmentation. This algorithm is called a mark-and-compact algorithm.

A mark-and-compact algorithm can be time consuming, as it needs to traverse the object graph in the following four stages:

  1. Marking.
  2. Calculation of new locations.
  3. Reference adjustments.
  4. Moving.

In the calculation-of-new-locations phase, whenever the collector finds a free gap, it looks for an object that can be moved into that gap (defragmentation) and stores the pair for use in the later phases. This is what makes the algorithm consume more time.
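To make these four phases concrete, here is a deliberately tiny, single-threaded sketch. It is not HotSpot's implementation: the "heap" is just an array of cells, each cell holds at most one outgoing reference (stored as a cell index), and all names are invented for illustration.

    import java.util.Arrays;

    // Toy mark-and-compact over an array "heap". Each cell stores the index of
    // the one cell it references, or -1 if it references nothing.
    public class ToyMarkCompact {
        static int[] heapRef  = new int[8];       // the reference held by each cell
        static boolean[] live = new boolean[8];   // filled in by the marking phase
        static int[] newIndex = new int[8];       // filled in by phase 2

        public static void main(String[] args) {
            // A tiny object graph: root -> cell 0 -> cell 3 -> cell 6; the rest is garbage.
            Arrays.fill(heapRef, -1);
            heapRef[0] = 3;
            heapRef[3] = 6;
            int[] roots = { 0 };

            // Phase 1: marking. Follow references from the roots (simple chains here).
            for (int root : roots) {
                for (int i = root; i != -1 && !live[i]; i = heapRef[i]) {
                    live[i] = true;
                }
            }

            // Phase 2: calculation of new locations. Live cells slide toward index 0.
            int next = 0;
            for (int i = 0; i < heapRef.length; i++) {
                if (live[i]) newIndex[i] = next++;
            }

            // Phase 3: reference adjustment. Every stored reference (and every root)
            // is rewritten to point at the referent's future location.
            for (int i = 0; i < heapRef.length; i++) {
                if (live[i] && heapRef[i] != -1) heapRef[i] = newIndex[heapRef[i]];
            }
            for (int r = 0; r < roots.length; r++) roots[r] = newIndex[roots[r]];

            // Phase 4: moving. Surviving cells are copied to their new locations.
            int[] compacted = new int[heapRef.length];
            Arrays.fill(compacted, -1);
            for (int i = 0; i < heapRef.length; i++) {
                if (live[i]) compacted[newIndex[i]] = heapRef[i];
            }

            System.out.println("compacted heap: " + Arrays.toString(compacted));
            System.out.println("root now points at cell " + roots[0]);
        }
    }

After running, the three surviving cells occupy indices 0 to 2 with their references rewritten, i.e. the fragmentation is gone; phases 2 and 3 are exactly the extra bookkeeping that makes compaction slower than a plain sweep.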

Though mark-and-compact solves some issues specific to the tenured generation, it has a serious drawback: it is an STW (stop-the-world) event and consumes a lot of time, which can seriously impact the application.

Alternative algorithms for the old generation

In order to reduce these pause times, alternatives to the serial mark-and-compact algorithm have been considered:

A parallel mark-and-compact algorithm that still stops all application threads, but then handles the marking and the subsequent compaction with multiple garbage collector threads. While this is still a stop-the-world approach, the resulting pause on a multi-core or multi-processor machine is shorter than with the serial mark-and-compact algorithm. This parallel algorithm for the old generation (called "ParallelOld") has been available since Java 5 Update 6 and is selected with the option -XX:+UseParallelOldGC.

A concurrent mark-and-sweep algorithm that runs at least partially alongside the application without stopping its threads and only occasionally needs short stop-the-world phases. This concurrent mark-and-sweep algorithm (called "CMS") has been around since Java 1.4.1; it is switched on with the option -XX:+UseConcMarkSweepGC. Importantly, this is just a mark-and-sweep algorithm: no compaction takes place, which leads to the fragmentation problem already discussed.

So in a nutshell, -XX:+UseParallelOldGC tells the JVM to use multiple threads for major collections with the mark-and-compact algorithm. If only -XX:+UseParallelGC is used instead, minor (young) collections are parallel, but major collections are still single-threaded.

I hope this answers the question.

别再吹冷风 2024-11-21 06:12:56


Those are two GC policies applied to different regions of the Java heap, namely the new and old generations. Here's a link that helps clarify which options imply other ones. It's helpful, especially when starting out, for understanding what you're getting when you specify, say, ParallelOldGC or ParNewGC.
http://www.fasterj.com/articles/oraclecollectors1.shtml

背叛残局 2024-11-21 06:12:56


From Oracle Java SE 8 docs:

https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/collectors.html

The parallel collector (also known as the throughput collector) performs minor collections in parallel, which can significantly reduce garbage collection overhead. It is intended for applications with medium-sized to large-sized data sets that are run on multiprocessor or multithreaded hardware. The parallel collector is selected by default on certain hardware and operating system configurations, or can be explicitly enabled with the option -XX:+UseParallelGC.

Parallel compaction is a feature that enables the parallel collector to perform major collections in parallel. Without parallel compaction, major collections are performed using a single thread, which can significantly limit scalability. Parallel compaction is enabled by default if the option -XX:+UseParallelGC has been specified. The option to turn it off is -XX:-UseParallelOldGC.

So if you specify -XX:+UseParallelGC, by default major collections will also be done using multiple threads. The reverse is also true, i.e. if you specify -XX:+UseParallelOldGC, minor collections will also be done using multiple threads.
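A hedged example of exercising both directions on a Java 8 VM (MyApp is a placeholder for any application class; -XX:ParallelGCThreads is the standard knob for the number of GC worker threads):

    java -XX:+UseParallelGC -XX:-UseParallelOldGC -XX:+PrintGCDetails MyApp      (parallel minor GCs, single-threaded full GCs)
    java -XX:+UseParallelOldGC -XX:ParallelGCThreads=4 -XX:+PrintGCDetails MyApp (minor and major GCs both use up to 4 GC threads)

On many Java 8 builds the full-GC lines in the resulting log name the old-generation space differently in the two cases (ParOldGen with parallel compaction, PSOldGen without), which is an easy way to confirm which path is active; this labelling is worth double-checking on your exact JVM version.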
