What deterministic garbage collection algorithms are out there?

Published 2024-07-08 06:19:16


By deterministic I vaguely mean that can be used in critical real-time software like aerospace flight software. Garbage collectors (and dynamic memory allocation for that matter) are big no-no's in flight software because they are considered non-deterministic. However, I know there's ongoing research on this, so I wonder if this problem has been solved yet.

I'm also including in the question any garbage collection algorithms that put restrictions on how they're used.


Comments (10)

始于初秋 2024-07-15 06:19:17


I know this post is a bit dated, but I have done some interesting research and want to make sure this is updated.

Deterministic GC is offered by Azul Systems' "Zing JVM" and by JRockit. Zing comes with some very interesting added features and is now "100% software based" (it can run on x86 machines). It is only for Linux at this time, though ...

Price:
If you are on Java 6 or earlier, Oracle is now charging a 300% uplift and forcing support for this capability ($15,000 per processor plus $3,300 support). Azul, from what I have heard, is around $10,000 - $12,000, but charges per physical machine, not per core/processor. Also, prices are graduated by volume, so the more servers you leverage, the deeper the discounting. My conversations with them showed them to be quite flexible. Oracle's is a perpetual license and Zing is subscription based ... but the comparison can still favor Zing if you do the math and add in the other features it has (see differences below).

You can cut costs by moving to Java 7, but then you incur development costs. Given Oracle's roadmap (a new release every 18 months or so), and the fact that they historically only offer free Java SE updates for the latest plus one older version, the "free" horizon is expected to be 3 years from the initial GA release of any major version. Since initial GA releases are typically not adopted in production for 12-18 months, and since moving production systems to a new major Java release typically carries major costs, this means that Java SE support bills will start hitting somewhere between 6 and 24 months after initial deployment.

Notable differences:
JRockit still has some scalability limitations in terms of RAM (though improved from days of old). You can improve your results with a bit of tuning. Zing has engineered its algorithm to allow continuous, concurrent compaction (no stop-the-world pauses and no "tuning" required). This allows Zing to scale without a theoretical memory ceiling (they are running 300+ GB heaps without suffering stop-the-world pauses or crashing). Talk about a paradigm changer (think of the implications for big data). Zing also has some really cool improvements to locking, giving it amazing performance with a bit of work (if tuned, it can average sub-millisecond). Finally, they have visibility into class, method, and thread behavior in production (with no overhead). We consider this a huge time saver when evaluating updates, patches, and bug fixes (e.g. leaks and locks). It can practically eliminate the need to recreate many issues in Dev/Test.

Links to JVM Data I found:

JRockit Deterministic GC

Azul Presentation - Java without Jitter

Azul / MyChannels Test

流绪微梦 2024-07-15 06:19:17


I know Azul Systems has a JVM whose GC is hardware-assisted. It can also run concurrently and collect massive amounts of data pretty fast.

Not sure how deterministic it is though.

楠木可依 2024-07-15 06:19:16


I know I might get a lot of down-votes for this reply, but if you are already trying to avoid dynamic memory in the first place, because you said it's a no-no, why use GC at all? I'd never use GC in a real-time system where predictable runtime speed is the major concern. I'd avoid dynamic memory wherever possible, so there are very, very few dynamic objects to start with, and then I'd handle the few dynamic allocations I do have manually, so I have 100% control over when and where something is released. After all, it's not just GC that is non-deterministic: free() is as little deterministic as malloc() is. Nobody says that a free() call only has to mark the memory as free. It might as well try to combine the smaller free memory blocks surrounding the freed one into one big block, and this behavior is not deterministic, nor is its runtime (sometimes free won't do that and malloc will do it instead on the next allocation, but nowhere is it written that free mustn't do that).

In a critical realtime system, you might even replace the system's standard malloc()/free() with a different implementation, maybe even writing your own (it's not as hard as it sounds! I've done it before, just for the fun of it), that works as deterministically as possible. For me, GC is a plain convenience thing; it exists to get programmers away from focusing on sophisticated malloc()/free() planning and instead have the system deal with this automatically. It helps with rapid software development and saves hours of debugging work finding and fixing memory leaks. But just like I'd never use GC within an operating system kernel, I'd never use it within a critical realtime application either.

If I need more sophisticated memory handling, I'd maybe write my own malloc()/free() that works as desired (and as deterministically as possible) and write my own reference counting model on top of it. Reference counting is still manual memory management, but it is much more comfortable than just using malloc()/free(). It is not ultra fast, but it is deterministic (at least increasing/decreasing the ref counter is deterministic in speed), and unless you have circular references, it will catch all dead memory if you follow a retain/release strategy throughout your application. The only non-deterministic part about it is that you won't know whether calling release will just decrease the ref counter or really free the object (depending on whether the ref count goes to zero or not), but you could delay the actual free by offering a function, say "releaseWithoutFreeing", which decreases the ref counter by one but won't free() the object even when the counter reaches zero. Your malloc()/free() implementation can then have a function "findDeadObjects" that searches for all objects with a retain counter of zero that have not yet been freed, and frees them (at a later point, when you are in a less critical part of your code that has more time for such tasks). Since this is also not deterministic, you could limit the amount of time it may use, e.g. "findDeadObjectsForUpTo(ms)", where ms is the number of milliseconds it may spend finding and freeing them, returning as soon as this time quantum has been used up, so you won't spend too much time on this task.
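A minimal C sketch of this deferred-free scheme, assuming a fixed static pool of objects. The names `obj_new`, `release_without_freeing`, and `find_dead_objects_for_up_to` echo the hypothetical API above; the one liberty taken is that the millisecond budget is replaced by a scan-count budget, so each sweep call has a fixed worst-case cost on its own.

```c
#include <stddef.h>

/* Hypothetical sketch only: a static pool of reference-counted objects with
 * a deferred-free sweep pass, as suggested in the answer above. */

#define MAX_OBJECTS 64

typedef struct {
    int refcount;  /* 0 = dead; may or may not have been reclaimed yet */
    int in_use;    /* slot is allocated and not yet reclaimed */
    /* payload fields would go here */
} Object;

static Object pool[MAX_OBJECTS];   /* static pool: no malloc() at runtime */

Object *obj_new(void) {
    /* linear scan keeps the sketch short; a real allocator would keep a
     * free list so this is O(1) */
    for (int i = 0; i < MAX_OBJECTS; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            pool[i].refcount = 1;
            return &pool[i];
        }
    }
    return NULL;                   /* pool exhausted */
}

void retain(Object *o) { o->refcount++; }   /* O(1), deterministic */

void release(Object *o) {                    /* may reclaim immediately */
    if (--o->refcount == 0)
        o->in_use = 0;
}

void release_without_freeing(Object *o) {    /* never reclaims, O(1) */
    if (o->refcount > 0)
        o->refcount--;
}

/* Reclaim dead objects, but examine at most `budget` slots per call, so
 * the caller bounds the cost; resumes where the previous call left off. */
int find_dead_objects_for_up_to(int budget) {
    static int cursor = 0;
    int freed = 0;
    while (budget-- > 0) {
        Object *o = &pool[cursor];
        if (o->in_use && o->refcount == 0) {
            o->in_use = 0;         /* slot becomes available to obj_new */
            freed++;
        }
        cursor = (cursor + 1) % MAX_OBJECTS;
    }
    return freed;
}
```

The split matters for the realtime argument: retain/release and releaseWithoutFreeing are all O(1), and the only unbounded work (the sweep) is explicitly budgeted and scheduled by the application in its non-critical phases.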

黑色毁心梦 2024-07-15 06:19:16


Metronome GC and BEA JRockit are two deterministic GC implementations that I'm aware of (both for Java).

∝单色的世界 2024-07-15 06:19:16


Happened to be searching through Stack Overflow and noticed this rather old post.

Jon Anderson mentioned JamaicaVM. Since these posts have been up for over 4 years now,
I think it's important to respond to some of the information posted here.

I work for aicas, the developers and marketers of JamaicaVM, JamaicaCAR, and Veriflux.

JamaicaVM does have a hard realtime garbage collector. It is fully preemptive, which is exactly the behavior required in a realtime operating system. Although the preemption latency is CPU-speed dependent, assume that on a GHz-class processor, preempting the garbage collector takes less than 1 microsecond. There is a 32-bit singlecore version that supports up to 3 GB of memory per process address space. There is a 32-bit multicore version that supports 3 GB of memory per process address space and multiple cores. There are also 64-bit singlecore and multicore versions that support up to 128 GB of memory per process address space. The performance of the garbage collector is independent of the size of memory. In response to one of the answers regarding running the GC completely out of memory: for a hard realtime system, you would not design your program to ever do that. Although you can, in fact, use a hard realtime GC in this scenario, you would have to account for a worst-case execution time that probably would not be acceptable to your application.

Instead, the correct approach would be to analyze your program for maximum memory allocation, and then configure the hard realtime garbage collector to incrementally free blocks during all previous allocations so that the specific scenario described never occurs. This is known as thread-distributed, work-paced garbage collection.

Dr. Siebert's book on Hard Realtime Garbage Collectors describes how to accomplish this and presents a formal proof that the garbage collector will keep up with the application, while not becoming an O(N) operation.

It is very important to understand that realtime garbage collection means several things:

  1. The garbage collector is preemptible, just like any other operating system service
  2. It can be proven mathematically that the garbage collector will keep up, such that memory will not be exhausted just because some memory has not been reclaimed yet.
  3. The garbage collector does not fragment memory, such that as long as there is memory available, a memory request will succeed.
  4. Additionally, you will need this to be part of a system with priority inversion protection, a fixed priority thread scheduler and other features. Refer to the RTSJ for some information on this.

Although hard realtime garbage collection is needed for safety-critical applications, it can be used for mission-critical and general-purpose Java applications as well. There are no inherent limitations in using a hard realtime garbage collector. For general use, you can expect smoother program execution, since there are no long garbage collector pauses.
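A rough illustration of the "thread-distributed, work-paced" idea described above: each allocation performs a fixed number of collector steps, so reclamation keeps pace with allocation and no single call does unbounded work. This is only a toy sketch over a pool of equal-sized blocks, with `marked` standing in for a real reachability marker; it illustrates the pacing idea, not JamaicaVM's actual algorithm.

```c
/* Toy "work-paced" collector: allocation pays for collection incrementally. */

#define HEAP_BLOCKS 128
#define SWEEP_STEPS_PER_ALLOC 4   /* the "pace": collector work per allocation */

static int marked[HEAP_BLOCKS];   /* 1 = reachable, as decided by a marker */
static int in_use[HEAP_BLOCKS];
static int free_stack[HEAP_BLOCKS];
static int free_top;
static int sweep_cursor;

void heap_init(void) {
    free_top = 0;
    sweep_cursor = 0;
    for (int i = 0; i < HEAP_BLOCKS; i++) {
        in_use[i] = 0;
        marked[i] = 0;
        free_stack[free_top++] = i;   /* initially every block is free */
    }
}

/* One bounded increment of collector work: examine exactly `steps` blocks,
 * returning unreachable ones to the free stack. */
static void sweep_increment(int steps) {
    while (steps-- > 0) {
        int b = sweep_cursor;
        if (in_use[b] && !marked[b]) {
            in_use[b] = 0;
            free_stack[free_top++] = b;
        }
        sweep_cursor = (sweep_cursor + 1) % HEAP_BLOCKS;
    }
}

/* Each allocation does a fixed amount of collector work up front, so its
 * worst case is SWEEP_STEPS_PER_ALLOC sweep steps plus one O(1) stack pop. */
int alloc_block(void) {
    sweep_increment(SWEEP_STEPS_PER_ALLOC);
    if (free_top == 0)
        return -1;                /* genuinely out of memory */
    int b = free_stack[--free_top];
    in_use[b] = 1;
    marked[b] = 1;                /* new blocks start out reachable */
    return b;
}
```

With the pace chosen so that collection provably keeps up with the application's worst-case allocation rate, no allocation ever has to stop and run a full collection; the proof that such a pace exists and suffices is the subject of the book mentioned above.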

︶葆Ⅱㄣ 2024-07-15 06:19:16


To me, 100% real-time Java is still very much a hit-and-miss technology, but I don't claim to be an expert.

I'd recommend reading up on these articles - Cliff Click's blog. He's the architect at Azul and has coded pretty much all of the standard 1.5 Java concurrent classes, etc. FYI, Azul is designed for systems which require very large heap sizes, rather than just standard RT requirements.

故事灯 2024-07-15 06:19:16


It's not GC, but there are simple O(1) fixed-size block allocation/free schemes you can use. For example, you can use a free list of fixed-size blocks.

struct Block {
    struct Block *next;
};

struct Block *free_list = NULL; /* you will need to populate this at start; an
                                 * easy way is to just call release on each
                                 * block you want to add */

void release(void *p) {
    if (p != NULL) {
        struct Block *b_ptr = (struct Block *)p;
        b_ptr->next = free_list;
        free_list = b_ptr;
    }
}

void *acquire(void) {
    void *ret = (void *)free_list;
    if (free_list != NULL) {
        free_list = free_list->next;
    }
    return ret;
}

/* call this before you use acquire/release */
void init(void) {
    /* example of an allocator supporting 100 blocks, each 32 bytes big */
    enum { blocks = 100, size = 32 };
    static unsigned char mem[blocks * size];
    int i;
    for (i = 0; i < blocks; ++i) {
        release(&mem[i * size]);
    }
}

If you plan accordingly, you could limit your design to only a few specific sizes for dynamic allocation and have a free_list for each potential size. If you are using C++, you can implement something simple like scoped_ptr (for each size, I'd use a template parameter) to get simpler yet still O(1) memory management.

The only real caveat is that you will have no protection against double frees, or against accidentally passing release a pointer that didn't come from acquire.
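The per-size free-list idea from the last paragraph might look like this in C, assuming two arbitrary example block sizes (32 and 64 bytes). `acquire_sized`/`release_sized` are illustrative names, and the caller must pass the same size to both so the block is returned to the pool it came from.

```c
#include <stddef.h>

struct Block { struct Block *next; };

struct Pool {
    size_t block_size;
    struct Block *free_list;
};

/* Two example pools; a real design would pick the sizes from profiling
 * the application's actual allocation requests. */
static unsigned char mem32[100 * 32];
static unsigned char mem64[50 * 64];
static struct Pool pools[2] = { { 32, NULL }, { 64, NULL } };

static void pool_release(struct Pool *p, void *ptr) {
    struct Block *b = (struct Block *)ptr;
    b->next = p->free_list;
    p->free_list = b;
}

static void *pool_acquire(struct Pool *p) {
    struct Block *b = p->free_list;
    if (b != NULL)
        p->free_list = b->next;
    return b;
}

/* Pick the first (smallest) pool whose blocks can hold `size` bytes.
 * O(1) in practice, since the set of pools is small and fixed. */
static struct Pool *pool_for(size_t size) {
    for (int i = 0; i < 2; i++)
        if (pools[i].block_size >= size)
            return &pools[i];
    return NULL;                  /* no pool serves requests this large */
}

void *acquire_sized(size_t size) {
    struct Pool *p = pool_for(size);
    return p ? pool_acquire(p) : NULL;
}

void release_sized(void *ptr, size_t size) {
    struct Pool *p = pool_for(size);   /* must match the acquiring size */
    if (p != NULL && ptr != NULL)
        pool_release(p, ptr);
}

/* call once at startup: seed each pool's free list with its static memory */
void pools_init(void) {
    for (int i = 0; i < 100; i++) pool_release(&pools[0], &mem32[i * 32]);
    for (int i = 0; i < 50; i++)  pool_release(&pools[1], &mem64[i * 64]);
}
```

Since the pool set is fixed at compile time, both acquire and release stay O(1), and all backing memory is static, so nothing here touches malloc()/free() at runtime.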

够钟 2024-07-15 06:19:16


Sun has extensively documented their real-time garbage collector and provided benchmarks you can run for yourself here. Others mentioned Metronome, which is the other major production-grade RTGC algorithm. Many other vendors of RT JVMs have their own implementations -- see my list of vendors over here; most of them provide extensive documentation.

If your interest is particularly in avionics/flight software, I suggest you take a look at aicas, an RTSJ vendor who specifically markets to the avionics industry. Dr. Siebert's (aicas CEO) home page lists some academic publications that go into great detail about PERC's GC implementation.

小瓶盖 2024-07-15 06:19:16


Real-time means a guaranteed upper bound on response time. This means an upper bound on the number of instructions you can execute before you deliver the result. It also puts an upper limit on the amount of data you can touch. If you don't know how much memory you're going to need, it is extremely likely that you'll have a computation for which you cannot give an upper limit on its execution time. On the other hand, if you know the upper bound of your computation, you also know how much memory it touches (unless you don't really know what your software does). So, the amount of knowledge you have about your code obviates the need for a GC.

There are features, like regions in RT-Java, that allow for expressiveness beyond local and global (static) variables. But they will not relieve you from your obligation to manage the memory you allocate, because otherwise you cannot guarantee that the next upcoming allocation will not fail because of insufficient memory resources.

Admittedly, I've gotten somewhat suspicious about things that call themselves "realtime garbage collectors". Of course, any GC is real time if you assume that every allocation runs a full collection (which still doesn't guarantee that it will succeed afterwards, because all memory blocks might be found to be reachable). But for any GC that promises a bounded allocation time, consider its performance on the following example code:

// assume that one `Link` object needs k bytes:
class Link {
    Link next = null;
    /* further fields */
    static Link head = null;
}

public static void main(String[] args) {
    // assume we have N bytes free now
    // set n := floor (N/k), assume that n > 1

    for (int i = 0; i < n; i ++) {
        Link tmp = new Link ();
        tmp.next = Link.head;
        Link.head = tmp;
    } 
    // (1)
    Link.head = Link.head.next; // (2)
    Link tmp = new Link ();  // (3)
}
  • At point (1), we have less than k bytes free (allocation of another
    Link object would fail), and all Link objects allocated so far are
    reachable starting from the static Link.head field.
  • At point (2),
    • (a) what used to be the first entry in the list is now not reachable, but
    • (b) it is still allocated, as far as the memory management part is concerned.
  • At point (3), the allocation should succeed because of (2a), since we can
    reuse what used to be the first link; but because of (2b), we must start
    the GC, which will end up traversing n-1 objects and hence has a running
    time of O(n).

So, yes, it's a contrived example. But a GC that claims to have a bound on allocation should be able to master this example as well.
