Who profits more from pooling: managed or unmanaged?

Posted on 2024-10-12 12:47:47


Having two similar applications: one managed and the other unmanaged. Both use a heavy allocation pattern for large objects, i.e. they request a lot of those objects in a (long-running) loop and release them right away after use. The managed app uses IDisposable, with Dispose() called immediately. The unmanaged one uses destructors.

Some but not all of the objects could be reused, so a pool is being considered in order to increase execution speed and minimize the risk of memory fragmentation.

Which application would you expect to profit more from pooling, and why?

@Update: This is about a mathematical library, so those large objects will be arrays of value types, mostly large enough for the LOH. I am positive pooling would improve performance a lot on the managed side. Many such libraries exist, for managed and unmanaged environments, but none that I know of really does such pooling. I am wondering why?
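For concreteness, here is a minimal sketch of the allocation pattern described above (the class name and sizes are illustrative, not taken from any particular library):

    using System;

    // Illustrative only: a large value-type array wrapped in a disposable type,
    // requested and released on every iteration of a long-running loop.
    public sealed class LargeBuffer : IDisposable
    {
        public double[] Data = new double[1_000_000]; // ~8 MB, lands on the LOH

        public void Dispose()
        {
            Data = null; // drop the reference right after use
        }
    }

    class Program
    {
        static void Main()
        {
            for (int i = 0; i < 100_000; i++) // stands in for the long-running loop
            {
                using (var buf = new LargeBuffer())
                {
                    buf.Data[0] = i; // ... heavy numeric work on buf.Data ...
                }
            }
        }
    }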

2 Answers

栀梦 2024-10-19 12:47:47


First, a little consideration of what a large object is. In .NET, a large object is considered to be one of 85,000 bytes or more. Do you really have such big objects, or do you have a very large graph of smaller objects?

If it is a graph of smaller objects, then they are stored in the SOH (small object heap). In this case, if you create the objects and let them go immediately, you will get the most benefit from the Garbage Collector's optimizations, which assume a generational model. What I mean is that you should either create objects and let them die, or keep them forever. Holding on to them just "for a while", in other words pooling, will just get them promoted to higher generations (up to Gen 2), and that will kill the GC's performance, because cleaning up Gen 2 objects is expensive (eternal objects in Gen 2 are not expensive, however). Don't worry about memory fragmentation: unless you're doing interop or fancy stuff like pinning objects, the GC is very effective at avoiding it; it compacts the memory it frees in the ephemeral segment.
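The promotion effect is easy to observe; a tiny sketch (exact results may vary with the GC mode in use):

    using System;

    class PromotionDemo
    {
        static void Main()
        {
            var pooled = new object();                   // imagine this is held in a pool
            Console.WriteLine(GC.GetGeneration(pooled)); // 0: freshly allocated
            GC.Collect();
            Console.WriteLine(GC.GetGeneration(pooled)); // 1: survived one collection
            GC.Collect();
            Console.WriteLine(GC.GetGeneration(pooled)); // 2: now in the expensive generation
        }
    }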

If you do indeed have very large objects (for instance, very big arrays), then it can pay to pool them. Notice, however, that if the arrays contain references to smaller objects, pooling them will lead to the problems I talked about in the previous paragraph, so you should be careful to clean the array (set its references to null) frequently (every iteration?).
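In modern .NET, the clearArray flag of System.Buffers.ArrayPool&lt;T&gt;.Return does exactly this kind of cleaning; a short sketch (the Item type is made up for illustration):

    using System;
    using System.Buffers;

    class Item { public double Value; }

    class ClearOnReturnDemo
    {
        static void Main()
        {
            var pool = ArrayPool<Item>.Shared;
            Item[] buffer = pool.Rent(1024); // may hand back a longer array than requested
            try
            {
                buffer[0] = new Item { Value = 42.0 };
                // ... use the buffer ...
            }
            finally
            {
                // clearArray: true nulls out the references, so the pooled array
                // does not keep short-lived Items alive and promoted to Gen 2.
                pool.Return(buffer, clearArray: true);
            }
        }
    }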

That said, calling Dispose does not clean up objects; the GC does that. Dispose is responsible for cleaning up unmanaged resources. Nevertheless, it is very important that you keep calling Dispose on every object whose class implements IDisposable (best done through a finally block), both because you are potentially releasing unmanaged resources immediately and because you are telling the GC that it does not need to call the object's finalizer, which would otherwise lead to the unnecessary promotion of the object, which, as we saw, is a no-no.
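This is the standard dispose pattern; a minimal sketch (NativeBuffer is a hypothetical class):

    using System;
    using System.Runtime.InteropServices;

    public sealed class NativeBuffer : IDisposable
    {
        private IntPtr _ptr;

        public NativeBuffer(int bytes) => _ptr = Marshal.AllocHGlobal(bytes);

        public void Dispose()
        {
            if (_ptr != IntPtr.Zero)
            {
                Marshal.FreeHGlobal(_ptr); // release the unmanaged memory immediately
                _ptr = IntPtr.Zero;
            }
            // Tell the GC the finalizer need not run, so the object is not kept
            // alive on the finalization queue and promoted a generation.
            GC.SuppressFinalize(this);
        }

        ~NativeBuffer() // safety net in case Dispose was never called
        {
            if (_ptr != IntPtr.Zero) Marshal.FreeHGlobal(_ptr);
        }
    }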

Bottom line: the GC is really good at allocating and cleaning things up. Trying to help it usually results in worse performance, unless you really know what is going on.

To really understand what I am talking about:

Garbage Collection: Automatic Memory Management in the Microsoft .NET Framework

Garbage Collection, Part 2: Automatic Memory Management in the Microsoft .NET Framework

Large Object Heap Uncovered

勿挽旧人 2024-10-19 12:47:47


It feels strange, but I'll try to answer it myself; maybe I'll get some comments on it:

Both platforms suffer from fragmentation if very large objects are allocated and freed heavily (in loops). In the case of unmanaged apps, allocations are made directly from the virtual address space. Usually the working array is wrapped in a class (C++) that provides operator overloads for nice short syntax, some reference handling, and a destructor that makes sure the array is freed immediately when it goes out of scope. Nevertheless, the requested arrays do not always have the same size: if larger arrays are requested, the same address block cannot be reused, which may lead to fragmentation over time. Furthermore, there is no way to find a block that exactly fits the requested array length. The OS will simply use the first block that is large enough, even if it is larger than needed and might have better served a later request for an even larger array. How could pooling improve that situation?

Imaginable would be to use larger arrays for smaller requests. The class would handle the translation from the true length of the underlying array to the virtual length needed by the outside world. The pool could help deliver the "first array that is long enough", in contrast to the OS, which always gives the exact length. This could possibly limit fragmentation, since fewer holes are created in the virtual address space. On the other hand, the overall memory footprint would increase. For nearly random allocation patterns, pooling would bring little to no profit and only eat scarce memory, I guess.
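A minimal sketch of that first-fit scheme (hypothetical, not production code): the pool hands out the first free array that is long enough and exposes only the requested virtual length via an ArraySegment:

    using System;
    using System.Collections.Generic;

    // Hypothetical first-fit pool for large double[] buffers.
    public sealed class FirstFitPool
    {
        private readonly List<double[]> _free = new List<double[]>();

        public ArraySegment<double> Rent(int length)
        {
            for (int i = 0; i < _free.Count; i++)
            {
                if (_free[i].Length >= length)          // first array long enough
                {
                    double[] found = _free[i];
                    _free.RemoveAt(i);
                    return new ArraySegment<double>(found, 0, length); // virtual length
                }
            }
            return new ArraySegment<double>(new double[length], 0, length); // pool miss
        }

        public void Return(ArraySegment<double> segment)
        {
            _free.Add(segment.Array);                   // keep the full backing array
        }
    }

(In modern .NET, System.Buffers.ArrayPool&lt;T&gt; works along similar lines: Rent(minimumLength) may return an array longer than requested.)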

On the managed side, the situation is worse. First of all, two possible targets for fragmentation exist: the virtual address space and the managed large object heap. The latter is in this case more of a collection of individual segments, individually allocated from the OS. Each segment would mostly be used for a single array only (since we are talking about really big arrays here). If an array is freed by the GC, the whole segment is returned to the OS. So fragmentation would not be an issue within the LOH itself (references: my own thoughts and some empirical observations using VMMap, so any comments are very welcome!).

But since the LOH segments are allocated from the virtual address space, fragmentation is an issue here too, just as for unmanaged applications. In fact, the allocation patterns of both applications should look very similar to the memory manager of the OS. (?) With one distinction: the arrays are all freed by the GC at the same time. However, "really large arrays" produce a lot of pressure on the GC. Only a relatively small number of them can be held at the same time before a collection occurs. In practice, the application usually spends a considerable share of its time in the GC (I have seen about 5-45%), also because virtually all collections will be expensive Gen 2 collections and almost every allocation will result in such a Gen 2 collection.
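That pressure is easy to make visible, since the LOH is collected only as part of Gen 2 collections; a small measurement sketch:

    using System;

    class LohPressureDemo
    {
        static void Main()
        {
            int gen2Before = GC.CollectionCount(2);

            for (int i = 0; i < 10_000; i++)
            {
                double[] work = new double[200_000]; // ~1.6 MB, well above the LOH threshold
                work[0] = i;                         // touch it so it is really used
            }

            int gen2After = GC.CollectionCount(2);
            Console.WriteLine($"Gen 2 collections caused by the loop: {gen2After - gen2Before}");
        }
    }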

Here pooling may help considerably. Once the arrays are not freed to the OS but rather collected in the pool, they are immediately available for further requests. (This is one reason why IDisposable is not only meant for unmanaged resources.) The framework/library only has to make sure that the arrays are placed in the pool early enough, and to enable the reuse of larger arrays in situations where a smaller size is actually needed.
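Tying this together, one hedged sketch of how a math library might do it: a disposable wrapper whose Dispose returns the buffer to a pool instead of leaving it to the GC (PooledVector is a made-up name; the pool used here is .NET's ArrayPool&lt;T&gt;):

    using System;
    using System.Buffers;

    // Hypothetical vector type for a math library: Dispose recycles the buffer.
    public sealed class PooledVector : IDisposable
    {
        public double[] Data { get; }   // backing array, possibly longer than Length
        public int Length { get; }      // the virtual length the caller asked for

        public PooledVector(int length)
        {
            Data = ArrayPool<double>.Shared.Rent(length);
            Length = length;
        }

        public void Dispose()
        {
            // Back into the pool, immediately reusable by the next request.
            ArrayPool<double>.Shared.Return(Data);
        }
    }

    class Usage
    {
        static void Main()
        {
            for (int i = 0; i < 100_000; i++)
            {
                using (var v = new PooledVector(1_000_000))
                {
                    v.Data[0] = i; // ... heavy numeric work on v.Data[0..v.Length) ...
                }
            }
        }
    }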
