Time complexity of memory allocation

Published on 2024-07-08


What is the time complexity of dynamic memory allocation using new, malloc, etc.? I know very little about how memory allocators are implemented, but I assume the answer is that it depends on the implementation. Therefore, please answer for some of the more common cases/implementations.

Edit:
I vaguely remember hearing that heap allocation is unbounded in the worst case, but I'm really interested in the average/typical case.


Comments (5)

抹茶夏天i‖ 2024-07-15 02:17:29


One of the things you have to realise when dealing with O notation is that it is often very important to understand what n is. If the n is something related to something you can control (eg: the number of elements in a list you want sorted) then it makes sense to look hard at it.

In most heap implementations, your n is the number of contiguous chunks of memory the manager is handling. This is decidedly not something typically under client control. The only n the client really has control over is the size of the chunk of memory she wants. Often this bears no relation to the amount of time the allocator takes. A large n could be allocated as quickly as a small n, or it might take much longer, or it might even be unserviceable. All this could change for the same n depending on how previous allocations and deallocations from other clients came in. So really, unless you are implementing a heap, the proper answer is that it is non-deterministic.

This is why hard realtime programmers try to avoid dynamic allocation (after startup).
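To make the alternative concrete, here is a minimal sketch (the names and sizes are illustrative, not from any particular RTOS) of the kind of fixed-block pool a hard realtime system would set up at startup: allocation and release are each a single stack operation, O(1) no matter what was allocated or freed before.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative fixed-block pool: every block has the same size, so both
 * pool_alloc and pool_free are one array access each -- O(1) regardless
 * of the history of prior allocations and deallocations. */
#define BLOCK_SIZE 64
#define NUM_BLOCKS 128

static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static void *free_stack[NUM_BLOCKS];
static size_t free_top;

static void pool_init(void)
{
    free_top = 0;
    for (size_t i = 0; i < NUM_BLOCKS; i++)
        free_stack[free_top++] = pool[i];   /* push every block once */
}

static void *pool_alloc(void)
{
    return free_top ? free_stack[--free_top] : NULL;   /* pop */
}

static void pool_free(void *p)
{
    free_stack[free_top++] = p;                        /* push */
}
```

Freed blocks are reused LIFO; because every block is the same size there is no search and no fragmentation, which is exactly the determinism this answer is pointing at.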

别靠近我心 2024-07-15 02:17:29


The time complexity for a heap allocator can be different on different systems, depending on what they might be optimizing for.

On desktop systems, the heap allocator probably uses a mixture of different strategies including caching recent allocations, lookaside lists for common allocation sizes, bins of memory chunks with certain size characteristics, etc. to try and keep allocation time down but also keep fragmentation manageable. See the notes for Doug Lea's malloc implementation for an overview of the various techniques that are used: http://g.oswego.edu/dl/html/malloc.html

For simpler systems, a strategy of first fit or best fit might be used, with the free blocks stored on a linked list (which would give an O(N) time, with N being the number of free blocks). But a more sophisticated storage system such as an AVL tree might be used to be able to locate free blocks in O(log N) time (http://www.oocities.org/wkaras/heapmm/heapmm.html).
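As a sketch of that simpler case (the struct and function names are made up for illustration), a first-fit scan over a singly linked free list looks like this; in the worst case it visits every free block, which is where the O(N) comes from:

```c
#include <assert.h>
#include <stddef.h>

/* Toy first-fit search over a singly linked free list. The worst case
 * walks the whole list, so the cost is O(N) in the number of free blocks. */
struct free_block {
    size_t size;
    struct free_block *next;
};

/* Return the first block whose size is >= want, or NULL if none fits. */
static struct free_block *first_fit(struct free_block *head, size_t want)
{
    for (struct free_block *b = head; b != NULL; b = b->next)
        if (b->size >= want)
            return b;
    return NULL;
}
```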

A realtime system might use a heap allocator like TLSF (Two-Level Segregate Fit), which has an O(1) allocation cost: http://www.gii.upv.es/tlsf/
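The constant-time behaviour comes from how TLSF maps a request size to a pair of free-list indices using bit operations alone. A rough sketch of that mapping (the constants are illustrative, not TLSF's actual parameters, and sizes below 2^SL_BITS are not handled):

```c
#include <assert.h>

/* TLSF-style two-level mapping: the first-level index is the position of
 * the highest set bit of the size; the second-level index is the next
 * SL_BITS bits below it. Real implementations use a CLZ instruction; the
 * loop here is bounded by the word width, so the mapping is still O(1). */
#define SL_BITS 4   /* illustrative: 16 sub-bins per power-of-two range */

static int fls_u32(unsigned int x)   /* index of the highest set bit */
{
    int r = 0;
    while (x >>= 1)
        r++;
    return r;
}

static void mapping(unsigned int size, int *fl, int *sl)
{
    *fl = fls_u32(size);
    *sl = (int)((size >> (*fl - SL_BITS)) & ((1u << SL_BITS) - 1));
}
```

Each (fl, sl) pair names one segregated free list, so locating a suitable list never requires scanning.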

苏辞 2024-07-15 02:17:29


Just two remarks:

  • TLSF is O(1) in the sense that it has no loops, and it manages heaps of up to 2 GB.
    Although it is really hard to believe, just check the code.

  • It is not true that the "best fit" policy (finding the tightest block) is the most
    suitable for achieving low fragmentation.
    This claim is far from trivial to demonstrate; in fact it has not been formally proven, but there is a lot of evidence pointing in that direction (a nice research topic).

我不吻晚风 2024-07-15 02:17:29


I would think it would generally be O(n) where n is the number of available memory blocks (since you have to scan the available memory blocks to find a suitable one).

Having said that, I've seen optimizations that can make it faster, specifically maintaining multiple lists of available blocks depending on their size ranges (so blocks less than 1k are in one list, blocks from 1k to 10k are in another list and so on).

This is still O(n) however, just with a smaller n.
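A sketch of that size-range binning (the thresholds are just the 1k/10k ranges from the paragraph above, and the names are made up): the bin is found in O(1), but the list inside each bin is still scanned linearly.

```c
#include <assert.h>
#include <stddef.h>

/* Pick a free-list bin by size range, as described above. Finding the
 * bin is O(1); searching within the chosen bin's list remains O(n),
 * only with a smaller n. */
static int bin_index(size_t size)
{
    if (size < 1024)
        return 0;        /* blocks under 1k  */
    if (size < 10240)
        return 1;        /* blocks 1k to 10k */
    return 2;            /* everything larger */
}
```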

I'd be interested in seeing your source that there's a heap allocation that's unbounded (if, by that, you mean it could take forever).

沉睡月亮 2024-07-15 02:17:29


Just check out how typical allocators work.

A bump-the-pointer allocator works in O(1), and it's a small '1' at that.
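A bump allocator really is just one pointer increment per request; a minimal sketch (the arena size and alignment are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Bump-the-pointer allocation: each request advances one offset, O(1).
 * Nothing is freed individually; the whole arena is reset at once. */
static unsigned char arena[4096];
static size_t offset;

static void *bump_alloc(size_t n)
{
    n = (n + 7) & ~(size_t)7;               /* round up to 8-byte alignment */
    if (offset + n > sizeof arena)
        return NULL;                        /* arena exhausted */
    void *p = arena + offset;
    offset += n;
    return p;
}
```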

For a segregated-storage allocator, allocating k bytes means "return the first block from List(n)", where List(n) is the list of blocks of n bytes with n >= k. It might find that List(n) is empty, so a block from the next list (List(2n)) would have to be split, with both resulting n-byte blocks being put onto List(n), and this effect might ripple through all available sizes, making for a complexity of O(ns), where ns is the number of different sizes available. Since ns = log(N), where N is the largest available block size, even that would be small. In most cases, especially after a number of blocks have been allocated and deallocated, complexity is O(1).
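The ripple cost can be seen directly: producing a k-byte block by repeatedly halving a larger power-of-two block takes log2(N/k) splits. A tiny illustrative helper:

```c
#include <assert.h>
#include <stddef.h>

/* Number of halvings before a block of 'have' bytes yields a block of
 * 'want' bytes -- the "ripple" described above, bounded by log2 of the
 * largest block size. Assumes both are powers of two with have >= want. */
static int splits_needed(size_t have, size_t want)
{
    int n = 0;
    while (have > want) {
        have /= 2;
        n++;
    }
    return n;
}
```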
