What is memory fragmentation?
I've heard the term "memory fragmentation" used a few times in the context of C++ dynamic memory allocation. I've found some questions about how to deal with memory fragmentation, but can't find a direct question that deals with it itself. So:
- What is memory fragmentation?
- How can I tell if memory fragmentation is a problem for my application? What kind of program is most likely to suffer?
- What are good common ways to deal with memory fragmentation?
Also:
- I've heard using dynamic allocations a lot can increase memory fragmentation. Is this true? In the context of C++, I understand all the standard containers (std::string, std::vector, etc.) use dynamic memory allocation. If these are used throughout a program (especially std::string), is memory fragmentation more likely to be a problem?
- How can memory fragmentation be dealt with in an STL-heavy application?
Imagine that you have a "large" (32 bytes) expanse of free memory:

    ................................

Now, allocate some of it (5 allocations):

    AAABBBCCCCDDDDEEEE..............

Now, free the first four allocations but not the fifth:

    ..............EEEE..............

Now, try to allocate 16 bytes. Oops, I can't, even though there's nearly double that much free: 28 bytes are free in total, but the largest contiguous hole is only 14 bytes.
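The failed 16-byte allocation above can be reproduced with a toy simulation. ToyHeap below is a hypothetical first-fit allocator over a 32-cell bitmap, not a real malloc; it exists only to make the arithmetic concrete:

```cpp
#include <array>
#include <cstddef>

// Toy first-fit allocator over a 32-"byte" heap: each cell of the bitmap
// marks one byte as used or free. Purely illustrative.
struct ToyHeap {
    std::array<bool, 32> used{};  // false = free

    // Returns the start index of an n-byte block, or -1 if no
    // contiguous run of n free bytes exists (fragmentation!).
    int alloc(std::size_t n) {
        std::size_t run = 0;
        for (std::size_t i = 0; i < used.size(); ++i) {
            run = used[i] ? 0 : run + 1;
            if (run == n) {
                std::size_t start = i + 1 - n;
                for (std::size_t j = start; j < start + n; ++j) used[j] = true;
                return static_cast<int>(start);
            }
        }
        return -1;
    }

    void release(int start, std::size_t n) {
        for (std::size_t j = static_cast<std::size_t>(start); j < start + n; ++j)
            used[j] = false;
    }

    std::size_t total_free() const {
        std::size_t f = 0;
        for (bool u : used) if (!u) ++f;
        return f;
    }
};
```

Five allocations of sizes 3, 3, 4, 4, 4, followed by freeing the first four, leave 28 bytes free, yet `alloc(16)` returns -1 because the largest hole is only 14 bytes.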
On systems with virtual memory, fragmentation is less of a problem than you might think, because large allocations only need to be contiguous in virtual address space, not in physical address space. So in my example, if I had virtual memory with a page size of 2 bytes then I could make my 16-byte allocation with no problem: the 8 pages backing it can sit scattered wherever they fit in physical memory, as long as they are mapped to 16 contiguous bytes of the (much bigger) virtual address space.
The classic symptom of memory fragmentation is that you try to allocate a large block and you can't, even though you appear to have enough memory free. Another possible consequence is the inability of the process to release memory back to the OS (because each of the large blocks it has allocated from the OS, for malloc etc. to sub-divide, has something left in it, even though most of each block is now unused).

Tactics to prevent memory fragmentation in C++ work by allocating objects from different areas according to their size and/or their expected lifetime. So if you're going to create a lot of objects and destroy them all together later, allocate them from a memory pool. Any other allocations you do in between them won't be from the pool, hence won't be located in between them in memory, so memory will not be fragmented as a result. Or, if you're going to allocate a lot of objects of the same size then allocate them from the same pool. Then a stretch of free space in the pool can never be smaller than the size you're trying to allocate from that pool.
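One possible shape of that same-size pool strategy is sketched below. FixedPool is a made-up, minimal class, not a standard facility: it grabs one block up front and hands out equal-size slots from a free list, so free space inside the pool can never fragment below the slot size:

```cpp
#include <cstddef>
#include <vector>

// Minimal fixed-size pool: one big allocation up front, plus a free list of
// slots. Because every slot has the same size, any free slot satisfies the
// next request; the pool's free space never fragments below sizeof(T).
// Note: relies on operator new's max_align_t alignment; fine for ordinary T.
template <typename T>
class FixedPool {
public:
    explicit FixedPool(std::size_t capacity)
        : storage_(capacity * sizeof(T)) {
        for (std::size_t i = 0; i < capacity; ++i)
            free_list_.push_back(storage_.data() + i * sizeof(T));
    }
    void* allocate() {
        if (free_list_.empty()) return nullptr;  // pool exhausted
        void* p = free_list_.back();
        free_list_.pop_back();
        return p;
    }
    void deallocate(void* p) {
        free_list_.push_back(static_cast<unsigned char*>(p));
    }
    std::size_t free_slots() const { return free_list_.size(); }

private:
    std::vector<unsigned char> storage_;   // the single big block
    std::vector<unsigned char*> free_list_;
};
```

Freed slots go straight back on the free list and are reused by the next allocation, so other allocations never land between pool objects.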
Generally you don't need to worry about it much, unless your program is long-running and does a lot of allocation and freeing. It's when you have mixtures of short-lived and long-lived objects that you're most at risk, but even then malloc will do its best to help. Basically, ignore it until your program has allocation failures or unexpectedly causes the system to run low on memory (catch this in testing, for preference!).

The standard libraries are no worse than anything else that allocates memory, and standard containers all have an Alloc template parameter which you could use to fine-tune their allocation strategy if absolutely necessary.
Memory fragmentation is when most of your memory is allocated in a large number of non-contiguous blocks, or chunks - leaving a good percentage of your total memory unallocated, but unusable for most typical scenarios. This results in out of memory exceptions, or allocation errors (i.e. malloc returns null).
The easiest way to think about this is to imagine you have a big empty wall that you need to put pictures of varying sizes on. Each picture takes up a certain size and you obviously can't split it into smaller pieces to make it fit. You need an empty spot on the wall, the size of the picture, or else you can't put it up. Now, if you start hanging pictures on the wall and you're not careful about how you arrange them, you will soon end up with a wall that's partially covered with pictures, and even though you may have empty spots, most new pictures won't fit because they're larger than the available spots. You can still hang really small pictures, but most won't fit. So you'll have to re-arrange (compact) the ones already on the wall to make room for more.

Now, imagine that the wall is your (heap) memory and the pictures are objects. That's memory fragmentation.
How can I tell if memory fragmentation is a problem for my application? What kind of program is most likely to suffer?
A telltale sign that you may be dealing with memory fragmentation is if you get many allocation errors, especially when the percentage of used memory is high - but you haven't yet used up all of the memory - so technically you should have plenty of room for the objects you are trying to allocate.
When memory is heavily fragmented, memory allocations will likely take longer because the memory allocator has to do more work to find a suitable space for the new object. If in turn you have many memory allocations (which you probably do since you ended up with memory fragmentation) the allocation time may even cause noticeable delays.
What are good common ways to deal with memory fragmentation?
Use a good algorithm for allocating memory. Instead of allocating memory for a lot of small objects, pre-allocate memory for a contiguous array of those smaller objects. Sometimes being a little wasteful when allocating memory can go a long way for performance and may save you the trouble of having to deal with memory fragmentation.
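As a sketch of that advice (the helper name and sizes here are made up for illustration), reserving one contiguous array up front replaces thousands of scattered small allocations with a single large one:

```cpp
#include <cstddef>
#include <vector>

struct Particle { float x, y, z; };

// Pre-allocate one contiguous array instead of many small allocations.
// Returns true if the vector's storage never moved during the fills,
// i.e. the single up-front reservation was the only allocation made.
bool fill_without_reallocating(std::size_t n) {
    std::vector<Particle> particles;
    particles.reserve(n);                      // one large allocation up front
    const Particle* base = particles.data();
    for (std::size_t i = 0; i < n; ++i)
        particles.push_back({0.f, 0.f, 0.f});  // no reallocation happens
    return particles.data() == base;           // storage never moved
}
```

Compare this with calling `new Particle` ten thousand times: each of those objects could land anywhere in the heap, interleaved with other allocations.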
Memory fragmentation is the same concept as disk fragmentation: it refers to space being wasted because the areas in use are not packed closely enough together.
Suppose for a simple toy example that you have ten bytes of memory:

    |   |   |   |   |   |   |   |   |   |   |
      0   1   2   3   4   5   6   7   8   9

Now let's allocate three three-byte blocks, named A, B, and C:

    | A | A | A | B | B | B | C | C | C |   |
      0   1   2   3   4   5   6   7   8   9

Now deallocate block B:

    | A | A | A |   |   |   | C | C | C |   |
      0   1   2   3   4   5   6   7   8   9
Now what happens if we try to allocate a four-byte block D? Well, we have four bytes of memory free, but we don't have four contiguous bytes of memory free, so we can't allocate D! This is inefficient use of memory, because we should have been able to store D, but we were unable to. And we can't move C to make room, because very likely some variables in our program are pointing at C, and we can't automatically find and change all of these values.
How do you know it's a problem? Well, the biggest sign is that your program's virtual memory size is considerably larger than the amount of memory you're actually using. In a real-world example, you would have many more than ten bytes of memory, so D would just get allocated starting at byte 9, and bytes 3-5 would remain unused unless you later allocated something three bytes long or smaller.
In this example, 3 bytes is not a whole lot to waste, but consider a more pathological case where two allocations of a couple of bytes each are, for example, ten megabytes apart in memory, and you need to allocate a block of size 10 megabytes + 1 byte. You have to go ask the OS for over ten megabytes more virtual memory to do that, even though you're just one byte shy of having enough space already.
How do you prevent it? The worst cases tend to arise when you frequently create and destroy small objects, since that tends to produce a "swiss cheese" effect with many small objects separated by many small holes, making it impossible to allocate larger objects in those holes. When you know you're going to be doing this, an effective strategy is to pre-allocate a large block of memory as a pool for your small objects, and then manually manage the creation of the small objects within that block, rather than letting the default allocator handle it.
In general, the fewer allocations you do, the less likely memory is to get fragmented. However, STL deals with this rather effectively. If you have a string which is using the entirety of its current allocation and you append one character to it, it doesn't simply re-allocate to its current length plus one, it doubles its length. This is a variation on the "pool for frequent small allocations" strategy. The string is grabbing a large chunk of memory so that it can deal efficiently with repeated small increases in size without doing repeated small reallocations. All STL containers in fact do this sort of thing, so generally you won't need to worry too much about fragmentation caused by automatically-reallocating STL containers.
Although of course STL containers don't pool memory between each other, so if you're going to create many small containers (rather than a few containers that get resized frequently) you may have to concern yourself with preventing fragmentation in the same way you would for any frequently-created small objects, STL or not.
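The geometric capacity growth described above is easy to observe. This made-up helper counts buffer reallocations while appending characters one at a time; the exact growth factor is implementation-defined, but the count stays tiny either way:

```cpp
#include <cstddef>
#include <string>

// Counts how many times the string's buffer is reallocated while appending
// n characters one at a time. Geometric capacity growth keeps this small:
// roughly log(n) reallocations instead of n.
std::size_t count_reallocations(std::size_t n) {
    std::string s;
    std::size_t reallocations = 0;
    std::size_t last_capacity = s.capacity();
    for (std::size_t i = 0; i < n; ++i) {
        s.push_back('x');
        if (s.capacity() != last_capacity) {   // buffer moved to a bigger block
            ++reallocations;
            last_capacity = s.capacity();
        }
    }
    return reallocations;
}
```

For 100 000 appended characters a typical implementation reallocates only a few dozen times at most, which is the "pool for frequent small allocations" effect in action.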
Memory fragmentation is the problem of memory becoming unusable even though it is theoretically available. There are two kinds of fragmentation: internal fragmentation is memory that is allocated but cannot be used (e.g. when memory is allocated in 8-byte chunks but the program repeatedly does single allocations when it needs only 4 bytes); external fragmentation is the problem of free memory becoming divided into many small chunks, so that large allocation requests cannot be met although there is enough overall free memory.

Memory fragmentation is a problem if your program uses much more system memory than its actual payload data would require (and you've ruled out memory leaks).
Use a good memory allocator. IIRC, those that use a "best fit" strategy are generally much superior at avoiding fragmentation, if a little slower. However, it has also been shown that for any allocation strategy, there are pathological worst cases. Fortunately, the typical allocation patterns of most applications are actually relatively benign for the allocators to handle. There's a bunch of papers out there if you're interested in the details:
International Workshop on Memory Management, Springer Verlag LNCS, 1995
In ACM SIG-PLAN Notices, volume 34 No. 3, pages 26-36, 1999
Update:
Google TCMalloc: Thread-Caching Malloc
It has been found to be quite good at handling fragmentation in long-running processes.
I have been developing a server application that had problems with memory fragmentation on HP-UX 11.23/11.31 ia64.
It looked like this. There was a process that made memory allocations and deallocations and ran for days. And even though there were no memory leaks, the memory consumption of the process kept increasing.
About my experience. On HP-UX it is very easy to find memory fragmentation using HP-UX gdb. You set a break-point and when you hit it you run this command:

info heap

and see all memory allocations for the process and the total size of the heap. Then you continue your program, and some time later you hit the break-point again and run info heap again. If the total size of the heap is bigger but the number and the size of the separate allocations are the same, then it is likely that you have memory fragmentation problems. If necessary, do this check a few more times.

My way of improving the situation was this. After I had done some analysis with HP-UX gdb I saw that the memory problems were caused by the fact that I used std::vector for storing some types of information from a database. std::vector requires that its data be kept in one contiguous block. I had a few containers based on std::vector. These containers were regularly recreated. There were often situations when new records were added to the database and after that the containers were recreated. And since the recreated containers were bigger, they did not fit into the available blocks of free memory and the runtime asked for a new, bigger block from the OS. As a result, even though there were no memory leaks, the memory consumption of the process grew. I improved the situation when I changed the containers: instead of std::vector I started using std::deque, which has a different way of allocating memory for its data.

I know that one of the ways to avoid memory fragmentation on HP-UX is to use either the Small Block Allocator or MallocNextGen. On RedHat Linux the default allocator seems to handle the allocation of a lot of small blocks pretty well. On Windows there is the Low-fragmentation Heap, which addresses the problem of a large number of small allocations.

My understanding is that in an STL-heavy application you first have to identify the problem. Memory allocators (like the one in libc) actually handle the problem of a lot of small allocations, which is typical for std::string (for instance, in my server application there are lots of STL strings but, as I see from running info heap, they are not causing any problems). My impression is that you need to avoid frequent large allocations. Unfortunately, there are situations when you can't avoid them and have to change your code. As I said, in my case I improved the situation when I switched to std::deque. If you identify your memory fragmentation it might be possible to talk about it more precisely.
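The difference the author observed can be seen in how std::deque grows. This made-up helper shows that a deque never relocates existing elements as it grows, because it allocates additional fixed-size chunks instead of demanding one ever-larger contiguous block (the standard guarantees push_back on a deque invalidates iterators but not references to existing elements):

```cpp
#include <cstddef>
#include <deque>

// std::deque stores elements in many fixed-size chunks rather than a single
// contiguous block, so growing it never forces one huge reallocation, and
// references to existing elements stay valid across push_back.
bool deque_front_is_stable(std::size_t n) {
    std::deque<int> d;
    d.push_back(42);
    const int* first = &d.front();          // remember where element 0 lives
    for (std::size_t i = 0; i < n; ++i)
        d.push_back(static_cast<int>(i));   // grows chunk by chunk
    return first == &d.front() && *first == 42;  // element never moved
}
```

A std::vector in the same loop would have moved its entire buffer many times, each time requesting a new, larger contiguous block.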
Memory fragmentation is most likely to occur when you allocate and deallocate many objects of varying sizes. Suppose you have, for example, the following layout in memory:

    | obj1 (10kb) | obj2 (20kb) | obj3 (30kb) | free (100kb) |

Now, when obj2 is released, you have 120kb of unused memory, but you cannot allocate a full block of 120kb, because the memory is fragmented: the free space is split into a 20kb hole and a 100kb hole.

Common techniques to avoid that effect include ring buffers and object pools. In the context of the STL, methods like std::vector::reserve() can help.
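One of those techniques, a fixed-capacity ring buffer, can be sketched in a few lines. RingBuffer is a hypothetical class (C++17 for std::optional): all storage is a single in-place array whose slots are recycled in FIFO order, so the buffer causes no heap churn at all once constructed:

```cpp
#include <array>
#include <cstddef>
#include <optional>

// Fixed-capacity ring buffer: storage is one in-place array, and slots are
// reused in FIFO order, so pushing and popping never touches the heap.
template <typename T, std::size_t N>
class RingBuffer {
public:
    bool push(const T& v) {
        if (size_ == N) return false;        // full: caller decides what to do
        buf_[(head_ + size_) % N] = v;
        ++size_;
        return true;
    }
    std::optional<T> pop() {
        if (size_ == 0) return std::nullopt; // empty
        T v = buf_[head_];
        head_ = (head_ + 1) % N;             // oldest slot becomes reusable
        --size_;
        return v;
    }
    std::size_t size() const { return size_; }

private:
    std::array<T, N> buf_{};
    std::size_t head_ = 0, size_ = 0;
};
```

Because the capacity is fixed and slots are recycled, a ring buffer trades flexibility for complete immunity to fragmentation, which is why it is popular in long-running producers/consumers.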
A very detailed answer on memory fragmentation can be found here.
http://library.softwareverify.com/memory-fragmentation-your-worst-nightmare/
This is the culmination of 11 years of answers I have been providing to people asking me questions about memory fragmentation at softwareverify.com.
When your app uses dynamic memory, it allocates and frees chunks of memory. In the beginning, the whole memory space of your app is one contiguous block of free memory. However, when you allocate and free blocks of different sizes, the memory starts to get fragmented, i.e. instead of a big contiguous free block and a number of contiguous allocated blocks, allocated and free blocks end up mixed together. Since the free blocks have limited size, it is difficult to reuse them. E.g. you may have 1000 bytes of free memory, but can't allocate memory for a 100-byte block, because all the free blocks are at most 50 bytes long.
Another, unavoidable, but less problematic source of fragmentation is that in most architectures, memory addresses must be aligned to 2-, 4-, or 8-byte boundaries (i.e. the addresses must be multiples of 2, 4, 8, etc.). This means that, for example, a struct containing a char field followed by an int field may have a size of 8 instead of 5, because the int field must be aligned to a 4-byte boundary, leaving 3 bytes of padding after the char.

The obvious answer is that you get an out-of-memory exception.
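The padding effect can be checked directly with sizeof. Note that a struct of plain chars has no padding at all (char aligns to 1); padding appears when a field with stricter alignment follows a smaller one. The sizes asserted below assume a common ABI where int is 4 bytes with 4-byte alignment:

```cpp
// Padding demonstration; sizes assume a typical ABI (int: 4 bytes, 4-aligned).
struct Packed3 { char a, b, c; };  // chars align to 1: no padding, size 3
struct Padded  { char c; int i; }; // 3 padding bytes inserted after 'c': size 8
```

The compiler inserts the padding so that `i` lands on a 4-byte boundary both inside a single Padded object and inside every element of a Padded array.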
Apparently there is no good portable way to detect memory fragmentation in C++ apps. See this answer for more details.
It is difficult in C++, since you use direct memory addresses in pointers, and you have no control over who references a specific memory address. So rearranging the allocated memory blocks (the way the Java garbage collector does) is not an option.
A custom allocator may help by managing the allocation of small objects in a bigger chunk of memory, and reusing the free slots within that chunk.
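One way such a custom allocator can look is a bump ("arena") allocator plugged into a container through its Allocator template parameter. Arena and ArenaAlloc are made-up names, and the design is deliberately simplistic: deallocate is a no-op, so memory is reclaimed only when the whole arena goes away, which is exactly what makes it fragmentation-free:

```cpp
#include <cstddef>
#include <new>
#include <vector>

// All allocations bump an offset inside one preallocated buffer.
struct Arena {
    std::vector<unsigned char> buf;
    std::size_t offset = 0;
    explicit Arena(std::size_t n) : buf(n) {}
    void* bump(std::size_t n, std::size_t align) {
        std::size_t p = (offset + align - 1) / align * align;  // align up
        if (p + n > buf.size()) throw std::bad_alloc();
        offset = p + n;
        return buf.data() + p;
    }
};

// Minimal C++11 allocator interface over an Arena.
template <typename T>
struct ArenaAlloc {
    using value_type = T;
    Arena* arena;
    explicit ArenaAlloc(Arena* a) : arena(a) {}
    template <typename U> ArenaAlloc(const ArenaAlloc<U>& o) : arena(o.arena) {}
    T* allocate(std::size_t n) {
        return static_cast<T*>(arena->bump(n * sizeof(T), alignof(T)));
    }
    // No-op: space is reclaimed en masse when the arena itself is destroyed.
    void deallocate(T*, std::size_t) {}
    template <typename U>
    bool operator==(const ArenaAlloc<U>& o) const { return arena == o.arena; }
    template <typename U>
    bool operator!=(const ArenaAlloc<U>& o) const { return arena != o.arena; }
};
```

Old buffers abandoned by a growing container simply waste space inside the arena, so this fits the "create a lot, destroy all together" pattern rather than general-purpose use.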
This is a super-simplified version for dummies.
As objects get created in memory, they get added to the end of the used portion in memory.
If an object that is not at the end of the used portion of memory is deleted, meaning this object was in between 2 other objects, it will create a "hole".
This is what's called fragmentation.
When you want to add an item on the heap, what happens is that the computer has to do a search for space to fit that item. That's why dynamic allocations, when not done on a memory pool or with a pooled allocator, can "slow" things down. For a heavy STL application, if you're doing multi-threading, there is the Hoard allocator or Intel's TBB allocator.
Now, when memory is fragmented two things can occur: an allocation can fail outright because no contiguous block is large enough, or allocations can get slower because the allocator has to search harder for a suitable free block.
Memory fragmentation occurs because memory blocks of different sizes are requested. Consider a buffer of 100 bytes. You request two chars, then an integer. Now you free the two chars, then request a new integer - but that integer can't fit in the space of the two chars. That memory cannot be re-used because it is not in a large enough contiguous block to re-allocate. On top of that, you've invoked a lot of allocator overhead for your chars.
Essentially, memory only comes in blocks of a certain size on most systems. Once you split these blocks up, they cannot be rejoined until the whole block is freed. This can lead to whole blocks in use when actually only a small part of the block is in use.
The primary way to reduce heap fragmentation is to make larger, less frequent allocations. In the extreme, you can use a managed heap that is capable of moving objects, at least, within your own code. This completely eliminates the problem - from a memory perspective, anyway. Obviously moving objects and such has a cost. In reality, you only really have a problem if you are allocating very small amounts off the heap often. Using contiguous containers (vector, string, etc) and allocating on the stack as much as humanly possible (always a good idea for performance) is the best way to reduce it. This also increases cache coherence, which makes your application run faster.
What you should remember is that on a 32bit x86 desktop system, you have an entire 2GB of memory, which is split into 4KB "pages" (pretty sure the page size is the same on all x86 systems). You will have to invoke some omgwtfbbq fragmentation to have a problem. Fragmentation really is an issue of the past, since modern heaps are excessively large for the vast majority of applications, and there's a prevalence of systems that are capable of withstanding it, such as managed heaps.
A nice (=horrifying) example for the problems associated with memory fragmentation was the development and release of "Elemental: War of Magic", a computer game by Stardock.
The game was built for 32-bit/2GB memory, and the developers had to do a lot of optimisation in memory management to make the game work within those 2GB of memory. As the "optimisation" led to constant allocation and de-allocation, heap memory fragmentation occurred over time and eventually crashed the game, every time.
There is a "war story" interview on YouTube.