Heap fragmentation in 64-bit land
In the past, when I've worked on long-running C++ daemons I've had to deal with heap fragmentation issues. Tricks like keeping a pool of my large allocations were necessary to keep from running out of contiguous heap space.
Is this still an issue with a 64 bit address space? Perf is not a concern for me, so I would prefer to simplify my code and not deal with things like buffer pools anymore. Does anyone have any experience or stories about this issue? I'm using Linux, but I imagine many of the same issues apply to Windows.
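For anyone who hasn't used the trick the question describes, here is a minimal sketch of a large-buffer pool, assuming fixed-size buffers that are recycled instead of being returned to the heap; the BufferPool name and interface are illustrative, not taken from any particular library.

    // Minimal sketch of the "pool of large allocations" trick: buffers of one
    // fixed size are recycled rather than freed, so the contiguous space is
    // claimed once and never handed back to a potentially fragmented heap.
    #include <cstddef>
    #include <memory>
    #include <utility>
    #include <vector>

    class BufferPool {
    public:
        explicit BufferPool(std::size_t buffer_size) : buffer_size_(buffer_size) {}

        std::unique_ptr<char[]> acquire() {
            if (!free_.empty()) {
                auto buf = std::move(free_.back());
                free_.pop_back();
                return buf;
            }
            return std::make_unique<char[]>(buffer_size_);  // grow on demand
        }

        void release(std::unique_ptr<char[]> buf) {
            free_.push_back(std::move(buf));  // keep for reuse, never free
        }

    private:
        std::size_t buffer_size_;
        std::vector<std::unique_ptr<char[]>> free_;
    };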
3 Answers
No, it is not still an issue.
You are correct that it was an issue on 32-bit systems, but it no longer is an issue on 64-bit systems.
The virtual address space is so large on 64-bit systems (2^48 bytes at the moment on today's x86_64 processors, set to increase gradually towards 2^64 as new x86_64 processors come out) that running out of contiguous virtual address space due to fragmentation is practically impossible (for all but some highly contrived corner cases).
(It is a common error of intuition, caused by the fact that 64 is "only" double 32, to think that a 64-bit address space is somehow roughly double a 32-bit one. In fact a full 64-bit address space is 4 billion times as big as a 32-bit address space.)
Put another way: if it took your 32-bit daemon one week to fragment to the point where it couldn't allocate an x-byte block, then it would take at minimum a thousand years to fragment the 48-bit address space of today's x86_64 processors, and 80 million years to fragment the full 64-bit address space planned for the future.
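To make the arithmetic behind those figures explicit, here is a small back-of-the-envelope program, assuming fragmentation time scales linearly with the size of the address space:

    // Rough arithmetic behind the "thousand years" / "80 million years"
    // figures, assuming time to fragment scales with address-space size.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double week_in_years = 7.0 / 365.25;
        const double ratio_48_vs_32 = std::ldexp(1.0, 48 - 32);  // 2^16 = 65,536
        const double ratio_64_vs_32 = std::ldexp(1.0, 64 - 32);  // 2^32, about 4.3 billion

        std::printf("48-bit vs 32-bit: %.0fx larger -> ~%.0f years\n",
                    ratio_48_vs_32, ratio_48_vs_32 * week_in_years);
        std::printf("64-bit vs 32-bit: %.0fx larger -> ~%.0f million years\n",
                    ratio_64_vs_32, ratio_64_vs_32 * week_in_years / 1e6);
        return 0;
    }

Running it gives roughly 1,250 years for the 48-bit case and about 82 million years for the full 64-bit case, in the same ballpark as the rounded figures above.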
Heap fragmentation is just as much of an issue under 64 bit as under 32 bit. If you make lots of requests with varying lifetimes, then you are going to get a fragmented heap. Unfortunately, 64 bit operating systems don't really help with this, as they still can't really shuffle the small bits of free memory around to make larger contiguous blocks.
If you want to deal with heap fragmentation, you still have to use the same old tricks.
The only way that a 64 bit OS could help here is if there is some amount of memory that is 'large enough' that you would never fragment it.
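One concrete form that a "large enough" region can take on 64-bit Linux, sketched under the assumption that you are willing to manage the region yourself: reserve a huge contiguous virtual range up front with mmap (PROT_NONE pages consume no physical memory) and commit pieces with mprotect as they are needed. The 1 TiB and 16 MiB figures below are arbitrary examples, not recommendations from the answer above.

    // Sketch: reserve a large contiguous virtual range up front on Linux,
    // then commit pieces on demand, so the general-purpose heap never has
    // to supply a large contiguous block.
    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>

    int main() {
        const std::size_t reserve_bytes = 1ULL << 40;  // 1 TiB of address space
        void* base = mmap(nullptr, reserve_bytes, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (base == MAP_FAILED) {
            std::perror("mmap reserve");
            return 1;
        }

        // Commit the first 16 MiB only when it is actually needed.
        const std::size_t commit_bytes = 16u << 20;
        if (mprotect(base, commit_bytes, PROT_READ | PROT_WRITE) != 0) {
            std::perror("mprotect commit");
            return 1;
        }

        static_cast<char*>(base)[0] = 42;  // the committed region is now usable
        munmap(base, reserve_bytes);
        return 0;
    }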
If your process genuinely needs gigabytes of virtual address space, then upgrading to 64-bit really does instantly remove the need for workarounds.
But it's worth working out how much memory you expect your process to be using. If it's only in the region of a gigabyte or less, there's no way even crazy fragmentation would make you run out of 32-bit address space - memory leaks might be the problem.
(Windows is more restrictive, by the way, since it reserves an impolite amount of address space in each process for the OS).
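If you want to check what your process actually uses on Linux before deciding, a quick sketch that reads the VmSize (virtual) and VmRSS (resident) lines from /proc/self/status; the field names are Linux-specific.

    // Print the process's virtual and resident memory usage on Linux.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        std::ifstream status("/proc/self/status");
        std::string line;
        while (std::getline(status, line)) {
            if (line.rfind("VmSize:", 0) == 0 || line.rfind("VmRSS:", 0) == 0) {
                std::cout << line << '\n';  // e.g. "VmSize:   123456 kB"
            }
        }
        return 0;
    }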