What is all this uncommitted, reserved memory in my process?
I'm using VMMap from SysInternals to look at memory allocated by my Win32 C++ process on WinXP, and I see a bunch of allocations where portions of the allocated memory are reserved but not committed. As far as I can tell, from my reading and testing, all of the common memory allocators (e.g., malloc, new, LocalAlloc, GlobalAlloc) used in a C++ program always allocate fully committed blocks of memory.
Heaps are a common example of code that reserves memory but doesn't commit it until needed. I suspect that some of these blocks are Windows/CRT heaps, but there appear to be more of these blocks than I would expect for heaps. I see on the order of 30 of these blocks in my process, between 64 KB and 8 MB in size, and I know that my code never intentionally calls VirtualAlloc to allocate reserved, uncommitted memory.
Here are a couple of examples from VMMap: http://www.flickr.com/photos/95123032@N00/5280550393/
What else would allocate such blocks of memory, where much of it is reserved but not committed? Would it make sense that my process has 30 heaps? Thanks.
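For reference, the reserve/commit distinction being asked about can be reproduced directly with VirtualAlloc. This is a minimal sketch (not from the original question) showing a region that VMMap would report as mostly reserved but not committed:

#include <windows.h>
#include <stdio.h>

int main()
{
    // Reserve 1 MB of address space: no physical memory or pagefile is
    // charged yet, but the range is taken out of the 2 GB address space.
    void* base = VirtualAlloc(NULL, 1024 * 1024, MEM_RESERVE, PAGE_NOACCESS);

    // Commit only the first 64 KB; the remaining 960 KB stays reserved,
    // which is exactly the "reserved but not committed" state VMMap shows.
    VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE);

    MEMORY_BASIC_INFORMATION mbi;
    VirtualQuery(base, &mbi, sizeof(mbi));
    printf("first region: %s, %Iu bytes\n",
           mbi.State == MEM_COMMIT ? "committed" : "reserved",
           mbi.RegionSize);

    VirtualFree(base, 0, MEM_RELEASE);  // release the whole reservation
    return 0;
}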
3 Answers
I figured it out - it's the CRT heap that gets allocated by calls to malloc. If you allocate a large chunk of memory (e.g., 2 MB) using malloc, it allocates a single committed block of memory. But if you allocate smaller chunks (say 177 KB), then it will reserve a 1 MB chunk of memory, but only commit approximately what you asked for (e.g., 184 KB for my 177 KB request).

When you free that small chunk, that larger 1 MB chunk is not returned to the OS. Everything but 4 KB is uncommitted, but the full 1 MB is still reserved. If you then call malloc again, it will attempt to use that 1 MB chunk to satisfy your request. If it can't satisfy your request with the memory that it's already reserved, it will allocate a new chunk of memory that's twice the previous allocation (in my case it went from 1 MB to 2 MB). I'm not sure if this pattern of doubling continues or not.

To actually return your freed memory to the OS, you can call _heapmin. I would think that this would make a future large allocation more likely to succeed, but it would all depend on memory fragmentation, and perhaps _heapmin already gets called if an allocation fails (?), I'm not sure. There would also be a performance hit, since _heapmin would release the memory (taking time) and malloc would then need to re-allocate it from the OS when needed again. This information is for Windows/32 XP; your mileage may vary.

UPDATE: In my testing, _heapmin did absolutely nothing. And the malloc heap is only used for blocks that are less than 512 KB. Even if there are MBs of contiguous free space in the malloc heap, it will not use it for requests over 512 KB. In my case, this freed, unused, yet reserved malloc memory chewed up huge parts of my process' 2 GB address space, eventually leading to memory allocation failures. And since _heapmin doesn't return the memory to the OS, I haven't found any solution to this problem, other than restarting my process or writing my own memory manager.
Whenever a thread is created in your application a certain (configurable) amount of memory will be reserved in the address space for the call stack of the thread. There's no need to commit all the reserved memory unless your thread is actually going to need all of that memory. So only a portion needs to be committed.
If the stack needs more than the committed amount, additional pages are committed on demand from the reserved region.
The practical consideration is that the reserved size is a hard limit on how large the stack can grow, and it reduces the address space available to the application. However, by committing only a portion of the reservation, the system doesn't have to back the whole stack with physical memory or pagefile until it's actually needed.
Therefore it is possible for each thread to have a portion of reserved uncommitted memory. I'm unsure what the page type will be in those cases.
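As an illustration (hypothetical sizes, not from the original answer), you can ask for a large stack reservation explicitly when creating a thread; only a small part of it is committed up front, and the rest shows up in VMMap as reserved:

#include <windows.h>

static DWORD WINAPI Worker(LPVOID)
{
    Sleep(5000);  // keep the thread (and its stack region) alive for inspection
    return 0;
}

int main()
{
    // Reserve an 8 MB stack for this thread. With
    // STACK_SIZE_PARAM_IS_A_RESERVATION the size is the reservation, not the
    // initial commit, so most of the 8 MB remains reserved-only until used.
    HANDLE h = CreateThread(NULL, 8 * 1024 * 1024, Worker, NULL,
                            STACK_SIZE_PARAM_IS_A_RESERVATION, NULL);
    if (h)
    {
        WaitForSingleObject(h, INFINITE);
        CloseHandle(h);
    }
    return 0;
}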
Could they be the DLLs loaded into your process? DLLs (and the executable) are memory mapped into the process address space. I believe this initially just reserves space. The space is backed by the files themselves (at least initially) rather than the pagefile.
Only the code that's actually touched gets paged in. If I understand the terminology correctly, that's when it's committed.
You could confirm this by running your application in a debugger and looking at the modules that are loaded and comparing their locations and sizes to what you see in VMMap.
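A programmatic version of that check (a sketch, not from the original answer; it assumes the Psapi module-enumeration functions available on XP) is to list the loaded modules with their base addresses and sizes and compare them against the mapped-image regions in VMMap:

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

#pragma comment(lib, "psapi.lib")

int main()
{
    HMODULE modules[1024];
    DWORD needed = 0;
    HANDLE process = GetCurrentProcess();

    if (EnumProcessModules(process, modules, sizeof(modules), &needed))
    {
        DWORD count = needed / sizeof(HMODULE);
        for (DWORD i = 0; i < count; ++i)
        {
            char name[MAX_PATH] = "";
            MODULEINFO info = {};
            GetModuleFileNameExA(process, modules[i], name, MAX_PATH);
            GetModuleInformation(process, modules[i], &info, sizeof(info));
            // Compare these bases and sizes with the image regions VMMap shows.
            printf("%p  %8lu KB  %s\n",
                   info.lpBaseOfDll, info.SizeOfImage / 1024, name);
        }
    }
    return 0;
}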