Out-of-memory(?) problem on Win32 (in contrast to Linux)
I have the following problem:
A program run on a Windows machine (32-bit, 3.1GB memory, both VC++2008- and MinGW-compiled code) fails with a bad_alloc exception thrown (after allocating around 1.2GB; the exception is thrown when trying to allocate a vector of 9 million doubles, i.e. around 72MB) with plenty of RAM still available (at least according to Task Manager).
The same program run on Linux machines (32-bit, 4GB memory; 32-bit, 2GB memory) runs fine with a peak memory usage of around 1.6GB. Interestingly, the Win32 code generated by MinGW also fails with a bad_alloc when run on the 4GB Linux machine under Wine, albeit at a different (later) place than when run under Windows...
What are the possible problems?
- Heap fragmentation? (How would I know? How can this be solved?)
- Heap corruption? (I have run the code with pageheap.exe enabled with no errors reported; implemented vector access with bounds checking --- again no errors; the code is essentially free of pointers, only std::vector and std::list are used. Running the program under Valgrind (memcheck) consumes too much memory and ends prematurely, but does not find any errors)
- Out of memory??? (There should be enough memory)
Moreover, what could be the reason that the Windows version fails while the Linux version works (and even on machines with less memory)? (Also note that the /LARGEADDRESSAWARE linker flag is used with VC++2008, if that can have any effect.)
Any ideas would be much appreciated, I am at my wits end with this... :-(
It has nothing to do with how much RAM is in your system. You are running out of virtual address space. A process on a 32-bit Windows OS gets a 4GB virtual address space (irrespective of how much RAM you have), of which 2GB is for user mode (3GB in the case of LARGEADDRESSAWARE) and 2GB is for the kernel. When you try to allocate memory using new, the OS will try to find a contiguous block of virtual memory large enough to satisfy the allocation request. If your virtual address space is badly fragmented, or you are asking for a huge block of memory, this will fail and a bad_alloc exception will be thrown. Check how much virtual memory your process is using.
With Windows XP x86 and the default settings, 1.2GB is about all the address space you have left for your heap after system libraries, your code, the stack and other stuff get their share. Note that LARGEADDRESSAWARE requires you to boot with the /3GB boot flag to try to give your process up to 3GB. The /3GB flag causes instability on a lot of XP systems, which is why it's not enabled by default.
Server variants of Windows x86 give you more address space, both by using the 3GB/1GB split and by using PAE to allow the use of your full 4GB of RAM.
Linux x86 uses a 3GB/1GB split by default.
A 64-bit OS would give you more address space, even for a 32-bit process.
Are you compiling in Debug mode? If so, the allocations will generate a huge amount of debugging data, which might produce the error you have seen as a genuine out-of-memory. Try a Release build to see if that solves the problem. I have only experienced this with VC, not MinGW, but then I haven't checked either, so this could still explain the problem.
To elaborate more on the virtual memory:
Your application fails when it tries to allocate a single 90MB array and there is no contiguous span of virtual memory left where it can fit. You might be able to get a little farther if you switched to data structures that use less memory -- perhaps some class that approximates a huge array by using a tree where all data is kept in 1MB (or so) leaf nodes. Also, in C++, when doing a huge number of allocations it really helps if all the big allocations are of the same size; this helps with reusing memory and keeps fragmentation much lower.
However, the correct thing to do in the long run is simply to switch to a 64-bit system.