How can a medium-sized memory allocation fail in a 64-bit process on Mac OS X?

Posted 2024-10-19 16:25:30

I'm building a photo book layout application. The application frequently decompresses JPEG images into in-memory bitmap buffers. The images are capped at 100 megapixels, though they usually do not exceed 15 megapixels.

Sometimes memory allocations for these buffers fail: [[NSMutableData alloc] initWithLength:] returns nil. This seems to happen in situations where the system's free physical memory approaches zero.
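
For concreteness, a minimal sketch of the call site, assuming hypothetical pixel dimensions read from the JPEG header and four bytes per pixel (at 100 megapixels that works out to roughly 400 MB):

#import <Foundation/Foundation.h>

// Hypothetical dimensions taken from the JPEG header; 4 bytes per pixel (RGBA).
static NSMutableData *CreateBitmapBuffer(NSUInteger pixelsWide, NSUInteger pixelsHigh)
{
    NSUInteger length = pixelsWide * pixelsHigh * 4;
    NSMutableData *bitmap = [[NSMutableData alloc] initWithLength:length];
    // bitmap is nil here in the failure case described above.
    return bitmap;
}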

My understanding of the virtual memory system in Mac OS X was that an allocation in a 64-bit process virtually (sic) can't fail. There are 16 exabytes of address space, of which I'm trying to allocate at most 400 megabytes at a time. Theoretically I could allocate 40 billion of these buffers without hitting the hard limit of the available address space. Of course, practical limits would prevent this scenario, as swap space is constrained by the boot volume's size. In reality I'm only making a handful of these allocations (fewer than ten).

What I do not understand is why the allocation fails no matter how low physical memory is at that point. I thought that, as long as there is swap space left, memory allocation would not fail (since the pages are not even mapped at that point).

The application is garbage collected.

Edit:

I had time to dig into this problem a little further and here are my findings:

  1. The problem only occurs in a garbage-collected process.
  2. When the allocation from NSMutableData fails, a plain malloc still succeeds in allocating the same amount of memory (see the sketch after this list).
  3. The error always happens when overall physical memory approaches zero (swapping is about to take place).
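
A minimal sketch of the check behind finding 2, assuming a hypothetical buffer length of about 400 MB (CompareAllocators is an illustrative helper, not part of the application):

#import <Foundation/Foundation.h>
#include <stdlib.h>

// In the failing situations the collected allocation returns nil while a
// plain malloc of the same size succeeds.
static void CompareAllocators(NSUInteger length)
{
    NSMutableData *collected = [[NSMutableData alloc] initWithLength:length];
    void *raw = malloc(length);
    NSLog(@"NSMutableData: %@  malloc: %@",
          collected ? @"ok" : @"nil",
          raw != NULL ? @"ok" : @"NULL");
    free(raw);
}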

I assume NSData uses NSAllocateCollectable to perform the allocation instead of malloc when running under garbage collection.
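
If that assumption holds, it can be tested directly with a sketch like the one below; NSAllocateCollectable is the Foundation entry point for collector-managed memory, and passing 0 for the options (no NSScannedOption) should be appropriate for pointer-free pixel data:

#import <Foundation/Foundation.h>

// Allocate directly from the collector. If this fails in the same
// low-memory situations, the collector's allocator is the common factor.
static void *CollectedBuffer(NSUInteger length)
{
    void *buffer = NSAllocateCollectable(length, 0);
    if (buffer == NULL) {
        NSLog(@"NSAllocateCollectable failed for %lu bytes", (unsigned long)length);
    }
    return buffer;
}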

My conclusion from all this is that the collector is unable to allocate big chunks of memory when physical memory is low, which, again, I don't understand.

6 Answers

娇纵 2024-10-26 16:25:30

The answer lies in the implementation of libauto.

As of OS X 10.6, an arena of 8 GB is allocated for garbage-collected memory on 64-bit platforms. This arena is split in half: one half serves large allocations (>= 128 KB), the other small (< 2048 bytes) and medium (< 128 KB) allocations.

So in effect, on 10.6 you have 4 GB of memory available for large allocations of garbage-collected memory. On 10.5 the arena was 32 GB, but Apple lowered it to 8 GB in 10.6.
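
If those figures are accurate, the arithmetic fits the symptoms in the question: a 4 GB half-arena holds at most ten 400 MB buffers (4096 MB / 400 MB ≈ 10), and fewer in practice once the space is fragmented, so "fewer than ten" live bitmap buffers can already exhaust the large-allocation arena.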

或十年 2024-10-26 16:25:30

Another guess, but it may be that your colleague's machine is configured with a stricter per-user-process memory limit. To check, type

ulimit -a

into a console. For me, I get:

~ iainmcgin$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 266
virtual memory          (kbytes, -v) unlimited

From my settings above, it seems there is no per-process limit on memory usage. This may not be the case for your colleague, for some reason.
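
If it helps, the same limits can be queried from inside the process; a minimal sketch using the POSIX getrlimit() call (RLIM_INFINITY corresponds to the "unlimited" entries above):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit vm_limit;
    // RLIMIT_AS is the address-space (virtual memory) limit shown by ulimit -v.
    if (getrlimit(RLIMIT_AS, &vm_limit) == 0) {
        if (vm_limit.rlim_cur == RLIM_INFINITY)
            printf("virtual memory: unlimited\n");
        else
            printf("virtual memory: %llu kbytes\n",
                   (unsigned long long)(vm_limit.rlim_cur / 1024));
    }
    return 0;
}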

I'm using Snow Leopard:

~ iainmcgin$ uname -rs
Darwin 10.6.0

不及他 2024-10-26 16:25:30

Even though a 64-bit computer can theoretically address 18 EB, current processors are limited to 256 TB (48-bit virtual addresses). Of course, you aren't reaching this limit either. But the amount of memory your process can use at one time is limited to the amount of RAM available. The OS may also limit the amount of RAM you can use. According to the link you posted, "Even for computers that have 4 or more gigabytes of RAM available, the system rarely dedicates this much RAM to a single process."

白云悠悠 2024-10-26 16:25:30

You may be running out of swap space. Even though you have a swap file and virtual memory, the amount of swap space available is still limited by the space on your hard disk for swap files.

旧梦荧光笔 2024-10-26 16:25:30

It could be a memory fragmentation issue. Perhaps there is no single contiguous 400 MB chunk available at the time of allocation?

You could try to allocate these large chunks at the very start of your application's life cycle, before the heap gets a chance to become fragmented by numerous smaller allocations.
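
A minimal sketch of that idea, assuming a hypothetical pool of two worst-case (~400 MB) buffers created at launch and reused for every decompression:

#import <Foundation/Foundation.h>

// Hypothetical pre-allocated pool: grab the buffers once, early, before the
// heap has a chance to fragment, and hand them out for reuse afterwards.
static NSMutableArray *gBitmapBufferPool = nil;

static void SetUpBitmapBufferPool(void)
{
    gBitmapBufferPool = [[NSMutableArray alloc] init];
    for (NSUInteger i = 0; i < 2; i++) {
        NSMutableData *buffer =
            [[NSMutableData alloc] initWithLength:400 * 1024 * 1024];
        if (buffer != nil) {
            [gBitmapBufferPool addObject:buffer];
        }
    }
}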

み零 2024-10-26 16:25:30

initWithBytes:length: tries to allocate its entire length in active memory, essentially equivalent to malloc() of that size. If the length exceeds available memory, you will get nil. If you want to use large files with NSData, I'd recommend initWithContentsOfMappedFile: or similar initializers, as they use the VM system to pull parts of the file in and out of active memory when needed.
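
A minimal usage sketch of that suggestion, assuming a hypothetical jpegPath pointing at the image file on disk; pages of the mapped file are brought into physical memory lazily as they are touched, rather than allocated up front:

#import <Foundation/Foundation.h>

// jpegPath is a hypothetical path to the image file on disk.
static NSData *MappedImageData(NSString *jpegPath)
{
    return [[NSData alloc] initWithContentsOfMappedFile:jpegPath];
}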
