Why can malloc() allocate more memory than mmap()?
I'm experimenting to see how much virtual memory I can allocate on 64-bit Linux, currently running Ubuntu via repl.it. I'm using some simple code to find this limit through experimentation by repeatedly calling realloc()
or mmap()
. I also use getrlimit()
with RLIMIT_AS
to query the OS for the maximum address space.
Here is the output:
Soft mem limit: 17592186044415 MB
Hard mem limit: 17592186044415 MB
----------------- Using mmap() -----------------
Trying 32768 MB.. Success.
Trying 65536 MB.. Failed.
----------------- Using realloc() -----------------
Trying 32768 MB.. Success.
Trying 65536 MB.. Success.
Trying 131072 MB.. Failed.
This surprised me for a few reasons, which should perhaps each be their own SO question:

- The title question: Why can realloc() allocate 64 GB while mmap() fails after 32 GB? Perhaps I'm misusing mmap() somehow?
- Why can't realloc() or mmap() come anywhere close to the memory limit? In a 64-bit process, I would expect hundreds of terabytes of virtual address space to be available.
- When removing PROT_WRITE and using only PROT_READ or PROT_NONE, mmap() manages to allocate up to 67108864 MB, which is around 64 terabytes (!). How does PROT_WRITE cause the allocation to fail? What use would this have (if any) with anonymous mappings?
Here is the code in full, in case that offers any insight:
#include <iostream>
#include <cerrno>   // errno, ENOMEM
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/resource.h>

const size_t KB = 1024;
const size_t MB = 1024 * 1024;
const size_t GB = 1024 * 1024 * 1024;

void usingMmap(size_t size);
void usingRealloc(size_t size);

int main() {
    rlimit limit;
    getrlimit(RLIMIT_AS, &limit);
    std::cout << "Soft mem limit: " << limit.rlim_cur / MB << " MB\n";
    std::cout << "Hard mem limit: " << limit.rlim_max / MB << " MB\n";

    std::cout << "----------------- Using mmap() -----------------\n";
    usingMmap(32 * GB);

    std::cout << "----------------- Using realloc() -----------------\n";
    usingRealloc(32 * GB);

    return 0;
}

void usingRealloc(size_t size) {
    void *p = NULL;
    while (true) {
        std::cout << "Trying " << size / MB << " MB.. ";
        p = realloc(p, size);
        if (p == NULL)
            break;
        std::cout << "Success.\n";
        size *= 2;
    }
    std::cout << "Failed.\n";
    if (errno != ENOMEM)
        perror("realloc");
}

void usingMmap(size_t size) {
    void *p = NULL;
    while (true) {
        std::cout << "Trying " << size / MB << " MB.. ";
        p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            break;
        std::cout << "Success.\n";
        if (munmap(p, size) == -1) {
            perror("munmap");
            exit(-1);
        }
        size *= 2;
    }
    std::cout << "Failed.\n";
    if (errno != ENOMEM)
        perror("mmap");
}
Note that changing the starting size up or down, or calling only one of either mmap() or malloc(), did not change this behavior. In fact, changing the starting value to 64 GB causes both mmap() and realloc() to fail. I'm starting to think it has more to do with how large an allocation each call can handle at once than with how much virtual address space a process is allowed to use.
(I know there's a lot of error-reporting code that distracts from the main point; I've kept it there to demonstrate that there aren't any unexpected errors coming from mmap(), munmap(), or realloc().)