How to find the size of a chunk allocated by malloc in the glibc library?
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *ptr1 = NULL;
    char *newptr = NULL;

    ptr1 = (char *) malloc(8 * sizeof(int));
    if (ptr1 == NULL)
        exit(EXIT_FAILURE);
    printf("%p\n", ptr1);

    /* step back over the size field that precedes the returned pointer */
    newptr = ptr1 - sizeof(size_t);
    /* print the raw size field (includes the allocator's flag bits) */
    printf("%zu\n", (*(size_t *)newptr));

    free(ptr1);
    return 0;
}
Output:
0x...... [some address]
49
49 - 1 = 48; 48 - (8 * 4) = 16, assuming sizeof(int) == 4.

According to the glibc malloc() implementation, the maximum overhead per chunk is 8 bytes, but the size field of this chunk gives 48, and subtracting the usable memory gives 16. So the overhead appears to be 16 bytes.
I know something is going wrong, but I can't figure out where I'm making the wrong calculation. Please help. Thank you.
Comments (1)
This is one of those implementation-defined things that you shouldn't mess with except for experimentation or research purposes.
On GNU platforms a function called malloc_usable_size() exists in the non-C-standard header malloc.h. It will tell you the actual byte length of your malloc()ed chunk of heap, given some pointer that is the return value of malloc(), but its own documentation says: "Although the excess bytes can be overwritten by the application without ill effects, this is not good programming practice: the number of excess bytes in an allocation depends on the underlying implementation."
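For instance, a minimal sketch of how you might call it on an allocation like yours (this assumes a GNU/Linux system where malloc.h is available; the numbers it prints are an implementation detail):

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* non-standard, GNU-specific header */

int main(void)
{
    char *ptr1 = malloc(8 * sizeof(int));   /* request 32 bytes if sizeof(int) == 4 */
    if (ptr1 == NULL)
        exit(EXIT_FAILURE);

    /* ask the allocator how many bytes this block can actually hold */
    printf("requested: %zu, usable: %zu\n",
           8 * sizeof(int), malloc_usable_size(ptr1));

    free(ptr1);
    return 0;
}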
According to the C standard, the chunk of memory allocated by malloc() is only guaranteed to be as many bytes as you asked for, but in practice most, if not all, implementations give you a little extra for padding or for the purposes of the memory manager. I remember a computer science professor of mine demonstrating how, on her system, you could look at the memory right before the pointer returned by malloc(), just like you are doing, to read information about the chunk of memory, but the C programming language does not require malloc() to be implemented this way, nor does it guarantee how any such data would be encoded.

It seems you're using GNU, so if you want to learn more about how malloc() is implemented on your platform, take a peek at malloc_usable_size() and see if it really is giving you 48 bytes when you asked for 32. For the record, I wouldn't be at all surprised if it is.

I'm not sure what you're referring to when you say the maximum overhead is 8 bytes in glibc. That might mean that the maximum memory used by the heap manager for each chunk is 8 bytes, not including extra memory allocated for alignment?
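For what it's worth, a sketch of where the 49 could come from, assuming a typical 64-bit glibc (an assumption about your platform, not anything the C standard promises):

    requested:                8 * 4  = 32 bytes
    + 8-byte size field:      32 + 8 = 40 bytes
    rounded up to the 16-byte chunk alignment = 48 bytes  (the chunk size)
    with the PREV_INUSE flag set in the low bit = 49      (the value your program prints)

Under that assumption malloc_usable_size() would report 48 - 8 = 40 bytes, so the per-chunk bookkeeping overhead would be the 8-byte size field plus alignment padding rather than a flat 16 bytes.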