How is heap and stack memory managed, implemented, and allocated?

Posted 2024-07-29 05:02:19

In C/C++ we can store variables, functions, member functions, and instances of a class either on the stack or on the heap.

How is each implemented? How is it managed (at a high level)? Does gcc preallocate a chunk of memory to be used for the stack and heap, and then dole it out on request? Does the original memory come from RAM?

Can a function be allocated on the heap instead of a stack?

Clarification

I am really asking about the implementation and management of heap and stack memory. After reading the referenced question, I didn't find anything that addresses that... thanks for the link.

Comments (3)

离不开的别离 2024-08-05 05:02:19

Modern operating systems do not give you direct access to hardware RAM; instead they abstract it into so-called virtual memory, which is mapped to RAM on demand. Each process is usually given its own private copy of the complete address space. This allows the OS to move a process's memory around in RAM at runtime or even swap it out to disk. This happens transparently, i.e. a process is not notified of such a relocation and need not have code to handle it. (Some real-time applications use techniques to prevent their memory from being swapped out.)
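As an aside, one such technique on POSIX systems is locking the process's pages into RAM with mlockall(). A minimal sketch, assuming a Linux/POSIX build with sufficient privileges or a raised RLIMIT_MEMLOCK:

    #include <sys/mman.h>   // mlockall, munlockall, MCL_CURRENT, MCL_FUTURE
    #include <cstdio>

    int main() {
        // Ask the kernel to keep all current and future pages of this process
        // resident in RAM (never swapped out). Usually needs elevated
        // privileges or a raised RLIMIT_MEMLOCK.
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            std::perror("mlockall");
            return 1;
        }
        // ... real-time work that must not incur page faults ...
        munlockall();   // release the locks when no longer needed
        return 0;
    }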

When linking object files into an executable or a dynamic library, the linker statically allocates memory for the CPU instructions of every function/method and for all global variables. When the OS loads the executable or dynamic library, it maps this pre-allocated memory into real memory.
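One way to get a feel for these regions is to print a few addresses. The exact values and layout are platform-dependent; the sketch below merely assumes a typical desktop build, where the function and the global live in the mapped executable image while the local and the new-ed object land on the stack and the heap:

    #include <cstdio>

    int global_counter = 0;           // statically allocated in the data segment

    void some_function() {}           // machine code lives in the text segment

    int main() {
        int  local    = 0;            // lives on this thread's stack
        int* heap_obj = new int(42);  // lives on the heap

        // Casting a function pointer to void* is not strictly portable,
        // but works on common POSIX toolchains and is enough for a demo.
        std::printf("function : %p\n", (void*)&some_function);
        std::printf("global   : %p\n", (void*)&global_counter);
        std::printf("stack    : %p\n", (void*)&local);
        std::printf("heap     : %p\n", (void*)heap_obj);

        delete heap_obj;
        return 0;
    }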

On startup, each thread receives a private memory area called the stack. Each time you call a function/method, the compiler inserts code to automatically allocate (by moving the stack pointer) enough memory from the stack to hold all parameters, local variables and the return value (if any) the function/method uses. If the compiler determines that it is sufficient to keep some variables in processor registers, it does not allocate stack memory for them. When the function/method returns, it runs code generated by the compiler to free (by moving the stack pointer back) this memory. Note that the destructors of any objects on the stack are called when the block they are defined in exits, which might be long before the function returns. Also, the compiler is free to reuse the allocated memory as it sees fit.
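You can watch the stack pointer move by printing the address of a local at different call depths. Nothing in the standard guarantees the direction, but on most mainstream platforms the stack grows toward lower addresses, so deeper frames print smaller addresses; a small sketch:

    #include <cstdio>

    void recurse(int depth) {
        int local = depth;  // each call gets its own slot in a fresh stack frame
        std::printf("depth %d: local at %p\n", depth, (void*)&local);
        if (depth < 3) {
            recurse(depth + 1);
        }
        // When this call returns, its frame (and 'local') is released simply
        // by moving the stack pointer back; no per-object bookkeeping occurs.
    }

    int main() {
        recurse(0);
        return 0;
    }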

When an exception is thrown, the compiler inserts special code that knows the layout of the stack and can unwind it until a suitable exception handler is found.
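The practical consequence is that destructors of stack objects run while the exception propagates, before the handler executes; for example:

    #include <cstdio>
    #include <stdexcept>

    struct Guard {
        ~Guard() { std::puts("Guard destroyed during unwinding"); }
    };

    void fails() {
        Guard g;                           // lives on this frame's stack
        throw std::runtime_error("boom");  // unwinding destroys 'g' on the way out
    }

    int main() {
        try {
            fails();
        } catch (const std::exception& e) {
            std::printf("caught: %s\n", e.what());
        }
        return 0;
    }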

As opposed to this, memory on the heap is allocated and released with new/delete, for which the compiler inserts code that requests or returns memory via a system library.
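A minimal example of that: on most toolchains operator new ultimately obtains its memory from the runtime's heap manager (often built on malloc), which in turn asks the OS for more address space when needed:

    #include <string>

    int main() {
        // 'new' asks the heap manager for sizeof(std::string) bytes and then
        // runs the constructor in that storage.
        std::string* s = new std::string("lives on the heap");

        // 'delete' runs the destructor and hands the memory back to the heap
        // manager, which may keep it for later requests rather than returning
        // it to the OS immediately.
        delete s;
        return 0;
    }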

Please note that this is a simplified description to give you an idea of how memory allocation works.

毁梦 2024-08-05 05:02:19

Basically the heap is not implemented by the compiler, but by the C runtime library. Obviously this code is very platform dependent. On Unix or Unix-like systems the implementation is generally based on the sbrk/brk system calls, and a larger amount of memory is allocated at a time to reduce the number of system calls. This memory is then managed by the heap memory manager. If more memory is required, a new call to sbrk is issued. The current end address of the heap can be obtained with sbrk(0), which is handy if you are interested in debugging the heap management routines. Most memory managers do not return memory to the OS during the lifetime of a process (the GNU C runtime library does if certain constraints are met).
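If you want to watch this in action, the sketch below (Linux/glibc assumed) samples the program break with sbrk(0) before and after a batch of small mallocs. Note that glibc grows the break in larger steps and serves big allocations via mmap instead, so the break does not necessarily move for every single malloc:

    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>   // sbrk

    int main() {
        void* before = sbrk(0);   // current end of the heap (program break)

        // Many small allocations; the allocator grows the break in larger
        // chunks rather than once per malloc. (Leaked on purpose: demo only.)
        for (int i = 0; i < 1000; ++i) {
            (void)std::malloc(1024);
        }

        void* after = sbrk(0);
        std::printf("break before: %p\n", before);
        std::printf("break after : %p (grew by ~%ld bytes)\n",
                    after, (long)((char*)after - (char*)before));
        return 0;
    }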

A more detailed description is available at http://gee.cs.oswego.edu/dl/html/malloc.html.

祁梦 2024-08-05 05:02:19

Some very interesting reading

FreeRTOS: Memory Management

Arm C and C++ Libraries and Floating-Point Support User Guide

2.13.2 Choosing a heap implementation for memory allocation functions

malloc(), realloc(), calloc(), and free() are built on a heap abstract data type. You can choose between Heap1 or Heap2, the two provided heap implementations.

The available heap implementations are:

  • Heap1, the default implementation, implements the smallest and simplest heap manager.

  • Heap2 provides an implementation with the performance cost of malloc() or free() growing logarithmically with the number of free blocks.

Heap1, the default implementation, implements the smallest and simplest heap manager. The heap is managed as a single-linked list of free blocks that are held in increasing address order. This implementation has low overheads. However, the performance cost of malloc() or free() grows linearly with the number of free blocks and might be too slow for some use cases.

If you expect more than 100 unallocated blocks, Arm recommends that you use Heap2 when you require near constant-time performance.

Heap2 provides an implementation with the performance cost of malloc() or free() growing logarithmically with the number of free blocks. Heap2 is recommended when you require near constant-time performance in the presence of hundreds of free blocks.
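To make the Heap1 description above concrete, here is a toy first-fit allocator over a fixed buffer, in the same spirit as a free list kept in increasing address order. It is purely illustrative (not Arm's code) and omits block splitting, coalescing and fine-grained alignment:

    #include <cstddef>
    #include <cstdint>

    // Toy first-fit allocator: free blocks form a singly linked list ordered
    // by address, so the cost of alloc/free grows linearly with the number of
    // free blocks (the behaviour described for Heap1).
    struct Block {
        std::size_t size;   // usable bytes in this free block (excluding header)
        Block*      next;   // next free block, in increasing address order
    };

    alignas(std::max_align_t) static std::uint8_t pool[64 * 1024];
    static Block* free_list = nullptr;

    void heap_init() {       // call once before using toy_malloc/toy_free
        free_list = reinterpret_cast<Block*>(pool);
        free_list->size = sizeof(pool) - sizeof(Block);
        free_list->next = nullptr;
    }

    void* toy_malloc(std::size_t n) {
        Block** link = &free_list;
        for (Block* b = free_list; b != nullptr; link = &b->next, b = b->next) {
            if (b->size >= n) {          // first block that fits
                *link = b->next;         // unlink it (no splitting, to stay short)
                return b + 1;            // user memory starts after the header
            }
        }
        return nullptr;                  // out of memory
    }

    void toy_free(void* p) {
        Block*  b    = reinterpret_cast<Block*>(p) - 1;
        Block** link = &free_list;
        while (*link != nullptr && *link < b) {  // keep the list address-ordered
            link = &(*link)->next;
        }
        b->next = *link;                 // (a real allocator would also coalesce
        *link   = b;                     //  adjacent free blocks here)
    }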
