The program stack and heap: how do they work?

Published 2024-08-18 02:15:07


I know that every running process has pages associated with it in virtual memory and few of them will be loaded into main memory as required. I also know that program will have a stack and also a heap to allocate dynamic memory. Here are my questions.

  1. Is stack also part of some page in main memory?
  2. What happens when the program is moved to waiting state? Where are the stack pointer, program counter and other info stored?
  3. Why stack grows down and heap grows up?
  4. Can L1, L2 cache contain only one chunk of contiguous memory, or can it have some part of stack and heap?

Can you recommend any good book that covers these things?


Comments (5)

忆梦 2024-08-25 02:15:07


Is stack also part of some page in main memory?

Yes - the stack lives in ordinary virtual-memory pages, just like the code and the heap. On most common systems the stack is placed near the "high" end of the process's address space and grows downward, while the heap starts at lower addresses and grows upward toward the stack. (Some systems reverse these directions; see the other answers.)

What happens when the program is moved to waiting state? Where are the stack pointer, program counter and other info stored?

The O/S stores a "context" per running process. The operation of saving and restoring process state is called a "context switch".

Why stack grows down and heap grows up?

Just a convention as far as I know. The stack doesn't really "grow" in the sense of acquiring new memory on demand; it typically has a fixed maximum reservation that simply fills up in one direction.

Can L1, L2 cache contain only one chunk of contiguous memory, or can it have some part of stack and heap?

Caches simply contain snapshots of parts of RAM that have been used (either recently or nearby). At any moment in time they can have memory from any part of the address space in them. What shows up where depends heavily on the structural parameters of the cache (block length, associativity, total size, etc.).

I would suggest Computer Architecture: A Quantitative Approach as a good reference on the underlying hardware and any book on Operating Systems for how the hardware is "managed".

面犯桃花 2024-08-25 02:15:07


This is my understanding of those questions:

  1. Is stack also part of some page in main memory?

    Yes, the stack is usually also stored in the process address space.

  2. What happens when the program is moved to waiting state, where is the stack pointer, program counter and other info stored?

When the operating system takes the process from active to waiting, it stores all registers (that includes the stack pointer and the program counter) in the kernel's process table. Then, when the process becomes active again, the OS copies all that information back into place.

  3. Why stack grows down and heap grows up?

That's because they usually have to share the same address space, and as a convenience they each begin at one end of the address space. Then they grow toward each other, giving that grow-down/grow-up behavior.

  4. Can L1,L2 cache contain only one chunk of contiguous memory or can it have some part of stack and heap?

    The CPU caches will store recently used chunks of the memory. Because both the stack and the heap are stored in main memory, the caches can contain portions of both.

萝莉病 2024-08-25 02:15:07


3. Why stack grows down and heap grows up?

Note that on some systems (some HP systems, for example), the stack grows up instead of down. And on other systems (e.g., IBM/390) there is no real hardware stack at all, but rather a pool of pages that are dynamically allocated from user space memory.

The heap can, in general, grow in any direction, since it may contain many allocation and deallocation holes, so it is better to think of it as a loose collection of pages than as a LIFO-stack type structure. That being said, most heap implementations expand their space usage within a predetermined address range, growing and shrinking it as necessary.

贩梦商人 2024-08-25 02:15:07


When one uses a protected mode operating system (like Windows or Linux), each process has whole bunch of memory pages made available to the given process. If more memory is required, more can be paged in.

Typically the process divides the memory given to it into two parts. One is the heap and the other is the stack. The current top of the stack is designated by the stack pointer: r13 (sp) on ARM and esp on x86. When one creates a variable on the stack, the stack pointer is moved to allow for the extra space needed. This is done by the assembly instruction PUSH. Similarly, when a variable goes out of scope it is popped off the stack.

Typically PUSH causes the stack pointer to be decremented, leaving the value above the stack pointer's value "on the stack".

The other portion of memory may be used for a heap. This is then available for allocation with the use of malloc or new. Each thread must have its own stack but may share the heap with other threads in the process.

When the kernel reschedules a thread, it stores the stack register and changes the stack register to point to the new thread's stack. It may or may not need to store the program counter separately, depending on the way it does scheduling.

The cache has nothing to do with either the stack or the heap specifically. It is managed by the processor and provides a way to ensure that data needed by the CPU is close at hand, so that it does not have to wait for the bus to fetch it. It is totally up to the CPU to ensure that what is in main memory is consistent with what is stored in the cache. The only time one really needs to worry about the cache is when using DMA. Then one has to manually flush or sync the cache to ensure that the CPU does not trust the cache and actually fetches data from main memory.

怀念你的温柔 2024-08-25 02:15:07


You should check out my professor's slides from my Architecture class, Unit 6. They really helped me understand all that you have asked and others have answered, and more, if you want a more in-depth knowledge.
