Is stack memory contiguous?

Published 2024-10-18 21:47:29

How does the compiler enforce that stack memory is contiguous? Does it cause memory to be moved around while the program is running, or does it reserve the stack memory the program needs before running it?


Comments (3)

等风也等你 2024-10-25 21:47:29

The stack for a given thread is often contiguous in virtual memory (on Linux and similar systems, and in user mode on Windows). The Windows kernel (Windows Vista and above) and z/OS allow discontiguous stacks in virtual memory, and GCC 4.6 allows that as well. The compiler does not need to move the stack around at all, even on systems whose stacks have discontiguous virtual addresses; they just change where new parts are allocated. The operating system may also map the stack's virtual pages to scattered physical pages, so even a stack that is contiguous in virtual memory need not be contiguous in physical memory.

可遇━不可求 2024-10-25 21:47:29

There is no requirement for the stack to be contiguous in the language, the OS, or the hardware.

I challenge anybody to cite a reference that explicitly says this is a requirement.

Now, a lot of implementations do use contiguous memory because it is simple. This is also how the stack concept is taught to CS students (the stack grows down, the heap expands up). But there is no requirement to do this. I believe that MS even experimented with placing stack frames at random locations in the heap to prevent attacks that used deliberate stack-smashing techniques.

The only requirement of the stack is that frames are linked, which allows the stack to push/pop frames as scopes are entered/left.

But this is all orthogonal to the original question.

The compiler does not try to force the stack to be in contiguous memory. There is no requirement at the language level that the stack be contiguous.

"How is the stack usually implemented?"

If that had been the question, you would have gotten a more detailed and accurate answer from the community.

画骨成沙 2024-10-25 21:47:29

You have your memory address space, let's say it runs from 1 to 100. You allocate your stack from 1 upwards and you allocate your heap from 100 downwards. Ok so far?

Due to the very nature of the stack, it's always compact (it has no holes). That happens because everything in the stack is the context of some function that was called. Whenever a function exits, its context is removed from the top of the stack and we fall back to the previous function. I think you can understand it well if you get a debugger and just follow the function calls while keeping in mind how the stack must look.

The heap, on the other hand, is not so well behaved. Let's say that we have reserved memory from 70 to 100 for the heap. We may allocate a block of 4 bytes there and it might go from 70 to 74; then we allocate 4 bytes more, and now we have memory allocated from 70 to 78. But that memory may be deallocated at any point in the program. So you might deallocate the 4 bytes you allocated at the beginning, thus creating a hole.

That's how things happen in your address space. The kernel keeps a table that maps pages in the address space to pages in real memory. As you have probably noticed, you can't hope to have everything set up that nicely when you have more than one program running. So what the kernel does is make each process think its whole address space is contiguous memory (let's not think about memory-mapped devices for now), even though it might be mapped non-contiguously in physical memory.

I hope to have given a reasonable overview of the subject, but there are probably better authors than me that you'll enjoy reading much more. So look for texts on virtual memory; that might be a nice starting point for understanding what you want. There are several books that describe it in greater or lesser detail. A few that I know of: Structured Computer Organization, by Tanenbaum; Operating System Concepts, by Silberschatz. I'm pretty sure Knuth discusses it in his algorithm books as well. If you feel adventurous, you might try reading about the x86 implementation of it in the Intel manuals.
