Why is stack overflow still a problem?



This question has been mystifying me for years and, considering this site's name, this is the place to ask.

Why do we, programmers, still have this StackOverflow problem?

Why in every major language does the thread stack memory have to be statically allocated on thread creation?

I will speak in the context of C#/Java, because I use them most, but this is probably a broader problem.

Fixed stack size leads to huge problems:

  • There is no way to write a recursive algorithm unless you are absolutely sure that the depth of recursion is tiny. The linear memory complexity of a recursive algorithm is often unacceptable.
  • There is no cheap way to start new threads. You have to allocate a huge block of memory for the stack to account for all the possible uses of the thread.
  • Even if you don't use very deep recursion, you always risk running out of stack space, because the stack size is an arbitrary fixed number. Considering that a stack overflow is usually unrecoverable, this is a big problem in my eyes.

Now, if the stack was resized dynamically, all of the problems above would be much alleviated, because stack overflow would only be possible when there is a memory overflow.

But this is not the case yet. Why? Are there some fundamental limitations of modern CPUs which would make it impossible/inefficient? If you think about the performance hit that reallocations would impose, it should be acceptable because people use structures like ArrayList all the time without suffering much.

So, the question is: am I missing something and stack overflow is not a problem, or am I missing something and there are in fact many languages with dynamic stacks, or is there some big reason why this is impossible/hard to implement?

Edit:
Some people said that performance would be a large problem, but consider this:

  • We leave the compiled code untouched. The stack access stays the same, thus the "usual case" performance stays the same.
  • We handle the CPU exception which happens when the code tries to access unallocated memory, and launch our "reallocation" routine. Reallocations won't be frequent because <put your usual ArrayList argument here>. This should work on most protected-mode CPUs without loss of performance. No? (A sketch of this idea follows below.)
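
For what it's worth, here is a minimal, Linux-specific sketch of the mechanism this edit describes: reserve a large region of address space up front and commit pages only when a fault lands inside it. All names and sizes are made up for illustration, and it glosses over details such as signal-safety and error checking; a real runtime would do this per thread stack and far more carefully.

#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>

static char*        g_base      = nullptr;          // start of the reserved region
static const size_t kReserve    = 64 * 1024 * 1024; // address space reserved up front
static const size_t kPage       = 4096;
static size_t       g_committed = 0;                 // bytes currently usable

static void on_fault(int, siginfo_t* info, void*) {
    char* addr = static_cast<char*>(info->si_addr);
    if (addr >= g_base && addr < g_base + kReserve) {
        // The fault landed inside our reserved region: commit pages up to it.
        size_t needed = ((addr - g_base) / kPage + 1) * kPage;
        mprotect(g_base, needed, PROT_READ | PROT_WRITE);
        g_committed = needed;
        return;  // the faulting instruction is retried and now succeeds
    }
    signal(SIGSEGV, SIG_DFL);  // a genuine bad access: let it crash normally
}

int main() {
    // Reserve lots of address space, but commit no physical memory yet.
    g_base = static_cast<char*>(mmap(nullptr, kReserve, PROT_NONE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    struct sigaction sa = {};
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, nullptr);

    g_base[0] = 1;           // faults once; the handler commits the first page
    g_base[10 * kPage] = 2;  // faults again; ten more pages get committed
    printf("committed %zu bytes on demand\n", g_committed);
    return 0;
}

This is essentially what operating systems already do for the main thread's stack (reserve address space, commit on demand); as the answers below point out, the hard part is moving a stack, not growing it in place.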


痴骨ら 2024-09-16 01:33:57

I've never personally encountered a stack overflow that wasn't caused by infinite recursion. In these cases, a dynamic stack size wouldn't help; it would just take a little longer to run out of memory.

一个人的夜不怕黑 2024-09-16 01:33:57

1) In order to resize stacks, you have to be able to move memory around, meaning that pointers to anything on a stack can become invalid after a stack resize (the sketch below, after point 3, illustrates this). Yes, you can use another level of indirection to solve this problem, but remember that the stack is used very, very frequently.

2) It makes things significantly more complicated. Push/pop operations on stacks usually work simply by doing some pointer arithmetic on a CPU register. That's why allocation on a stack is faster than allocation on the free store.

3) Some CPUs (microcontrollers in particular) implement the stack directly on hardware, separate from the main memory.
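
To make point 1) concrete, here is a small illustrative sketch that uses std::vector as a stand-in for a relocatable stack: growing the buffer may move it, and anything that took an address into the old buffer is left dangling. The same would happen to pointers into a thread stack if the runtime moved the stack in order to grow it.

#include <vector>
#include <iostream>

int main() {
    std::vector<int> stack;            // pretend this is a movable call stack
    stack.push_back(42);
    int* p = &stack[0];                // like taking the address of a local

    for (int i = 0; i < 1000; ++i)     // "deep calls" eventually force a reallocation
        stack.push_back(i);

    // After a reallocation, p almost certainly no longer points into the
    // vector's current buffer -- it is a dangling pointer.
    std::cout << "old address: " << static_cast<void*>(p)
              << "  current address: " << static_cast<void*>(&stack[0]) << "\n";
    return 0;
}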

Also, you can set the stack size of a thread when you create it using beginthread(), so if you find that the extra stack space is unnecessary, you can set the stack size accordingly.
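
As an aside, a minimal POSIX sketch of the same knob (the answer mentions the Windows beginthread() family; pthread_attr_setstacksize() plays the equivalent role on POSIX) might look like this:

#include <pthread.h>
#include <stdio.h>

static void* worker(void*) {
    puts("running on a deliberately small stack");
    return nullptr;
}

int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    // Ask for a 64 KiB stack instead of the platform default (commonly 1-8 MB).
    // Some systems will round this up to at least PTHREAD_STACK_MIN.
    pthread_attr_setstacksize(&attr, 64 * 1024);

    pthread_t tid;
    pthread_create(&tid, &attr, worker, nullptr);
    pthread_join(tid, nullptr);
    pthread_attr_destroy(&attr);
    return 0;
}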

From my experience, stack overflows are usually caused by infinite recursions or recursive functions that allocate huge arrays on the stack. According to MSDN, the default stack size set by the linker is 1MB (the header of executable files can set their own default), which seems to be more than big enough for a majority of cases.

The fixed-stack mechanism works well enough for a majority of applications, so there's no real need to change it. If it doesn't, you can always roll your own stack.

野心澎湃 2024-09-16 01:33:57

I can't speak for "major languages". Many "minor" languages do heap-allocated activation records, with each call using a chunk of heap space instead of a linear stack chunk. This allows recursion to go as deep as you have address space to allocate.

Some folks here claim that recursion that deep is wrong, and that using a "big linear stack" is just fine. That isn't right. I'd agree that if you have to use the entire address space, you do have a problem of some kind. However, when one has very large graph or tree structures, you want to allow deep recursion and you don't want to guess at how much linear stack space you need first, because you'll guess wrong.

If you decide to go parallel, and you have lots of them (thousands to millions of "grains" [think: small threads]), you can't have 10 MB of stack space allocated to each thread, because you'll be wasting gigabytes of RAM. How on earth could you ever have a million grains? Easy: lots of grains that interlock with one another; when a grain is frozen waiting for a lock, you can't get rid of it, and yet you still want to run other grains to use your available CPUs. This maximizes the amount of available work, and thus allows many physical processors to be used effectively.

The PARLANSE parallel programming language uses this very-large-number of parallel grains model, and heap allocation on function calls. We designed PARLANSE to enable the symbolic analysis and transformation of very large source computer programs (say, several million lines of code). These produce... giant abstract syntax trees, giant control/data flow graphs, giant symbol tables, with tens of millions of nodes. Lots of opportunity for parallel workers.

The heap allocation allows PARLANSE programs to be lexically scoped, even across parallelism boundaries, because one can implement "the stack" as a cactus stack, where forks occur in "the stack" for subgrains, and each grain can consequently see the activation records (parent scopes) of its callers. This makes passing big data structures cheap when recursing; you just reference them lexically.
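
A toy sketch of that idea (not PARLANSE itself; just heap-allocated activation records whose parent pointers form a cactus stack, with made-up names):

#include <memory>
#include <iostream>

struct Frame {
    std::shared_ptr<Frame> parent;  // the caller's / lexically enclosing frame
    int local;                      // some local state of this activation
};

// "Calling" a function allocates its activation record on the heap.
std::shared_ptr<Frame> call(std::shared_ptr<Frame> caller, int value) {
    return std::make_shared<Frame>(Frame{std::move(caller), value});
}

int main() {
    auto root  = call(nullptr, 0);
    auto left  = call(root, 1);  // two grains fork from the same parent frame:
    auto right = call(root, 2);  // the "stack" branches instead of being linear
    std::cout << left->local << " and " << right->local
              << " both see parent " << left->parent->local << "\n";
    return 0;
}

Each frame stays alive for as long as anything still references it, and a frame can be shared by several children - exactly the property that lets sub-grains see their callers' scopes across parallelism boundaries.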

One might think that heap allocation slows down the program. It does; PARLANSE pays about a 5% penalty in performance but gains the ability to process very large structures in parallel, with as many grains as the address space can hold.

绝影如岚 2024-09-16 01:33:57

Stacks are resized dynamically - or to be precise, grown dynamically. You get an overflow when a stack cannot grow any further, which is not to say it exhausted the address space, but rather that it has grown to conflict with a portion of memory used for other purposes (e.g., a process heap).

Maybe you mean that stacks cannot be moved dynamically? The root of that is probably that stacks are intimately coupled to the hardware. CPUs have registers and piles of logic dedicated to thread stack management (esp, ebp, call/return/enter/leave instructions on x86). If your language is compiled (or even jitted) you're bound to the hardware mechanism and cannot move stacks around.

This hardware 'limitation' is probably here to stay. Re-basing a thread stack during thread execution seems far from a reasonable demand to make of a hardware platform (and the added complexity would badly hamper all executed code on such an imaginary CPU, even compiled code). One can picture a completely virtualized environment where this limitation does not hold, but since such code couldn't be jitted, it would be unbearably slow. Not a chance you could do anything interactive with it.

淡莣 2024-09-16 01:33:57

Why do we, programmers, still have this StackOverflow problem?

A fixed-size stack is easy to implement and is acceptable for 99% of programs.
"Stack overflow" is a minor problem that is somewhat rare. So there is no real reason to change things. Also, it is not a language problem; it is more related to platform/processor design, so you'll have to deal with it.

There is no way to write a recursive algorithm unless you are absolutely sure that the depth of recursion is tiny. Linear memory complexity of the recursive algorithm is often unacceptable.

Now this is incorrect. In a recursive algorithm you can (almost?) always replace the actual recursive call with some kind of container - a list, std::vector, stack, array, FIFO queue, etc. - that will act like a stack. The calculation "pops" arguments from the end of the container and pushes new arguments onto either the end or the beginning of the container. Normally, the only limit on the size of such a container is the total amount of RAM.

Here is a crude C++ example:

#include <deque>
#include <iostream>

// Iterative factorial: the values a recursive fac() would visit are pushed
// into a std::deque, so depth is limited by heap memory, not the call stack.
size_t fac(size_t arg){
    std::deque<size_t> v;
    v.push_back(arg);
    // Collect arg, arg-1, ..., 2 -- the would-be "recursive calls".
    while (v.back() > 2)
        v.push_back(v.back() - 1);
    // Multiply them together instead of returning up a call chain.
    size_t result = 1;
    for (size_t i = 0; i < v.size(); i++)
        result *= v[i];
    return result;
}

int main(int argc, char** argv){
    int arg = 12;
    std::cout << " fac of " << arg << " is " << fac(arg) << std::endl;
    return 0;
}

Less elegant than recursion, but there is no stack-overflow problem. Technically, we're "emulating" recursion in this case. You can think of a stack overflow as a hardware limitation you have to deal with.
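
For a case closer to what usually overflows in practice, here is a sketch of the same trick applied to a tree walk: an explicit std::stack replaces the recursive calls, so the depth is bounded only by heap memory (Node here is a made-up type for illustration):

#include <stack>
#include <vector>
#include <iostream>

struct Node {
    int value;
    std::vector<Node*> children;
};

int sum_iterative(Node* root) {
    int total = 0;
    std::stack<Node*> pending;           // plays the role of the call stack
    if (root) pending.push(root);
    while (!pending.empty()) {
        Node* n = pending.top();
        pending.pop();
        total += n->value;
        for (Node* c : n->children)      // the "recursive calls" become pushes
            pending.push(c);
    }
    return total;
}

int main() {
    Node leaf1{1, {}}, leaf2{2, {}};
    Node root{3, {&leaf1, &leaf2}};
    std::cout << sum_iterative(&root) << std::endl;  // prints 6
    return 0;
}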

好久不见√ 2024-09-16 01:33:57

I am going to summarize the arguments in the answers so far because I find no answer covering this topic good enough.

Static stack investigation

Motivation

Not everyone needs it.

  • Most algorithms do not use deep recursion or a lot of threads, thus not a lot of people need dynamic stacks.
  • A dynamic stack would make an infinite-recursion stack overflow, which is an easy mistake to make, harder to diagnose. (A memory overflow, while being as deadly as a stack overflow to the current process, is hazardous for other processes as well.)
  • Every recursive algorithm can be emulated with a similar iterative one.

Implementation difficulties

Dynamic stack implementation turns out to be not as straightforward as it seems.

  • Stack resizing alone is not enough unless you have unlimited address space. You will sometimes need to relocate the stack as well.
  • Stack relocation would require updates for all the pointers to the data structures allocated on the stack. While it is straightforward (at least in managed languages) for the data in memory, there is no easy way to do the same for data in the CPU registers of the thread.
  • Some CPUs (microcontrollers in particular) implement the stack directly on hardware, separate from the main memory.

Existing implementations

There are some languages or runtime libraries that already have the dynamic stack feature or something similar to it.

  • Some runtime libraries (which?) do not pre-commit the entire block of memory allocated for the stack. This can alleviate the problem, especially for 64-bit systems, but not completely eliminate it.
  • Ira Baxter told us about PARLANSE, a language specifically designed for dealing with complex data structures with high degree of parallelism. It uses small heap-allocated "grains" of work instead of stack.
  • fuzzy lolipop told us that "Properly written Erlang doesn't have stackoverflows!"
  • Google Go programming language is said to have a dynamic stack. (a link would be nice)

I would like to see more examples here.

I hope I didn't forget any important pieces of information on this subject. Making this a community wiki so that anyone can add new information.

似最初 2024-09-16 01:33:57

I think we will see this restriction removed in a few years.

There is simply no fundamental technical reason for fixed-size stacks. They exist for historical reasons and because the programmers of compilers and VMs are lazy and don't optimize if it is good enough right now.

But Go, the Google language, already starts with a different approach: it allocates the stack in small 4K pieces. There are also many "stackless" programming-language extensions, like Stackless Python, that do the same.

The reason for this is quite simple: the more threads you have, the more address space is wasted. For programs which are slower with 64-bit pointers, it is a serious problem. In practice you can't really have more than a hundred threads. This is not good if you write a server which might want to serve 60,000 clients with a thread for each one (wait for the 100-core/CPU systems in the near future).

On 64-bit systems it's not so serious, but it still requires more resources. For example, TLB entries for pages are extremely important for good performance. If you can cover 4000 normal thread stacks with one single TLB entry (given a page size of 16 MB and 4 KB of active stack space per thread), you can see the difference. Don't waste 1020 KB just for stack that you almost never use.

Fine-grained multithreading will be a very, very important technique in the future.

染墨丶若流云 2024-09-16 01:33:57

Having practically infinite stack space would be very bad in the case of an infinite recursion, because it would turn an easily diagnosed error (stack overflow) into a much more problematic one (out of memory). With a stack overflow, a look at the stack trace will fairly quickly tell you what is going on. Alternatively, when the system is out of memory, it may attempt other ways of coping, such as using swap space, resulting in serious performance degradation.

On the other hand, I have rarely had issues with hitting the stack overflow barrier due to recursion. However, I can think of a couple of circumstances where it happened, and moving to my own stack implemented as a std::vector was a simple solution to the problem.

Now, what would be neat is if the language would allow me to mark a particular function as "heavily recursive", and then have it operate in its own stack space. That way I'd generally get the advantage of stopping when my recursion is out of whack, but I could still make use of extensive recursion when I wanted to.

南…巷孤猫 2024-09-16 01:33:57

Why in every major language does the thread stack memory have to be statically allocated on thread creation?

Stack size and allocation is not necessarily related to the language you are using. It is more a question of processor and architecture.

Stack Segments are limited to 4GB on current Intel processors.

The following link is a good read that may give you some of the answers you seek.

http://www.intel.com/Assets/PDF/manual/253665.pdf - Chapter 6.2

心奴独伤 2024-09-16 01:33:57

Old language implementations have a static stack size, thus most new popular languages (which just copied old languages and broke/fixed whatever they felt like) have the same issue.

There is no logical reason to have a static stack size unless you are in a formal methods setting. Why introduce faults where the code is correct? Erlang for example doesn't do this, because it handles faults, like any sane partial programming language should do.

指尖上得阳光 2024-09-16 01:33:57

Any code that would cause a stack overflow on a typical static-length stack is wrong anyway.

  • You could make the stack a std::vector-like object, but you'd have extremely unpredictable performance when it decided to resize -- and anyway, it would most likely just keep doing it until all the heap was exhausted too, and that's more annoying.
  • You could make it like a std::list, where it grows in O(1). However, the pointer arithmetic used on a static stack is so critical to program performance in every way that it would be uselessly slow. Languages were invented to have one return value and arbitrary numbers of input parameters because that's what fits the static stack/pointer-arithmetic paradigm.

So a dynamically resizable stack would be A) a performance nightmare and B) of no value anyway, since your stack shouldn't have gotten that deep.
