What is the maximum recursion depth, and how do I increase it?
I have this tail recursive function here:

def recursive_function(n, sum):
    if n < 1:
        return sum
    else:
        return recursive_function(n-1, sum+n)

c = 998
print(recursive_function(c, 0))

It works up to n=997, then it just breaks and spits out a RecursionError: maximum recursion depth exceeded in comparison. Is this just a stack overflow? Is there a way to get around it?
It is a guard against a stack overflow, yes. Python (or rather, the CPython implementation) doesn't optimize tail recursion, and unbridled recursion causes stack overflows. You can check the recursion limit with sys.getrecursionlimit() and change it with sys.setrecursionlimit(), but doing so is dangerous -- the standard limit is a little conservative, but Python stack frames can be quite big.
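For instance (the numbers below are only illustrative):

import sys

print(sys.getrecursionlimit())   # usually 1000 by default

sys.setrecursionlimit(2000)      # raise it, accepting the risk of a real C-stack overflow
print(sys.getrecursionlimit())   # 2000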
Python isn't a functional language and tail recursion is not a particularly efficient technique. Rewriting the algorithm iteratively, if possible, is generally a better idea.
Looks like you just need to set a higher recursion depth:
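Presumably something along these lines (the exact value is an assumption; it just has to exceed the depth you actually recurse to):

import sys

sys.setrecursionlimit(1500)   # the default of 1000 is what makes n=998 fail

def recursive_function(n, sum):
    if n < 1:
        return sum
    else:
        return recursive_function(n-1, sum+n)

print(recursive_function(998, 0))   # 498501 instead of a RecursionError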
If you often need to change the recursion limit (e.g. while solving programming puzzles) you can define a simple context manager like this:
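A minimal sketch of such a context manager (the class name recursionlimit is a placeholder, not necessarily the answerer's exact code):

import sys

class recursionlimit:
    def __init__(self, limit):
        self.limit = limit

    def __enter__(self):
        # Remember the old limit so it can be restored afterwards.
        self.old_limit = sys.getrecursionlimit()
        sys.setrecursionlimit(self.limit)

    def __exit__(self, exc_type, exc_value, traceback):
        sys.setrecursionlimit(self.old_limit)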
Then to call a function with a custom limit you can do:
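For example (the limit and the call are illustrative):

with recursionlimit(3000):
    print(recursive_function(2000, 0))
# the previous limit is restored here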
On exit from the body of the with statement the recursion limit will be restored to the default value.

P.S. You may also want to increase the stack size of the Python process for big values of the recursion limit. That can be done via the ulimit shell builtin or the limits.conf(5) file, for example.
It's to avoid a stack overflow. The Python interpreter limits the depth of recursion to help you avoid infinite recursions that would result in stack overflows.

Try increasing the recursion limit (sys.setrecursionlimit) or rewriting your code without recursion. As the Python documentation puts it, the recursion limit is the maximum depth of the Python interpreter stack, and it exists to prevent infinite recursion from causing an overflow of the C stack and crashing Python.
resource.setrlimit must also be used to increase the stack size and prevent segfaults

The Linux kernel limits the stack of processes. Python stores local variables on the stack of the interpreter, and so recursion takes up stack space of the interpreter. If the Python interpreter tries to go over the stack limit, the Linux kernel kills it with a segmentation fault.

The stack limit size is controlled with the getrlimit and setrlimit system calls. Python offers access to those system calls through the resource module.

sys.setrecursionlimit, mentioned e.g. at https://stackoverflow.com/a/3323013/895245, only increases the limit that the Python interpreter imposes on its own stack size; it does not touch the limit imposed by the Linux kernel on the Python process.

Example program:
main.py
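A sketch along those lines (the command-line handling and the exact limit values are assumptions):

#!/usr/bin/env python
import resource
import sys

def recursive_function(n, sum):
    if n < 1:
        return sum
    return recursive_function(n - 1, sum + n)

if __name__ == '__main__':
    depth = int(sys.argv[1])
    # Raise the kernel-imposed stack limit (the soft limit) up to the hard limit.
    soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
    resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))
    # Raise the interpreter's own recursion limit as well; both limits matter.
    sys.setrecursionlimit(depth + 100)
    print(recursive_function(depth, 0))

Invoked as, say, python main.py 100000, it can recurse far deeper than the defaults allow, provided the hard stack limit is large enough.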
Of course, if you keep increasing setrlimit, your RAM will eventually run out, which will either slow your computer to a halt due to swap madness, or kill Python via the OOM Killer.

From bash, you can see the current stack limit (in kB) with ulimit -s and set a new one by passing a value, e.g. ulimit -s unlimited. The default value for me is 8 MiB.

Tested on Ubuntu 16.10, Python 2.7.12.
Use a language that guarantees tail-call optimisation. Or use iteration. Alternatively, get cute with decorators.
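One way to get cute, as an illustration only (a trampoline-style decorator, not necessarily what the answerer had in mind):

import functools

class _TailCall(Exception):
    """Internal signal carrying the arguments of the next tail call."""
    def __init__(self, args, kwargs):
        self.call_args = args
        self.call_kwargs = kwargs

def tail_recursive(func):
    # Re-run func in a loop whenever it requests a tail call, so the Python
    # call stack never grows with the recursion depth.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        while True:
            try:
                return func(*args, **kwargs)
            except _TailCall as c:
                args, kwargs = c.call_args, c.call_kwargs

    def call(*args, **kwargs):
        raise _TailCall(args, kwargs)

    wrapper.call = call
    return wrapper

@tail_recursive
def recursive_function(n, sum):
    if n < 1:
        return sum
    return recursive_function.call(n - 1, sum + n)

print(recursive_function(10**6, 0))   # 500000500000, far beyond the default limit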
I realize this is an old question but for those reading, I would recommend against using recursion for problems such as this - lists are much faster and avoid recursion entirely. I would implement this as:
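Presumably something like the following (a sketch, not the answerer's exact code):

try:
    xrange          # Python 2
except NameError:
    xrange = range  # Python 3

def fib(n):
    # Build the whole sequence in a list; no recursion involved.
    seq = [0, 1]
    for _ in xrange(n):
        seq.append(seq[-1] + seq[-2])
    return seq[n]

print(fib(998))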
(Use n+1 in xrange if you start counting your fibonacci sequence from 0 instead of 1.)
I had a similar issue with the error "Max recursion depth exceeded". I discovered the error was being triggered by a corrupt file in the directory I was looping over with os.walk. If you have trouble solving this issue and you are working with file paths, be sure to narrow it down, as it might be a corrupt file.
Of course, Fibonacci numbers can be computed in O(n) by applying the Binet formula:
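Roughly like this (a sketch; floating point makes it inexact for large n):

import math

def fib_binet(n):
    sqrt5 = math.sqrt(5)
    # Binet's formula: F(n) = ((1 + sqrt5)**n - (1 - sqrt5)**n) / (2**n * sqrt5)
    return int(round(((1 + sqrt5)**n - (1 - sqrt5)**n) / (2**n * sqrt5)))

print(fib_binet(10))   # 55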
As the commenters note, it's not O(1) but O(n) because of 2**n. Also, a difference is that you only get one value, while with recursion you get all values of Fibonacci(n) up to that value.
If you want to get only a few Fibonacci numbers, you can use the matrix method.
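A sketch of how that can look (the exact code is an assumption):

import numpy as np

def fib_matrix(n):
    # [[1, 1], [1, 0]] ** n == [[F(n+1), F(n)], [F(n), F(n-1)]]
    m = np.matrix([[1, 1], [1, 0]], dtype=object)   # object dtype keeps exact Python ints
    return (m ** n)[0, 1]

print(fib_matrix(10))     # 55
print(fib_matrix(1000))   # exact big integer, computed with O(log n) matrix multiplications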
It's fast because numpy uses a fast exponentiation algorithm, so you get the answer in O(log n). And it's better than Binet's formula because it uses only integers. But if you want all Fibonacci numbers up to n, then it's better to do it by memoization.
We can do that using the @lru_cache decorator and the setrecursionlimit() method:
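For instance (the limit and the cached function are illustrative):

import sys
from functools import lru_cache

sys.setrecursionlimit(3000)    # example value; the first call still recurses n levels deep

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(2000))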
Source: functools.lru_cache
RecursionError: maximum recursion depth exceeded in comparison
Solution:

First, it's good to know that when you execute a recursive function in Python on a large input (> 10^4), you might encounter a "maximum recursion depth exceeded" error.

The sys module in Python has a function getrecursionlimit() that shows the recursion limit in your Python version. The default in some versions of Python is 1000, and in some others it was 1500.

You can change this limit, but it's very important to know that if you increase it too much you will run into stack/memory overflow errors. So be careful before increasing it. You can use setrecursionlimit() to increase this limit in Python.

Please follow this link for more information about what causes this issue:
https://elvand.com/quick-sort-binary-search/
As @alex suggested, you could use a generator function to do this sequentially instead of recursively.
Here's the equivalent of the code in your question:
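A sketch of one way to write it (the generator name is a placeholder):

def running_sum(n):
    # Yields 1, 1+2, 1+2+3, ... so the caller pulls values without any recursion.
    total = 0
    for i in range(1, n + 1):
        total += i
        yield total

c = 998
result = 0
for result in running_sum(c):
    pass
print(result)   # 498501, the same as recursive_function(c, 0)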
Edit: 6 years later I realized my "Use generators" was flippant and didn't answer the question. My apologies.
I guess my first question would be: do you really need to change the recursion limit? If not, then perhaps my answer or any of the other answers that don't deal with changing the recursion limit will apply. Otherwise, as noted, override the recursion limit using sys.setrecursionlimit(n).

Use generators?
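Presumably along the lines of the classic generator version (a sketch):

from itertools import islice

def fib():
    # Infinite generator: 0, 1, 1, 2, 3, 5, ...
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(fib(), 10)))   # the first ten Fibonacci numbers, no recursion at all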
The fib() function above is adapted from Introduction to Python Generators.
Many recommend that increasing the recursion limit is a good solution; however, it is not, because there will always be a limit. Instead, use an iterative solution.
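For the question's function, an iterative version could look like this (the name is a placeholder):

def iterative_sum(n):
    # Same result as recursive_function(n, 0), with a plain loop.
    total = 0
    while n >= 1:
        total += n
        n -= 1
    return total

print(iterative_sum(998))     # 498501
print(iterative_sum(10**6))   # no recursion limit to worry about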
I wanted to give you an example of using memoization to compute Fibonacci, as this will allow you to compute significantly larger numbers using recursion:
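For example (a sketch; the limit value and the dictionary-based cache are assumptions):

import sys

sys.setrecursionlimit(3000)    # example value; the first call still recurses n levels deep

fib_cache = {0: 0, 1: 1}

def fib(n):
    # Reuse previously calculated values instead of recomputing them.
    if n not in fib_cache:
        fib_cache[n] = fib(n - 1) + fib(n - 2)
    return fib_cache[n]

print(fib(2000))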
This is still recursive, but it uses a simple hash table that allows the reuse of previously calculated Fibonacci numbers instead of computing them again.
I'm not sure whether I'm repeating someone, but some time ago some good soul wrote a Y-operator for recursively called functions; the recursive function then has to be written in a particular form, and for Fibonacci numbers it follows the usual recursive definition. The output comes out correctly (actually tons of digits).
We could also use a variation of the dynamic programming bottom-up approach, as sketched below.
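For example (a sketch of a bottom-up Fibonacci table):

def fib_bottom_up(n):
    # Fill the table from the smallest subproblems upward instead of recursing down.
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(998))   # no recursion, so no recursion limit to hit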