How to avoid stack space overflows?
I've been a bit surprised by GHC throwing stack overflows when I need to get the value of a large list containing memory-intensive elements. I expected GHC to have TCO, so I'd never run into such situations.
To simplify the case as much as possible, look at the following straightforward implementations of functions returning Fibonacci numbers (taken from the HaskellWiki). The goal is to display the millionth number.
import Data.List

-- elegant recursive definition
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- a bit tricky using unfoldr from Data.List
fibs' = unfoldr (\(a,b) -> Just (a,(b,a+b))) (0,1)

-- version using iterate
fibs'' = map fst $ iterate (\(a,b) -> (b,a+b)) (0,1)

-- calculate number by definition
fib_at 0 = 0
fib_at 1 = 1
fib_at n = fib_at (n-1) + fib_at (n-2)
main = do
  {-- All following expressions abort with
      Stack space overflow: current size 8388608 bytes.
      Use `+RTS -Ksize -RTS' to increase it.
  --}
  print $ fibs !! (10^6)
  print . last $ take (10^6) fibs
  print $ fibs' !! (10^6)
  print $ fibs'' !! (10^6)
  -- the following expression does not finish after several minutes
  print $ fib_at (10^6)
The source is compiled with ghc -O2.
What am I doing wrong? I'd like to avoid recompiling with an increased stack size or other specific compiler options.
3 Answers
These links will give you a good introduction to your problem of too many thunks (space leaks).
If you know what to look out for (and have a decent model of lazy evaluation), then solving them is quite easy, for example:
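The answer's example code did not survive extraction. As a hedged sketch of the kind of stricter definition meant here (names and structure are my own, not the answer's), compute the nth Fibonacci number with an accumulator that is forced at every step:

```haskell
-- Strict accumulating recursion: each sum is forced with `seq` before
-- recursing, so no chain of delayed (+) thunks can build up.
fib :: Int -> Integer
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = let s = a + b in s `seq` go (k - 1) b s
```

With this, `fib (10^6)` evaluates in constant stack space; the result is still a huge Integer, so printing it takes a while, but there is no stack overflow.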
All of the definitions (except the useless fib_at) will delay all the + operations, which means that when you have selected the millionth element it is a thunk with a million delayed additions. You should try something stricter.
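To illustrate "something stricter" (an assumed variant, not code from the thread): keep the list shape of fibs, but force each element before building the next cons cell, so indexing a million elements deep never piles up additions:

```haskell
-- Like fibs, but the second state component is forced before the next
-- cons cell is produced, so traversal with (!!) stays stack-safe.
fibsStrict :: [Integer]
fibsStrict = go 0 1
  where
    go a b = a : (b `seq` go b (a + b))
```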
As others have pointed out, Haskell being lazy means you have to force evaluation of the thunks to avoid stack overflows.
It appears to me that this version of fibs' should work up to 10^6:
I recommend studying this wiki page on Folds and having a look at the seq function.
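The code block this answer refers to was not preserved. As a hypothetical reconstruction in the spirit of the suggestion, applying seq inside the unfoldr step of the question's fibs' (the name fibsSeq is mine):

```haskell
import Data.List (unfoldr)

-- Force each sum inside the unfoldr step, so every list element is
-- evaluated as it is produced and no chain of delayed additions forms.
fibsSeq :: [Integer]
fibsSeq = unfoldr (\(a, b) -> let s = a + b in s `seq` Just (a, (b, s))) (0, 1)
```

With this, `fibsSeq !! (10^6)` forces each intermediate sum one step at a time while the index operator walks the list, instead of returning a single thunk with a million pending additions.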