Clojure: avoiding stack overflow in the Sieve of Eratosthenes?
Here's my implementation of the Sieve of Eratosthenes in Clojure (based on the SICP lesson on streams):
(defn nats-from [n]
  (iterate inc n))

(defn divide? [p q]
  (zero? (rem q p)))

(defn sieve [stream]
  (lazy-seq (cons (first stream)
                  (sieve (remove #(divide? (first stream) %)
                                 (rest stream))))))
(def primes (sieve (nats-from 2)))
Now, everything is fine when I take the first 100 primes:
(take 100 primes)
But if I try to take the first 1000 primes, the program breaks with a stack overflow.
I'm wondering whether it's possible to somehow change the sieve function to be tail-recursive while still preserving the "streaminess" of the algorithm.
Any help?
1 Answer
Firstly, this is not the Sieve of Eratosthenes... see my comment for details.
Secondly, apologies for the close vote, as your question is not an actual duplicate of the one I pointed to... My bad.
Explanation of what is happening
The difference lies of course in the fact that you are trying to build an incremental sieve, where the range over which the remove call works is infinite, and thus it's impossible to just wrap a doall around it. The solution is to implement one of the "real" incremental SoEs from the paper I seem to link to pretty frequently these days -- Melissa E. O'Neill's The Genuine Sieve of Eratosthenes. A particularly beautiful Clojure sieve implementation of this sort has been written by Christophe Grand and is available here for the admiration of all who might be interested. Highly recommended reading.
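To give a flavour of what such an incremental, map-based sieve looks like, here is a minimal sketch in the spirit of O'Neill's approach (the name primes-incremental and the exact shape of the code are mine for illustration, not Christophe Grand's implementation): each known composite in the map points at a prime whose "next multiple" it is, so no filter ever has to work over an infinite range.

  ;; A minimal sketch of an incremental, map-based sieve in the spirit of
  ;; O'Neill's paper (not Christophe Grand's code; primes-incremental is a
  ;; made-up name for this example). The map associates each known composite
  ;; with a prime whose multiples produced it; hitting a composite slides
  ;; that prime's marker forward instead of filtering an infinite sequence.
  (defn primes-incremental []
    (letfn [(step [composites n]
              (lazy-seq
                (if-let [p (get composites n)]
                  ;; n is composite: move p's marker to its next multiple
                  ;; that is not already claimed by another prime.
                  (step (-> composites
                            (dissoc n)
                            (assoc (first (remove composites
                                                  (iterate #(+ % p) (+ n p))))
                                   p))
                        (inc n))
                  ;; n is prime: emit it and start marking its multiples at n*n.
                  (cons n (step (assoc composites (* n n) n) (inc n))))))]
      (step {} 2)))

  ;; (take 1000 (primes-incremental)) now runs without blowing the stack.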
As for the source of the issue, the questions I originally thought yours was a duplicate of contain explanations which should be useful to you: see here and here. Once again, sorry for the rash vote to close.
Why tail recursion won't help
Since the question specifically mentions making the sieving function tail-recursive as a possible solution, I thought I would address that here: functions which transform lazy sequences should not, in general, be tail recursive.
This is quite an important point to keep in mind and one which trips up many an inexperienced Clojure (or Haskell) programmer. The reason is that a tail-recursive function of necessity only returns its value once it is "ready" -- at the very end of the computation. (An iterative process can, at the end of any particular iteration, either return a value or continue on to the next iteration.) In contrast, a function which generates a lazy sequence should immediately return a lazy sequence object which encapsulates bits of code which can be asked to produce the head or tail of the sequence whenever that's desired.
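A tiny pair of hypothetical functions illustrates the contrast: the lazy version hands back an unrealized sequence right away, while the loop/recur version cannot return anything until it has consumed its whole input -- which is never, for an infinite stream.

  ;; Lazy version: returns an unrealized lazy seq immediately; the recursive
  ;; call sits inside the thunk, so nothing runs until elements are demanded.
  (defn evens [s]
    (lazy-seq
      (when-let [s (seq s)]
        (if (even? (first s))
          (cons (first s) (evens (rest s)))
          (evens (rest s))))))

  ;; Tail-recursive version: only returns once the input is exhausted,
  ;; so it can never produce a result from an infinite sequence.
  (defn evens-eager [s]
    (loop [s s, acc []]
      (if-let [s (seq s)]
        (recur (rest s) (if (even? (first s)) (conj acc (first s)) acc))
        acc)))

  ;; (take 5 (evens (iterate inc 0)))  ;=> (0 2 4 6 8)
  ;; (evens-eager (iterate inc 0))     ;   never terminates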
Thus the answer to the problem of stacking lazy transformations is not to make anything tail recursive, but to merge the transformations. In this particular case, the best performance can be obtained by using a custom scheme to fuse the filtering operations, based on priority queues or maps (see the aforementioned article for details).