Fibonacci performance

Published on 2024-09-30 22:05:01

f[0] = 0;
f[1] = 1;
f[x_] := f[x-1] + f[x-2]

This function is running slow in Mathematica and I need to increase the speed. I have to use functional programming and recursion. I'm not sure why this is running so slow, and even the slightest idea how to improve this would be helpful.

Comments (3)

眼前雾蒙蒙 2024-10-07 22:05:01

A good way to write a faster recursive function is to have it memorize previous values. This does come at the cost of memory, of course, but it can help in cases like this. In order to calculate f[x], you calculate f[x-1] and f[x-2] - and then to calculate f[x-1], you calculate f[x-2] again; you end up recalculating a lot of values a lot of times. (Forgive my imprecision!)

To store things as you go, you can use this idiom:

f[x_] := ( f[x] = (* calculation of f[x] goes here *)  )

Edit: I don't have Mathematica on this machine, but I'm pretty sure this computes the correct values.

f[0] = 0;
f[1] = 1;
f[x_] := ( f[x] = f[x-1] + f[x-2] );

f[256]

Like I said in a comment below, if you have other definitions of f, you might want to wipe them out first with Clear[f].

Thanks to rcollyer: Be careful about $RecursionLimit! It defaults to 256. (Of course, this is with good reason. Really deep recursion is usually a bad idea.)
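Putting the pieces together, a minimal sketch of the memoized definition with the recursion limit raised locally via Block, so the global setting is left untouched (the value 1000 is an arbitrary choice, large enough for this argument):

```mathematica
ClearAll[f];
f[0] = 0;
f[1] = 1;
(* Each computed value is stored as a new down-value,
   so it is only ever calculated once *)
f[x_] := (f[x] = f[x - 1] + f[x - 2]);

(* f[500] recurses about 500 levels deep, past the default
   $RecursionLimit of 256; Block raises the limit for this
   evaluation only and restores it afterwards *)
Block[{$RecursionLimit = 1000}, f[500]]
```

ClearAll[f] at the top also guards against stale definitions of f from earlier experiments, as noted above.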

恋竹姑娘 2024-10-07 22:05:01

Jefromi is right. Look at Memoization on wikipedia. They use the example of factorial and how to speed it up with memoization.
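The factorial example from that Wikipedia article translates to the same Mathematica idiom; a sketch (fact is an illustrative name, not a built-in):

```mathematica
(* Factorial with the same memoization idiom: each fact[n]
   is stored as a down-value the first time it is computed *)
fact[0] = 1;
fact[n_] := (fact[n] = n*fact[n - 1]);

fact[10]  (* 3628800 *)
```

A second call to fact[10], or to any smaller argument, is then a plain lookup rather than a recursive computation.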

要走就滚别墨迹 2024-10-07 22:05:01

Memoization is a good way to write a faster recursive function. However, in this case there is a recursive alternative that runs tremendously faster than the original function, without requiring memoization.

The key observation is to see that the original definition performs a lot of redundant calculations. Consider what happens if we calculate fib[4]:

fib[4] = fib[3] + fib[2]
  fib[3] = fib[2] + fib[1]
    fib[2] = fib[1] + fib[0]
      fib[1] = 1
      fib[0] = 0
    ∴ fib[2] = 1 + 0 = 1
    fib[1] = 1
  ∴ fib[3] = 1 + 1 = 2
  fib[2] = fib[1] + fib[0]
    fib[1] = 1
    fib[0] = 0
  ∴ fib[2] = 1 + 0 = 1
∴ fib[4] = 2 + 1 = 3

In this process, fib[2] and fib[0] were computed twice each and fib[1] was computed thrice. For larger computations, the waste grows dramatically -- exponentially in fact.

If one were to calculate the same Fibonacci number by hand, one might proceed something like this:

0: given   0
1: given   1
2: 0 + 1 = 1
3: 1 + 1 = 2
4: 1 + 2 = 3

There are no redundant calculations. At any given point, one only needs to consider the previous two results. This latter approach can be expressed recursively thus:

fib2[0] = 0;
fib2[n_] :=
  Module[{f},
    f[n, p1_, _] := p1;
    f[x_, p1_, p2_] := f[x + 1, p1 + p2, p1];
    f[1, 1, 0]
  ]

Block[{$IterationLimit = Infinity}, fib2[100000]]

No doubt, this form is not as easy to read as the original. On the other hand, the original function took 35 seconds to compute fib[35] on my machine whereas the revised function's runtime for same was reported as zero. Furthermore, the revised function computes fib2[100000] in 0.281 seconds, without requiring any of the extra storage of memoization. fib[100000] is quite out of reach of the original function and the memoized version crashed my Mathematica 7.01 kernel -- too many memoized rules perhaps?

Note that Mathematica, by default, will iterate a function no more than 4096 times. To raise that limit, you must assign a higher value to $IterationLimit as illustrated in the example above.

Of course, in Mathematica there are plenty of non-recursive ways to calculate Fibonacci numbers, up to and including the built-in Fibonacci function. But that is not the point of this exercise.
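For comparison, the built-in function handles large arguments directly, with no recursion or limit settings to worry about:

```mathematica
(* The built-in Fibonacci function, for reference *)
Fibonacci[10]   (* 55 *)
Fibonacci[100]  (* 354224848179261915075 *)
```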

Tail Call Optimization?

It is always desirable to express recursive functions using tail calls. This permits the recursion to be executed by simple iteration, without the overhead of retaining intermediate results on the stack. fib2 is tail recursive. Some languages, like Scheme, mandate tail call optimization. Other languages, like Java, could support it but don't (or won't, as in the case of Python).

In the case of Mathematica, it is not clear to what extent tail call optimization is performed. For further discussion of this point, see another SO question.
