Complexity of a recursive factorial program

Published 2024-08-23 00:04:36

What's the complexity of a recursive program to find factorial of a number n? My hunch is that it might be O(n).

Comments (4)

幸福丶如此 2024-08-30 00:04:36

If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware -- as x tends to infinity, the time needed for multiplication grows (e.g. if you use Karatsuba multiplication, it's O(x ** 1.585)).

You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with that one. Note that x, the "length" or "number of digits" (in whatever base; it doesn't matter for big-O) of N, grows as O(log N), of course.

If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.
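
To make the point concrete, here is a minimal Python sketch (my own illustration, relying only on Python's built-in arbitrary-precision integers): it times the product of two x-digit numbers for increasing x, showing that a single multiplication stops being constant-time once the operands are unbounded.

import time

for digits in (10_000, 100_000, 1_000_000):
    a = 10 ** digits - 1          # an x-digit operand (all nines)
    b = 10 ** digits - 3          # another x-digit operand
    start = time.perf_counter()
    product = a * b               # the single big-integer multiplication being timed
    print(digits, "digits:", round(time.perf_counter() - start, 4), "seconds")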

九八野马 2024-08-30 00:04:36

Assuming you're talking about the most naive factorial algorithm ever:

factorial (n):
  if (n = 0) then return 1
  otherwise return n * factorial(n-1)

Yes, the algorithm is linear, running in O(n) time. This is because the function calls itself once for each decrement of n, and n is decremented until it reaches 0, which means the function is called recursively n times. This assumes, of course, that the decrement and the multiplication are both constant-time operations.

Of course, if you implement factorial some other way (for example, using addition recursively instead of multiplication), you can end up with a much more time-complex algorithm. I wouldn't advise using such an algorithm, though.
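
For reference, a direct Python rendering of the pseudocode above (my own sketch, not part of the answer) looks like this; computing factorial(n) makes exactly n recursive calls, one multiplication per level:

def factorial(n):
    if n == 0:
        return 1                     # base case: 0! = 1
    return n * factorial(n - 1)      # one multiplication per recursion level

print(factorial(5))   # 120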

随梦而飞# 2024-08-30 00:04:36

When you express the complexity of an algorithm, it is always as a function of the input size. It is only valid to assume that multiplication is an O(1) operation if the numbers that you are multiplying are of fixed size. For example, if you wanted to determine the complexity of an algorithm that computes matrix products, you might assume that the individual components of the matrices were of fixed size. Then it would be valid to assume that multiplication of two individual matrix components was O(1), and you would compute the complexity according to the number of entries in each matrix.

However, when you want to figure out the complexity of an algorithm to compute N! you have to assume that N can be arbitrarily large, so it is not valid to assume that multiplication is an O(1) operation.

If you want to multiply an n-bit number with an m-bit number the naive algorithm (the kind you do by hand) takes time O(mn), but there are faster algorithms.
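
As an illustration of the hand method (my own sketch, not part of the answer), multiplying an m-digit number by an n-digit number stored as base-10 digit lists performs one digit-by-digit multiplication for every pair of digits, hence O(mn) work:

def schoolbook_multiply(a_digits, b_digits):
    # Digits are stored least significant first, e.g. 123 is [3, 2, 1].
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):
        carry = 0
        for j, b in enumerate(b_digits):              # inner loop: one step per digit pair
            total = result[i + j] + a * b + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b_digits)] += carry
    return result

print(schoolbook_multiply([3, 2, 1], [5, 4]))   # 123 * 45 = 5535 -> [5, 3, 5, 5, 0]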

If you want to analyze the complexity of the easy algorithm for computing N!

factorial(N)
  f = 1
  for i = 2 to N
    f = f * i
  return f

then at the k-th step in the for loop, you are multiplying (k-1)! by k. The number of bits used to represent (k-1)! is O(k log k) and the number of bits used to represent k is O(log k). So the time required to multiply (k-1)! and k is O(k (log k)^2) (assuming you use the naive multiplication algorithm). Then the total amount of time taken by the algorithm is the sum of the time taken at each step:

sum k = 1 to N [k (log k)^2] <= (log N)^2 * (sum k = 1 to N [k]) =
O(N^2 (log N)^2)
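
As a quick sanity check on the O(k log k) bit-count claim (my own sketch, using Python's math.factorial), the actual bit length of (k-1)! can be compared with the k*log2(k) estimate:

import math

for k in (10, 100, 1000):
    actual_bits = math.factorial(k - 1).bit_length()   # bits needed to represent (k-1)!
    estimate = k * math.log2(k)                        # the O(k log k) bound
    print(k, actual_bits, round(estimate))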

You could improve this performance by using a faster multiplication algorithm, such as Schönhage-Strassen, which takes O(n log n log log n) time to multiply two n-bit numbers.

The other way to improve performance is to use a better algorithm to compute N!. The fastest one that I know of first computes the prime factorization of N! and then multiplies all the prime factors.
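
A rough sketch of that idea (my own illustration, not the optimized algorithms used in practice): the exponent of each prime p in N! is given by Legendre's formula, the sum over i of floor(N / p^i), so N! can be assembled by multiplying prime powers.

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def factorial_by_factorization(n):
    result = 1
    for p in primes_up_to(n):
        exponent, power = 0, p
        while power <= n:              # Legendre's formula: exponent of p in n!
            exponent += n // power
            power *= p
        result *= p ** exponent
    return result

print(factorial_by_factorization(10))   # 3628800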

司马昭之心 2024-08-30 00:04:36

The time complexity of the recursive factorial can be derived directly from the code:

factorial (n) {    
    if (n = 0) 
        return 1
    else
        return n * factorial(n-1)
}

So, the recurrence for the running time is:

T(n) = T(n-1) + 3   (the 3 accounts for the three constant-time operations in each
                 recursive call: checking the value of n, the subtraction, and the
                 multiplication)

     = T(n-2) + 6  (Second recursive call)
     = T(n-3) + 9  (Third recursive call)
     .
     .
     .
     .
     = T(n-k) + 3k
     until k = n

     Then,

     = T(n-n) + 3n
     = T(0) + 3n
     = 1 + 3n

In Big-O notation,

T(n) is directly proportional to n.

Therefore,
the time complexity of the recursive factorial is O(n).
Each recursive call adds a stack frame that stays live until the base case returns, so the space complexity is also O(n).
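
The O(n) stack usage is easy to observe in practice (a sketch of mine, assuming CPython's default recursion limit of 1000): the recursive version fails for large n because every pending call keeps a stack frame alive, while an iterative version needs only O(1) extra space besides the result itself.

def factorial_recursive(n):
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_iterative(5000) > 0)    # fine: no deep call stack is needed
try:
    factorial_recursive(5000)           # n nested calls exceed the recursion limit
except RecursionError:
    print("each recursive call consumes a stack frame")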
