Complexity of a recursive factorial program

What's the complexity of a recursive program to find the factorial of a number n? My hunch is that it might be O(n).
4 Answers
If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware -- as x tends to infinity, the time needed for multiplication grows (e.g. if you use Karatsuba multiplication, it's O(x ** 1.585)). You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with that one.
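Concretely, the operands in the factorial computation are far from fixed-size: by Stirling's approximation, the number of bits in N! grows like Θ(N log N). A quick check with Python's arbitrary-precision integers:

```python
from math import factorial

# The bit length of N! grows roughly like N * log2(N) (Stirling),
# so the multiplications involve ever-larger operands.
for n in (10, 100, 1000):
    print(n, factorial(n).bit_length())
```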
x, the "length" or "number of digits" (in whatever base, doesn't matter for big-O anyway) of N, grows with O(log N), of course.

If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.
Assuming you're talking about the most naive factorial algorithm ever:
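The code sample this answer refers to didn't survive the page conversion; the naive algorithm it describes presumably looks something like this Python sketch:

```python
def factorial(n):
    # Base case: stop once the value reaches 0 (0! = 1).
    if n == 0:
        return 1
    # Recursive case: decrement n and multiply, once per call.
    return n * factorial(n - 1)
```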
Yes, the algorithm is linear, running in O(n) time. This is the case because it executes once every time it decrements the value n, and it decrements the value n until it reaches 0, meaning the function is called recursively n times. This is assuming, of course, that both decrementation and multiplication are constant-time operations.

Of course, if you implement factorial some other way (for example, using addition recursively instead of multiplication), you can end up with a much more time-complex algorithm. I wouldn't advise using such an algorithm, though.
When you express the complexity of an algorithm, it is always as a function of the input size. It is only valid to assume that multiplication is an O(1) operation if the numbers that you are multiplying are of fixed size. For example, if you wanted to determine the complexity of an algorithm that computes matrix products, you might assume that the individual components of the matrices were of fixed size. Then it would be valid to assume that multiplication of two individual matrix components was O(1), and you would compute the complexity according to the number of entries in each matrix.

However, when you want to figure out the complexity of an algorithm to compute N!, you have to assume that N can be arbitrarily large, so it is not valid to assume that multiplication is an O(1) operation.

If you want to multiply an n-bit number by an m-bit number, the naive algorithm (the kind you do by hand) takes time O(mn), but there are faster algorithms.

If you want to analyze the complexity of the easy algorithm for computing N!, then at the k-th step in the for loop you are multiplying (k-1)! by k. The number of bits used to represent (k-1)! is O(k log k) and the number of bits used to represent k is O(log k). So the time required to multiply (k-1)! and k is O(k (log k)^2) (assuming you use the naive multiplication algorithm). Then the total amount of time taken by the algorithm is the sum of the time taken at each step:

    sum k = 1 to N [k (log k)^2] <= (log N)^2 * (sum k = 1 to N [k]) = O(N^2 (log N)^2)
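The "easy algorithm" being analyzed here is presumably the plain product loop; a minimal Python sketch:

```python
def factorial_loop(n):
    result = 1
    # At step k this multiplies (k-1)! by k: the left operand has
    # O(k log k) bits, the right operand O(log k) bits.
    for k in range(1, n + 1):
        result *= k
    return result
```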
You could improve this performance by using a faster multiplication algorithm, like Schönhage-Strassen, which takes time O(n*log(n)*log(log(n))) for two n-bit numbers.

The other way to improve performance is to use a better algorithm to compute N!. The fastest one that I know of first computes the prime factorization of N! and then multiplies all the prime factors.
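The answer doesn't spell out how that factorization is obtained; one standard way (an illustrative sketch, not necessarily the exact algorithm the author had in mind) is to compute each prime's exponent in N! directly via Legendre's formula, so N! itself never has to be factored:

```python
def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def factorial_by_factorization(n):
    # Legendre's formula: the exponent of prime p in n! is
    # sum over i >= 1 of floor(n / p^i).
    result = 1
    for p in primes_up_to(n):
        exponent, power = 0, p
        while power <= n:
            exponent += n // power
            power *= p
        result *= p ** exponent
    return result
```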
The time complexity of recursive factorial would be:

    T(n) = T(n-1) + c,    T(0) = c

So, the time for the chain of recursive calls expands to:

    T(n) = c + c + ... + c  (n + 1 times) = c(n + 1)

To represent this in Big-Oh notation: T(n) is directly proportional to n. Therefore, the time complexity of recursive factorial is O(n).

Since each recursive call occupies a stack frame until the base case returns, the recursion goes n levels deep, so the space complexity is O(n).
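That O(n) stack usage is easy to observe in CPython, where deep recursion hits the interpreter's recursion limit (a sketch assuming the default limit of roughly 1000):

```python
import sys

def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(10))  # 3628800

# Each of the n calls holds a live stack frame, so a large n
# overflows the default limit (see sys.getrecursionlimit()).
try:
    factorial(10_000)
except RecursionError:
    print("RecursionError: O(n) stack frames")
```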