Big-O notation of "addition"
I'm learning a course about big O notation on Coursera. I watched a video about the big O of a Fibonacci algorithm (non-recursion method), which is like this:
Operation Runtime
create an array F[0..n] O(n)
F[0] <-- 0 O(1)
F[1] <-- 1 O(1)
for i from 2 to n: Loop O(n) times
F[i] <-- F[i-1] + F[i-2] O(n) => I don't understand this line, isn't it O(1)?
return F[n] O(1)
Total: O(n)+O(1)+O(1)+O(n)*O(n)+O(1) = O(n^2)
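For reference, the pseudocode above can be written as a straightforward Python sketch (the function name is mine; Python's built-in integers are arbitrary precision, which is exactly why the addition inside the loop is not constant time for large n):

```python
def fib(n):
    # create an array F[0..n] -- O(n)
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = 1                    # O(1)
    for i in range(2, n + 1):       # loop runs O(n) times
        # This addition operates on numbers with O(n) digits,
        # so each iteration costs O(n), not O(1).
        F[i] = F[i - 1] + F[i - 2]
    return F[n]                     # O(1)
```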
I understand every part except F[i] <-- F[i-1] + F[i-2]   O(n).
Why isn't it O(1), since it's just a simple addition? Is it the same as F[i] <-- 1 + 1?
The explanation they give me is: "But the addition is a bit worse. And normally additions are constant time. But these are large numbers. Remember, the nth Fibonacci number has about n over 5 digits to it, they're very big, and they often won't fit in the machine word."
"Now if you think about what happens if you add two very big numbers together, how long does that take? Well, you sort of add the tens digit and you carry, and you add the hundreds digit and you carry, and add the thousands digit, you carry and so on and so forth. And you sort of have to do work for each digit's place. And so the amount of work that you do should be proportional to the number of digits. And in this case, the number of digits is proportional to n, so this should take O(n) time to run that line of code."
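To see the "about n over 5 digits" claim concretely, here is a small check (my own sketch, not from the video): the digit count of F(n) grows linearly in n, at roughly n/5 digits, since log10(φ) ≈ 0.209.

```python
def fib(n):
    # Iterative Fibonacci with constant extra storage.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Digit counts grow linearly with n, so the addition
# F[i-1] + F[i-2] touches O(n) digits.
for n in (100, 200, 400):
    print(n, len(str(fib(n))))
```

F(100) already has 21 decimal digits and F(200) has 42, so doubling n doubles the work one addition has to do.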
I'm still a bit confused. Does it mean a large number affects time complexity too? For example, is a = n + 1 O(1), while a = n^50 + n^50 isn't O(1) anymore?
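As a quick sanity check of that example (my own illustration, not from the video or the answer below): n^50 has about 50 · log10(n) digits, so adding two such numbers costs work proportional to O(log n) digits rather than O(1).

```python
# The number of decimal digits of n**50 grows with n,
# so adding two such numbers is not constant time.
for n in (10, 1000, 100000):
    big = n ** 50
    print(n, len(str(big)))
```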
Video link for anyone who needs more information: https://www.coursera.org/learn/algorithmic-toolbox/lecture/zclml/using-big-o (4:56 to 6:26)
Big-O is just a notation for keeping track of orders of magnitude. But when we apply that in algorithms, we have to remember "orders of magnitude of WHAT"? In this case it is "time spent".
CPUs are set up to execute basic arithmetic on basic arithmetic types in constant time. For most purposes, we can assume we are dealing with those basic types.
However if n is a very large positive integer, we can't assume that. A very large integer will need O(log(n)) bits to represent. Which, whether we store it as bits, bytes, etc, will need an array of O(log(n)) things to store. (We would need fewer bytes than bits, but that is just a constant factor.) And when we do a calculation, we have to think about what we will actually do with that array.

Now suppose that we're trying to calculate n+m. We're going to need to generate a result of size O(log(n+m)), which must take at least that time to allocate. Luckily the grade school method of long addition, where you add digits and keep track of carrying, can be adapted for big integer libraries and takes O(log(n+m)) time.

So when you're looking at addition, the log of the size of the answer is what matters. Since log(50^n) = n * log(50), that means that operations with 50^n are at least O(n). (Getting 50^n might take longer...) And it means that calculating n+1 takes time O(log(n)).

Now in the case of the Fibonacci sequence, F(n) is roughly φ^n where φ = (1 + sqrt(5))/2, so log(F(n)) = O(n).
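The grade-school method described above can be sketched directly (my own illustration; digits are stored least-significant first, in base 10). The loop does one step per digit, so the cost is proportional to the number of digits, i.e. O(log(n+m)):

```python
def add_digits(a, b):
    """Add two numbers given as digit lists, least-significant digit first."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        # One constant-time step per digit position, carry included.
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 99 + 1 = 100: digits [9, 9] + [1] -> [0, 0, 1]
print(add_digits([9, 9], [1]))
```

Real big-integer libraries use a machine word per limb instead of a decimal digit per list entry, but that only changes the constant factor, not the O(number of digits) behavior.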