A question about Big O and Big Omega
I think this is probably a beginner question about big-O notation. Say, for example, I have an algorithm that breaks apart an entire list recursively(O(n)) and then puts it back together (O(n)). I assume that this means that the efficiency is O(n) + O(n). Does this simplify to 2O(n), O(2n), or O(n)? From what I know about this notation, it would be O(2n) and using the rules of asymptotic notation you can drop the 2, giving an efficiency of O(n).
If we were trying to find a lower bound, though, can this rule still apply? If Ω(n) + Ω(n) = Ω(2n), can you still drop the 2? I would think not, since you would actually be lowering the lower bound (since n < 2n).
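A minimal sketch of the scenario in the question (the function name and structure are made up for illustration, not the asker's actual code): one linear pass breaks the list into single-element pieces and a second linear pass rebuilds it. The step counter makes the n + n = 2n total explicit, which is what simplifies to O(n) once the constant is dropped.

```python
def two_pass(lst):
    """Break a list apart and rebuild it, counting basic steps."""
    steps = 0
    # Pass 1: break the list apart into single-element pieces (n steps).
    pieces = []
    for x in lst:
        pieces.append([x])
        steps += 1
    # Pass 2: put the pieces back together (another n steps).
    result = []
    for piece in pieces:
        result.extend(piece)
        steps += 1
    return result, steps

result, steps = two_pass(list(range(10)))
assert result == list(range(10))
assert steps == 2 * len(result)  # n + n = 2n basic steps, i.e. O(n)
```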
Comments (4)
All of the above, but the first two are overly complex. O(n) would be the canonical notation.
2*N is proportional to N, so if the maximum cost is no more than proportional to 2*N for a sufficiently large N ( O(2*N) ), the maximum cost is also no more than proportional to N for a sufficiently large N ( O(N) ).
Most definitely yes.
2*N is proportional to N, so if the minimum cost is no less than proportional to 2*N for a sufficiently large N ( Ω(2*N) ), the minimum cost is also no less than proportional to N for a sufficiently large N ( Ω(N) ).
For example, say you have an algorithm that takes 300 ms + 100*N ms to execute. The lower bound of its speed is Θ(N) and thus Ω(N).
What if the algorithm were to take twice as long to execute? Even if it took 600 ms + 200*N ms to execute, the lower bound of its speed is still Θ(N) and thus Ω(N).
The most important thing to realise to understand Θ(N) and the like is that these notations are used to study how well something scales. That one algorithm takes twice as long as another doesn't say anything about how well either scales, so it shouldn't be a surprise that they can have the same Ω() for the lower bound of their speed.
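The 300 ms + 100*N example above can be checked numerically. This is just a sketch; the particular constants (101, 201) and thresholds (N ≥ 600) are my choices, not part of the answer, but they show that both the original running time and the doubled one fit between linear bounds, so both are Θ(N):

```python
# Sanity check of the claim above: doubling the running time
# (300 + 100*N ms vs 600 + 200*N ms) does not change the Theta(N) bound.
# Each function lies between c1*N and c2*N once N is large enough.

def f(n):
    return 300 + 100 * n  # original algorithm's time in ms

def g(n):
    return 600 + 200 * n  # the twice-as-slow variant

# For f: 100*N <= f(N) <= 101*N holds for all N >= 300.
# For g: 200*N <= g(N) <= 201*N holds for all N >= 600.
for n in range(600, 10000):
    assert 100 * n <= f(n) <= 101 * n
    assert 200 * n <= g(n) <= 201 * n
```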
It would simplify -- usually to O(n), indicating that the amount of time taken to solve the problem is proportional to the problem size. In this case, it may be more appropriate to write 2O(n), though it means the same thing.
I believe according to the definition of Big-O:
If a function f(n) is <= cg(n) for some constant c and sufficiently large values of n, then it can be said that f(n) = O(g(n)). In your example, g(n) = n.
So, for example, if it is possible to pick some value for c which makes f(n) <= cn for sufficiently large n, then you can say that f(n) = O(n).
Big Omega is defined similarly.
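The definition quoted above can be exercised directly on the question's case. A minimal sketch, with f(n) = n + n standing in for the two linear passes and the constants c chosen by me for illustration:

```python
# Checking the Big-O definition for the question's case: f(n) = 2n
# (the cost of the two linear passes). With c = 2, f(n) <= c*n for all
# n >= 1, so f(n) = O(n). For Big Omega, c' = 1 gives f(n) >= c'*n,
# so f(n) = Omega(n) as well -- the factor of 2 drops in both bounds.

def f(n):
    return n + n  # one O(n) pass plus another O(n) pass

c_upper, c_lower, n0 = 2, 1, 1
assert all(f(n) <= c_upper * n for n in range(n0, 10000))
assert all(f(n) >= c_lower * n for n in range(n0, 10000))
```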
It's been a while but I think you're right in both cases.
UPDATE
Actually looks like Ω(n) + Ω(n) == Ω(n)