What is the difference between Θ(n) and O(n)?
Sometimes I see Θ(n) with the strange Θ symbol with something in the middle of it, and sometimes just O(n). Is it just laziness of typing because nobody knows how to type this symbol, or does it mean something different?
9 Answers
Short explanation:
Normally, even when people talk about O(g(n)) they actually mean Θ(g(n)) but technically, there is a difference.
More technically:
O(n) represents upper bound. Θ(n) means tight bound.
Ω(n) represents lower bound.
Basically when we say an algorithm is of O(n), it's also O(n^2), O(n^1000000), O(2^n), ... but a Θ(n) algorithm is not Θ(n^2).
In fact, since f(n) = Θ(g(n)) means that for sufficiently large values of n, f(n) can be bound within c1*g(n) and c2*g(n) for some values of c1 and c2, i.e. the growth rate of f is asymptotically equal to g: g can be both a lower bound and an upper bound of f. This directly implies f can be a lower bound and an upper bound of g as well. Consequently, f(n) = Θ(g(n)) implies g(n) = Θ(f(n)).
Similarly, to show f(n) = Θ(g(n)), it's enough to show g is an upper bound of f (i.e. f(n) = O(g(n))) and f is a lower bound of g (i.e. f(n) = Ω(g(n)), which is the exact same thing as g(n) = O(f(n))). Concisely, f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
There are also little-oh (o) and little-omega (ω) notations representing loose upper and loose lower bounds of a function.

To summarize:

f(n) = O(g(n)) means g is an asymptotic upper bound of f.
f(n) = Ω(g(n)) means g is an asymptotic lower bound of f.
f(n) = Θ(g(n)) means g is an asymptotic tight bound of f.
For a more detailed discussion, you can read the definition on Wikipedia or consult a classic textbook like Introduction to Algorithms by Cormen et al.
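The sandwich definition above is easy to check numerically. Below is a small sketch (not a proof; the function f(n) = 5n + 3 and the constants c1 = 5, c2 = 6, n0 = 3 are illustrative choices) showing that f is trapped between c1·g(n) and c2·g(n) for g(n) = n, consistent with f(n) = Θ(n), while no such sandwich works against g(n) = n^2:

```python
def f(n):
    # Example function: f(n) = 5n + 3, which should be Θ(n)
    return 5 * n + 3

def is_sandwiched(f, g, c1, c2, n0, n_max=10_000):
    """Check c1*g(n) <= f(n) <= c2*g(n) for every n in [n0, n_max]."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max + 1))

# Θ(n): 5n <= 5n + 3 always, and 5n + 3 <= 6n once n >= 3.
print(is_sandwiched(f, lambda n: n, c1=5, c2=6, n0=3))          # True

# Not Θ(n^2): even a tiny c1 eventually makes c1*n^2 exceed 5n + 3.
print(is_sandwiched(f, lambda n: n * n, c1=0.001, c2=6, n0=3))  # False
```

Passing a finite range obviously cannot prove an asymptotic statement, but it makes the role of the two constants and the threshold n0 concrete.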
There's a simple way (a trick, I guess) to remember which notation means what.
All of the Big-O notations can be considered to have a bar.
When looking at an Ω, the bar is at the bottom, so it is an (asymptotic) lower bound.
When looking at a Θ, the bar is obviously in the middle. So it is an (asymptotic) tight bound.
When handwriting O, you usually finish at the top, and draw a squiggle. Therefore O(n) is the upper bound of the function. To be fair, this one doesn't work with most fonts, but it is the original justification of the names.
one is Big "O"
one is Big Theta
http://en.wikipedia.org/wiki/Big_O_notation
Big O means your algorithm will execute in no more steps than the given expression (n^2).
Big Omega means your algorithm will execute in no fewer steps than the given expression (n^2).
When both conditions are true for the same expression, you can use the big Theta notation.
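The step-counting view above can be made concrete. This sketch (the double loop is an illustrative example, not tied to any particular algorithm) counts the steps of a triangular double loop: the count n(n-1)/2 is at most n^2 and at least n(n-1)/2, so both conditions hold for n^2 and the loop is Θ(n^2):

```python
def count_steps(n):
    # Count one "step" per inner-loop iteration of a triangular double loop,
    # the shape used by e.g. naive pairwise comparisons.
    steps = 0
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1
    return steps

n = 100
s = count_steps(n)                          # n*(n-1)/2 = 4950 steps
print(s, s <= n * n, s >= n * (n - 1) // 2) # 4950 True True
```

Since the exact count n(n-1)/2 is squeezed between constant multiples of n^2 from both sides, big O and big Omega agree and big Theta applies.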
Rather than provide a theoretical definition, which is beautifully summarized here already, I'll give a simple example:

Assume the run time of f(i) is O(1). Below is a code fragment whose asymptotic runtime is Θ(n). It always calls the function f(...) n times. Both the lower and the upper bound is n.

The second code fragment below has the asymptotic runtime of O(n). It calls the function f(...) at most n times. The upper bound is n, but the lower bound could be Ω(1) or Ω(log(n)), depending on what happens inside f2(i).
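The two fragments the answer describes did not survive in this copy of the page; here is a minimal sketch of what they might look like (f, f2, and their bodies are illustrative assumptions, not the answer's actual code), with a call counter so the bounds are easy to observe:

```python
calls = {"f": 0}

def f(i):
    calls["f"] += 1   # count calls so the bounds are observable
    return i * i      # some O(1) work

def f2(i):
    return i == 3     # some O(1) predicate; True means "stop early"

def theta_n(n):
    # Θ(n): f(...) is called exactly n times, so n is both the lower
    # and the upper bound on the number of calls.
    for i in range(n):
        f(i)

def big_o_n(n):
    # O(n): f(...) is called at most n times; the break may stop the
    # loop much earlier, so the lower bound depends on f2.
    for i in range(n):
        if f2(i):
            break
        f(i)

theta_n(10)
print(calls["f"])   # 10: exactly n calls

calls["f"] = 0
big_o_n(10)
print(calls["f"])   # 3: stopped early once f2(3) returned True
```

With this particular f2, the second fragment does constant work regardless of n, which is exactly why only O(n), not Θ(n), can be claimed for it.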
Theta is a shorthand way of referring to a special situation where the big O and Omega are the same.

Thus, if one claims "The Theta is expression q", then they are also necessarily claiming that "Big O is expression q" and "Omega is expression q".

Rough analogy:
If:
Theta claims, "That animal has 5 legs."
then it follows that:
Big O is true ("That animal has less than or equal to 5 legs.")
and
Omega is true ("That animal has more than or equal to 5 legs.")
It's only a rough analogy because the expressions aren't necessarily specific numbers, but instead functions of varying orders of magnitude such as log(n), n, n^2, (etc.).
A chart could make the previous answers easier to understand:
[Chart: Θ-Notation - same order | O-Notation - upper bound]
In English,
On the left, note that there is an upper bound and a lower bound that are both of the same order of magnitude (i.e. g(n) ). Ignore the constants, and if the upper bound and lower bound have the same order of magnitude, one can validly say f(n) = Θ(g(n)) or f(n) is in big theta of g(n).
Starting with the right, the simpler example, it is saying the upper bound g(n) is simply the order of magnitude and ignores the constant c (just as all big O notation does).
f(n) belongs to O(n) if there exists a positive k such that f(n) <= k*n for all sufficiently large n.

f(n) belongs to Θ(n) if there exist positive k1, k2 such that k1*n <= f(n) <= k2*n for all sufficiently large n.

Wikipedia article on Big O Notation
Using limits

Let's consider f(n) > 0 and g(n) > 0 for all n. It's ok to consider this, because the fastest real algorithm has at least one operation and completes its execution after it starts. This will simplify the calculus, because we can use the value (f(n)) instead of the absolute value (|f(n)|).

f(n) = O(g(n))

General:

             f(n)
    0 ≤ lim ──────── < ∞
        n➜∞  g(n)

For g(n) = n:

             f(n)
    0 ≤ lim ──────── < ∞
        n➜∞   n

Examples: f(n) = 3n + 8 (the limit is 3), f(n) = log(n) (the limit is 0).

Counterexamples: f(n) = n^2 (the limit is ∞).

f(n) = Θ(g(n))

General:

             f(n)
    0 < lim ──────── < ∞
        n➜∞  g(n)

For g(n) = n:

             f(n)
    0 < lim ──────── < ∞
        n➜∞   n

Examples: f(n) = 3n + 8 (the limit is 3).

Counterexamples: f(n) = log(n) (the limit is 0), f(n) = n^2 (the limit is ∞).
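The limit tests above can be approximated numerically by sampling one very large n (a sanity check only, not a proof; the sample size 10^9 is an arbitrary choice):

```python
import math

def ratio(f, g, n=10**9):
    # Approximate lim f(n)/g(n) as n -> infinity by evaluating at one large n.
    return f(n) / g(n)

# f(n) = 3n + 8, g(n) = n: the ratio tends to 3, so 0 < lim < ∞ and f = Θ(n).
print(round(ratio(lambda n: 3 * n + 8, lambda n: n), 6))   # 3.0

# f(n) = log(n), g(n) = n: the ratio tends to 0, so f = O(n) but not Θ(n).
print(ratio(lambda n: math.log(n), lambda n: n) < 1e-6)    # True
```

A nonzero finite ratio signals Θ, a ratio collapsing to 0 signals O without Ω, and a ratio blowing up signals that g is not even an upper bound.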
For T(n) = n^2 + n, we can say T(n) = O(n^2), T(n) = O(n^3), T(n) = O(n^4). But for little-o, T(n) = o(n^2) does not meet the definition of little-o, so only T(n) = o(n^3) and T(n) = o(n^4) are correct. And what about the one bound that is not loose, T(n) = O(n^2)? That is exactly big Θ: T(n) = Θ(n^2).
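These little-o claims can be checked with the ratio test from the limits answer, sampling one large n (a numeric spot check, not a proof): T(n) = o(g(n)) requires T(n)/g(n) to tend to 0.

```python
def T(n):
    # The function from the answer: T(n) = n^2 + n
    return n * n + n

n = 10**8
print(T(n) / n**2)          # 1 + 1/n, tends to 1, not 0: T is NOT o(n^2)
print(T(n) / n**3 < 1e-6)   # True: the ratio tends to 0, so T = o(n^3)
```

The ratio against n^2 settling at a nonzero constant is precisely what makes n^2 a tight bound: not loose enough for little-o, but valid for big O and big Ω simultaneously, hence Θ(n^2).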