What is the difference between Θ(n) and O(n)?

Published 2024-07-11 16:12:07

Sometimes I see Θ(n) with the strange Θ symbol with something in the middle of it, and sometimes just O(n). Is it just laziness of typing because nobody knows how to type this symbol, or does it mean something different?

Comments (9)

╄→承喏 2024-07-18 16:12:07

Short explanation:

If an algorithm is of Θ(g(n)), it means that the running time of the algorithm as n (input size) gets larger is proportional to g(n).

If an algorithm is of O(g(n)), it means that the running time of the algorithm as n gets larger is at most proportional to g(n).

Normally, even when people talk about O(g(n)) they actually mean Θ(g(n)) but technically, there is a difference.


More technically:

O(n) represents upper bound. Θ(n) means tight bound.
Ω(n) represents lower bound.

f(x) = Θ(g(x)) iff f(x) =
O(g(x)) and f(x) = Ω(g(x))

Basically when we say an algorithm is of O(n), it's also O(n²), O(n¹⁰⁰⁰⁰⁰⁰), O(2ⁿ), ... but a Θ(n) algorithm is not Θ(n²).

In fact, since f(n) = Θ(g(n)) means that for sufficiently large values of n, f(n) can be bounded within c₁·g(n) and c₂·g(n) for some constants c₁ and c₂, the growth rate of f is asymptotically equal to that of g: g can be both a lower bound and an upper bound of f. This directly implies that f can be a lower bound and an upper bound of g as well. Consequently,

f(x) = Θ(g(x)) iff g(x) = Θ(f(x))

Similarly, to show f(n) = Θ(g(n)), it's enough to show that g is an upper bound of f (i.e. f(n) = O(g(n))) and that f is a lower bound of g (i.e. f(n) = Ω(g(n)), which is exactly the same thing as g(n) = O(f(n))). Concisely,

f(x) = Θ(g(x)) iff f(x) = O(g(x)) and g(x) = O(f(x))
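
This mutual big-O characterization can be sanity-checked numerically. Below is a minimal Python sketch; the pair f(n) = 3n + 5, g(n) = n and the witness constants are illustrative assumptions, not part of the answer, and checking a finite range of n is only a heuristic, not a proof:

```python
# Numerically witness f(n) = Theta(g(n)) via f = O(g) and g = O(f),
# for the illustrative pair f(n) = 3n + 5 and g(n) = n.

def f(n):
    return 3 * n + 5

def g(n):
    return n

def is_big_o(f, g, c, n0, n_max=10_000):
    # Check f(n) <= c * g(n) on the finite sample n0 .. n_max - 1.
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

# f(n) = O(g(n)): 3n + 5 <= 4n once n >= 5 (witness c = 4, n0 = 5)
assert is_big_o(f, g, c=4, n0=5)
# g(n) = O(f(n)): n <= 1 * (3n + 5) for all n >= 1 (witness c = 1)
assert is_big_o(g, f, c=1, n0=1)
# Both directions hold, witnessing f(n) = Theta(n).
```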


There are also little-oh and little-omega (ω) notations representing loose upper and loose lower bounds of a function.

To summarize:

f(x) = O(g(x)) (big-oh) means that
the growth rate of f(x) is
asymptotically less than or equal
to the growth rate of g(x).

f(x) = Ω(g(x)) (big-omega) means
that the growth rate of f(x) is
asymptotically greater than or
equal to
the growth rate of g(x).

f(x) = o(g(x)) (little-oh) means that
the growth rate of f(x) is
asymptotically less than the
growth rate of g(x).

f(x) = ω(g(x)) (little-omega) means
that the growth rate of f(x) is
asymptotically greater than the
growth rate of g(x).

f(x) = Θ(g(x)) (theta) means that
the growth rate of f(x) is
asymptotically equal to the
growth rate of g(x).

For a more detailed discussion, you can read the definition on Wikipedia or consult a classic textbook like Introduction to Algorithms by Cormen et al.

偏爱自由 2024-07-18 16:12:07

There's a simple way (a trick, I guess) to remember which notation means what.

All of the Big-O notations can be considered to have a bar.

When looking at a Ω, the bar is at the bottom, so it is an (asymptotic) lower bound.

When looking at a Θ, the bar is obviously in the middle. So it is an (asymptotic) tight bound.

When handwriting O, you usually finish at the top, and draw a squiggle. Therefore O(n) is the upper bound of the function. To be fair, this one doesn't work with most fonts, but it is the original justification of the names.

一曲琵琶半遮面シ 2024-07-18 16:12:07

one is Big "O"

one is Big Theta

http://en.wikipedia.org/wiki/Big_O_notation

Big O means your algorithm will execute in no more steps than the given expression (n^2).

Big Omega means your algorithm will execute in no fewer steps than the given expression (n^2).

When both conditions are true for the same expression, you can use big theta notation.

妖妓 2024-07-18 16:12:07

Rather than provide a theoretical definition, which is beautifully summarized here already, I'll give a simple example:

Assume the run time of f(i) is O(1). Below is a code fragment whose asymptotic runtime is Θ(n). It always calls the function f(...) n times. Both the lower and the upper bound are n.

for(int i=0; i<n; i++){
    f(i);
}

The second code fragment below has the asymptotic runtime of O(n). It calls the function f(...) at most n times. The upper bound is n, but the lower bound could be Ω(1) or Ω(log(n)), depending on what happens inside f2(i).

for(int i=0; i<n; i++){
    if( f2(i) ) break;
    f(i);
}
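
A quick way to feel the difference is to count the calls to f(...). The sketch below is a hypothetical Python translation of the loop above; `run_loop` and the lambda predicates are illustrative names, not from the answer:

```python
# Count how many times f(i) would run in the early-exit loop,
# for different hypothetical predicates f2.

def run_loop(n, f2):
    calls = 0
    for i in range(n):
        if f2(i):
            break
        calls += 1  # stands in for the O(1) call f(i)
    return calls

n = 1000
# f2 never fires: exactly n calls -- the Theta(n) case.
assert run_loop(n, lambda i: False) == n
# f2 fires immediately: constant work, so only the O(n) upper bound is tight.
assert run_loop(n, lambda i: True) == 0
```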

筑梦 2024-07-18 16:12:07

Theta is a shorthand way of referring to a special situation where
the big O and Omega are the same.

Thus, if one claims that Theta is expression q, then they are also necessarily claiming that Big O is expression q and Omega is expression q.


Rough analogy:

If:
Theta claims, "That animal has 5 legs."
then it follows that:
Big O is true ("That animal has less than or equal to 5 legs.")
and
Omega is true("That animal has more than or equal to 5 legs.")

It's only a rough analogy because the expressions aren't necessarily specific numbers, but instead functions of varying orders of magnitude such as log(n), n, n^2, (etc.).

白况 2024-07-18 16:12:07

A chart could make the previous answers easier to understand:

[Chart: Θ-Notation - same order (upper and lower bound) | O-Notation - upper bound only]

In English,

On the left, note that there is an upper bound and a lower bound that are both of the same order of magnitude (i.e. g(n) ). Ignore the constants, and if the upper bound and lower bound have the same order of magnitude, one can validly say f(n) = Θ(g(n)) or f(n) is in big theta of g(n).

Starting with the right, which is the simpler example: it says that the upper bound g(n) is simply the order of magnitude, ignoring the constant c (just as all big O notation does).

吾家有女初长成 2024-07-18 16:12:07

f(n) belongs to O(n) if there exists a positive k such that f(n) ≤ k·n

f(n) belongs to Θ(n) if there exist positive k1, k2 such that k1·n ≤ f(n) ≤ k2·n

Wikipedia article on Big O Notation

长伴 2024-07-18 16:12:07

Using limits

Let's consider f(n) > 0 and g(n) > 0 for all n. This is a safe assumption, because even the fastest real algorithm performs at least one operation and terminates after it starts. It also simplifies the calculus, because we can use the value f(n) instead of the absolute value |f(n)|.

  1. f(n) = O(g(n))

    General:

              f(n)     
    0 ≤ lim ──────── < ∞
        n➜∞   g(n)
    

    For g(n) = n:

              f(n)     
    0 ≤ lim ──────── < ∞
        n➜∞    n
    

    Examples:

        Expression               Value of the limit
    ------------------------------------------------
    n        = O(n)                      1
    1/2*n    = O(n)                     1/2
    2*n      = O(n)                      2
    n+log(n) = O(n)                      1
    n        = O(n*log(n))               0
    n        = O(n²)                     0
    n        = O(nⁿ)                     0
    

    Counterexamples:

        Expression                Value of the limit
    -------------------------------------------------
    n        ≠ O(log(n))                 ∞
    1/2*n    ≠ O(sqrt(n))                ∞
    2*n      ≠ O(1)                      ∞
    n+log(n) ≠ O(log(n))                 ∞
    
  2. f(n) = Θ(g(n))

    General:

              f(n)     
    0 < lim ──────── < ∞
        n➜∞   g(n)
    

    For g(n) = n:

              f(n)     
    0 < lim ──────── < ∞
        n➜∞    n
    

    Examples:

        Expression               Value of the limit
    ------------------------------------------------
    n        = Θ(n)                      1
    1/2*n    = Θ(n)                     1/2
    2*n      = Θ(n)                      2
    n+log(n) = Θ(n)                      1
    

    Counterexamples:

        Expression                Value of the limit
    -------------------------------------------------
    n        ≠ Θ(log(n))                 ∞
    1/2*n    ≠ Θ(sqrt(n))                ∞
    2*n      ≠ Θ(1)                      ∞
    n+log(n) ≠ Θ(log(n))                 ∞
    n        ≠ Θ(n*log(n))               0
    n        ≠ Θ(n²)                     0
    n        ≠ Θ(nⁿ)                     0
    
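The limit tests above can be approximated numerically by evaluating the ratio f(n)/g(n) at one large n. This is a heuristic Python sketch, not a real limit computation; the sample point n = 10⁶ is an arbitrary choice:

```python
import math

def ratio(f, g, n=10**6):
    # Approximate lim f(n)/g(n) by a single large-n sample (heuristic only).
    return f(n) / g(n)

# n + log(n) vs n: ratio near 1 (finite, nonzero) -> Theta(n)
assert abs(ratio(lambda n: n + math.log(n), lambda n: n) - 1) < 1e-3

# n vs n^2: ratio near 0 -> O(n^2) but not Theta(n^2)
assert ratio(lambda n: n, lambda n: n * n) < 1e-3

# n vs log(n): ratio is huge -> n is not O(log n)
assert ratio(lambda n: n, lambda n: math.log(n)) > 1e3
```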
咆哮 2024-07-18 16:12:07

Conclusion: in practice we treat big O, big Θ and big Ω as the same thing.

Why? I will explain the reason below.

First, I will clarify one wrong statement: some people think that we only care about the worst-case time complexity, so we always use big O instead of big Θ. That claim is nonsense. Upper and lower bounds are used to describe one function, not to describe the time complexity. The worst-case time function has its own upper and lower bounds; the best-case time function has its upper and lower bounds too.

In order to explain the relation between big O and big Θ clearly, I will explain the relation between big O and little o first. From the definitions, we can easily see that little o is a subset of big O. For example:

T(n) = n^2 + n. We can say T(n) = O(n^2), T(n) = O(n^3), T(n) = O(n^4). But for little o, T(n) = o(n^2) does not meet the definition of little o; only T(n) = o(n^3) and T(n) = o(n^4) are correct. So what is the leftover T(n) = O(n^2)? It's big Θ!

Generally, we just say the big O is O(n^2); we would hardly say T(n) = O(n^3) or T(n) = O(n^4). Why? Because we subconsciously regard big O as big Θ.

Similarly, we also subconsciously regard big Ω as big Θ.

In short, big O, big Θ and big Ω are not the same thing by definition, but they are the same thing in everyday speech.
