Big-oh vs big-theta


Possible Duplicate:
What is the difference between Θ(n) and O(n)?

It seems to me like when people talk about algorithm complexity informally, they talk about big-oh. But in formal situations, I often see big-theta with the occasional big-oh thrown in.
I know mathematically what the difference is between the two, but in English, in what situation would using big-oh when you mean big-theta be incorrect, or vice versa (an example algorithm would be appreciated)?

Bonus: why do people seemingly always use big-oh when talking informally?

Comments (8)

白云不回头 2024-09-16 02:47:32

Big-O is an upper bound.

Big-Theta is a tight bound, i.e. upper and lower bound.

When people only worry about what's the worst that can happen, big-O is sufficient; i.e. it says that "it can't get much worse than this". The tighter the bound the better, of course, but a tight bound isn't always easy to compute.
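
For reference, the standard definitions behind those two statements (a textbook formulation added here, not part of the original answer):

    % Big-O: an upper bound up to a constant factor, for large enough n
    f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\, g(n) \ \text{for all } n \ge n_0

    % Big-Theta: sandwiched between two constant multiples of g, for large enough n
    f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 : c_1\, g(n) \le f(n) \le c_2\, g(n) \ \text{for all } n \ge n_0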

See also

Related questions


The following quote from Wikipedia also sheds some light:

Informally, especially in computer science, the Big O notation often is
permitted to be somewhat abused to describe an asymptotic tight bound
where using Big Theta notation might be more factually appropriate in a
given context.

For example, when considering a function T(n) = 73n^3 + 22n^2 + 58, all of the following are generally acceptable, but tightness of bound (i.e., bullets 2 and 3 below) is usually strongly preferred over laxness of bound (i.e., bullet 1 below).

  1. T(n) = O(n^100), which is identical to T(n) ∈ O(n^100)
  2. T(n) = O(n^3), which is identical to T(n) ∈ O(n^3)
  3. T(n) = Θ(n^3), which is identical to T(n) ∈ Θ(n^3)

The equivalent English statements are respectively:

  1. T(n) grows asymptotically no faster than n^100
  2. T(n) grows asymptotically no faster than n^3
  3. T(n) grows asymptotically as fast as n^3.

So while all three statements are true, progressively more information is contained in
each. In some fields, however, the Big O notation (bullet number 2 in the lists above)
would be used more commonly than the Big Theta notation (bullet number 3 in the
lists above) because functions that grow more slowly are more desirable.
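
As a quick sanity check of bullet 3, here is a worked bound (added for illustration; the particular constants are just one valid choice):

    % Upper bound: for n >= 1, each lower-order term is at most its n^3 counterpart
    T(n) = 73n^3 + 22n^2 + 58 \le 73n^3 + 22n^3 + 58n^3 = 153\,n^3
    % Lower bound: drop the non-negative lower-order terms
    T(n) \ge 73\,n^3
    % Hence T(n) = \Theta(n^3), with c_1 = 73, c_2 = 153, n_0 = 1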

萌化 2024-09-16 02:47:32

I'm a mathematician and I have seen and needed big-O O(n), big-Theta Θ(n), and big-Omega Ω(n) notation time and again, and not just for complexity of algorithms. As people said, big-Theta is a two-sided bound. Strictly speaking, you should use it when you want to explain that that is how well an algorithm can do, and that either that algorithm can't do better or that no algorithm can do better. For instance, if you say "Sorting requires Θ(n(log n)) comparisons for worst-case input", then you're explaining that there is a sorting algorithm that uses O(n(log n)) comparisons for any input; and that for every sorting algorithm, there is an input that forces it to make Ω(n(log n)) comparisons.
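
(The Ω(n(log n)) half of that claim is the standard decision-tree counting argument; a compressed version is added here for reference: a comparison sort must distinguish all n! input orderings, and a binary decision tree of depth d has at most 2^d leaves.)

    % Depth d of any comparison-sort decision tree:
    2^d \ge n! \;\Rightarrow\; d \ge \log_2(n!) \ge \log_2\!\big((n/2)^{n/2}\big) = \tfrac{n}{2}\log_2\tfrac{n}{2} = \Omega(n \log n)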

Now, one narrow reason that people use O instead of Ω is to drop disclaimers about worst or average cases. If you say "sorting requires O(n(log n)) comparisons", then the statement still holds true for favorable input. Another narrow reason is that even if one algorithm to do X takes time Θ(f(n)), another algorithm might do better, so you can only say that the complexity of X itself is O(f(n)).

However, there is a broader reason that people informally use O. At a human level, it's a pain to always make two-sided statements when the converse side is "obvious" from context. Since I'm a mathematician, I would ideally always be careful to say "I will take an umbrella if and only if it rains" or "I can juggle 4 balls but not 5", instead of "I will take an umbrella if it rains" or "I can juggle 4 balls". But the other halves of such statements are often obviously intended or obviously not intended. It's just human nature to be sloppy about the obvious. It's confusing to split hairs.

Unfortunately, in a rigorous area such as math or theory of algorithms, it's also confusing not to split hairs. People will inevitably say O when they should have said Ω or Θ. Skipping details because they're "obvious" always leads to misunderstandings. There is no solution for that.

傲影 2024-09-16 02:47:32

Because my keyboard has an O key.
It does not have a Θ or an Ω key.

I suspect most people are similarly lazy and use O when they mean Θ because it's easier to type.

优雅的叶子 2024-09-16 02:47:32

One reason why big O gets used so much is kind of because it gets used so much. A lot of people see the notation and think they know what it means, then use it (wrongly) themselves. This happens a lot with programmers whose formal education only went so far - I was once guilty myself.

Another is because it's easier to type a big O on most non-Greek keyboards than a big theta.

But I think a lot is because of a kind of paranoia. I worked in defence-related programming for a bit (and knew very little about algorithm analysis at the time). In that scenario, the worst-case performance is always what people are interested in, because that worst case might just happen at the wrong time. It doesn't matter if the actual probability of that happening is, e.g., far less than the probability of all members of a ship's crew suffering a sudden fluke heart attack at the same moment - it could still happen.

Though of course a lot of algorithms have their worst case in very common circumstances - the classic example being inserting in-order into a binary tree to get what's effectively a singly-linked list. A "real" assessment of average performance needs to take into account the relative frequency of different kinds of input.
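
A minimal sketch of that classic degenerate case (illustrative Python; the names here are made up for the example, not taken from the answer):

    # Plain, unbalanced binary search tree. Inserting keys that are already in
    # order makes every new node a right child, so the "tree" is effectively a
    # singly-linked list and each insert costs Theta(n) instead of the O(log n) average.
    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(root, key):
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    def depth(root):
        if root is None:
            return 0
        return 1 + max(depth(root.left), depth(root.right))

    root = None
    for k in range(1, 101):        # worst case: in-order input
        root = insert(root, k)
    print(depth(root))             # 100 -- a chain of 100 nodes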

骷髅 2024-09-16 02:47:32

Bonus: why do people seemingly always use big-oh when talking informally?

Because in big-oh, this loop:

for i = 1 to n do
    something in O(1) that doesn't change n and i and isn't a jump

is O(n), O(n^2), O(n^3), O(n^1423424). big-oh is just an upper bound, which makes it easier to calculate because you don't have to find a tight bound.

The above loop is only big-theta(n) however.

What's the complexity of the sieve of Eratosthenes? If you said O(n log n) you wouldn't be wrong, but it wouldn't be the best answer either. If you said big-theta(n log n), you would be wrong.
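
A minimal sieve sketch for reference (illustrative Python, not part of the original answer):

    # Sieve of Eratosthenes up to n. The total marking work is roughly the sum
    # of n/p over primes p <= sqrt(n), which is Theta(n log log n): it is within
    # O(n log n), but it is not Theta(n log n).
    def sieve(n):
        is_prime = [True] * (n + 1)
        is_prime[0] = is_prime[1] = False
        p = 2
        while p * p <= n:
            if is_prime[p]:
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
            p += 1
        return [i for i, flag in enumerate(is_prime) if flag]

    print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]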

白龙吟 2024-09-16 02:47:32

Because there are algorithms whose best-case is quick, and thus it's technically a big O, not a big Theta.

Big O is an upper bound, big Theta is an equivalence relation.

不奢求什么 2024-09-16 02:47:32

There are a lot of good answers here but I noticed something was missing. Most answers seem to be implying that the reason why people use Big O over Big Theta is a difficulty issue, and in some cases this may be true. Often a proof that leads to a Big Theta result is far more involved than one that results in Big O. This usually holds true, but I do not believe this has a large relation to using one analysis over the other.

When talking about complexity we can say many things. Big O time complexity is just telling us what an algorithm is guaranteed to run within, an upper bound. Big Omega is far less often discussed and tells us the minimum time an algorithm is guaranteed to run, a lower bound. Now Big Theta tells us that both of these numbers are in fact the same for a given analysis. This tells us that the application has a very strict run time that can only deviate by a value asymptotically less than our complexity. Many algorithms simply do not have upper and lower bounds that happen to be asymptotically equivalent.

So as to your question, using Big O in place of Big Theta would technically always be valid, while using Big Theta in place of Big O would only be valid when Big O and Big Omega happened to be equal. For instance, insertion sort has a time complexity of Big O of n^2, but its best case puts its Big Omega at n. In this case it would not be correct to say that its time complexity is Big Theta of n or of n^2, as those are two different bounds and should be treated as such.
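
A small sketch of that asymmetry (illustrative Python that counts comparisons; the code and numbers are added here for illustration, not taken from the answer):

    # Insertion sort with a comparison counter.
    # Already-sorted input:  about n-1 comparisons   (the Big Omega(n) best case).
    # Reverse-sorted input:  about n^2/2 comparisons (the Big O(n^2) worst case).
    def insertion_sort(values):
        a = list(values)
        comparisons = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                comparisons += 1
                if a[j] <= key:
                    break
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a, comparisons

    _, best = insertion_sort(range(100))           # sorted input  -> 99 comparisons
    _, worst = insertion_sort(range(100, 0, -1))   # reverse input -> 4950 comparisons
    print(best, worst)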

国产ˉ祖宗 2024-09-16 02:47:32

I have seen Big Theta, and I'm pretty sure I was taught the difference in school. I had to look it up though. This is what Wikipedia says:

Big O is the most commonly used asymptotic notation for comparing functions, although in many cases Big O may be replaced with Big Theta Θ for asymptotically tighter bounds.

Source: Big O Notation#Related asymptotic notation

I don't know why people use Big-O when talking formally. Maybe it's because most people are more familiar with Big-O than Big-Theta? I had forgotten that Big-Theta even existed until you reminded me. Although now that my memory is refreshed, I may end up using it in conversation. :)
