Difference between Big-O and little-o notation

Published on 2024-08-03 03:19:45


What is the difference between Big-O notation O(n) and Little-O notation o(n)?


Comments (5)

晨与橙与城 2024-08-10 03:19:45


f ∈ O(g) says, essentially

For at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) <= k g(x) holds for all x > a.

Note that O(g) is the set of all functions for which this condition holds.

f ∈ o(g) says, essentially

For every choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) < k g(x) holds for all x > a.

Once again, note that o(g) is a set.

In Big-O, it is only necessary that you find a particular multiplier k for which the inequality holds beyond some minimum x.

In Little-o, it must be that there is a minimum x after which the inequality holds no matter how small you make k, as long as it is not negative or zero.

These both describe upper bounds, although somewhat counter-intuitively, Little-o is the stronger statement. There is a much larger gap between the growth rates of f and g if f ∈ o(g) than if f ∈ O(g).

One illustration of the disparity is this: f ∈ O(f) is true, but f ∈ o(f) is false. Therefore, Big-O can be read as "f ∈ O(g) means that f's asymptotic growth is no faster than g's", whereas "f ∈ o(g) means that f's asymptotic growth is strictly slower than g's". It's like <= versus <.
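
As a quick worked instance of the two definitions (my own example, not from the original answer), take f(x) = 3x:

  • 3x ∈ O(x): choose k = 4 and a = 0; then 0 <= 3x <= 4x holds for all x > 0.
  • 3x ∉ o(x): for the single choice k = 1, the inequality 3x < 1·x fails for every x > 0, so no cutoff a exists; one bad k is enough to break the "for every k" requirement.
  • 3x ∈ o(x²): for every k > 0, choose a = 3/k; then for all x > a we have kx > 3, and hence 0 <= 3x < kx².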

More specifically, if the value of g(x) is a constant multiple of the value of f(x), then f ∈ O(g) is true. This is why you can drop constants when working with big-O notation.

However, for f ∈ o(g) to be true, g must grow strictly faster than every constant multiple of f (for polynomials, that means a higher power of x), so the relative separation between f(x) and g(x) must actually grow as x gets larger.

To use purely math examples (rather than referring to algorithms):

The following are true for Big-O, but would not be true if you used little-o:

  • x² ∈ O(x²)
  • x² ∈ O(x² + x)
  • x² ∈ O(200 * x²)

The following are true for little-o:

  • x² ∈ o(x³)
  • x² ∈ o(x!)
  • ln(x) ∈ o(x)

Note that if f ∈ o(g), this implies f ∈ O(g). For example, x² ∈ o(x³), so it is also true that x² ∈ O(x³). (Again, think of O as <= and o as <.)
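
To make the examples above tangible, here is a small numeric sketch (my addition, not part of the original answer) that prints the ratio f(x)/g(x) at increasing x: for the Big-O-only pairs it settles at a nonzero constant, while for the little-o pairs it heads to 0. (The x! example is omitted because the factorial is impractical to evaluate at these sizes.)

```python
import math

# Pairs (f, g) drawn from the examples above.
cases = [
    (lambda x: x**2, lambda x: x**2,       "x^2 vs x^2       (O only)"),
    (lambda x: x**2, lambda x: x**2 + x,   "x^2 vs x^2 + x   (O only)"),
    (lambda x: x**2, lambda x: 200 * x**2, "x^2 vs 200*x^2   (O only)"),
    (lambda x: x**2, lambda x: x**3,       "x^2 vs x^3       (o and O)"),
    (lambda x: math.log(x), lambda x: x,   "ln(x) vs x       (o and O)"),
]

for f, g, label in cases:
    # If f is in o(g), the ratio tends to 0; if f is in O(g) but not
    # o(g), it approaches a nonzero constant instead.
    ratios = [f(x) / g(x) for x in (10, 10**3, 10**6)]
    print(label + ": " + ", ".join(f"{r:.2e}" for r in ratios))
```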

风月客 2024-08-10 03:19:45


Big-O is to little-o as ≤ is to <. Big-O is an inclusive upper bound, while little-o is a strict upper bound.

For example, the function f(n) = 3n is:

  • in O(n²), o(n²), and O(n)
  • not in O(lg n), o(lg n), or o(n)

Analogously, the number 1 is:

  • ≤ 2, < 2, and ≤ 1
  • not ≤ 0, < 0, or < 1

Here's a table, showing the general idea:

[Image: a table relating each notation to a limit of f(n)/g(n)]
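
The table itself is an image; what it depicts is the standard correspondence between these notations and the limit of f(n)/g(n) as n → ∞ (this reconstruction is mine, written with lim sup / lim inf so it agrees with the caveat below, and assuming g(n) is eventually positive):

  • f ∈ o(g)  ⇔  lim f(n)/g(n) = 0           (analogue of <)
  • f ∈ O(g)  ⇔  lim sup f(n)/g(n) < ∞       (analogue of ≤)
  • f ∈ Θ(g)  ⇔  f ∈ O(g) and f ∈ Ω(g)       (analogue of =)
  • f ∈ Ω(g)  ⇔  lim inf f(n)/g(n) > 0       (analogue of ≥)
  • f ∈ ω(g)  ⇔  lim f(n)/g(n) = ∞           (analogue of >)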

(Note: the table is a good guide, but its limit definition should use the limit superior (lim sup) rather than the ordinary limit. For example, 3 + (n mod 2) oscillates between 3 and 4 forever. It's in O(1) despite not having an ordinary limit, because it still has a lim sup: 4.)

I recommend memorizing how the Big-O notation converts to asymptotic comparisons. The comparisons are easier to remember, but less flexible because you can't say things like n^O(1) = P.

自我难过 2024-08-10 03:19:45


I find that when I can't conceptually grasp something, thinking about why one would use X is helpful to understand X. (Not to say you haven't tried that, I'm just setting the stage.)

Stuff you know: A common way to classify algorithms is by runtime, and by citing the big-Oh complexity of an algorithm, you can get a pretty good estimation of which one is "better" -- whichever has the "smallest" function in the O! Even in the real world, O(N) is "better" than O(N²), barring silly things like super-massive constants and the like.

Let's say there's some algorithm that runs in O(N). Pretty good, huh? But let's say you (you brilliant person, you) come up with an algorithm that runs in O(N/loglogloglogN). YAY! It's faster! But you'd feel silly writing that over and over again when you're writing your thesis. So you write it once, and you can say "In this paper, I have proven that algorithm X, previously computable in time O(N), is in fact computable in o(N)."

Thus, everyone knows that your algorithm is faster --- by how much is unclear, but they know it's faster. Theoretically. :)

以酷 2024-08-10 03:19:45


In general

Asymptotic notation is something you can understand as: how do functions compare when zooming out? (A good way to test this is simply to use a tool like Desmos and play with your mouse wheel). In particular:

  • f(n) ∈ o(n) means: at some point, the more you zoom out, the more f(n) will be dominated by n (it will progressively diverge from it).
  • g(n) ∈ Θ(n) means: at some point, zooming out will not change how g(n) compares to n (if we removed the ticks from the axes, you couldn't tell the zoom level).

Finally, h(n) ∈ O(n) means that the function h can (roughly speaking) fall in either of these two categories: it can either look a lot like n, or it can become smaller and smaller relative to n as n increases. In particular, both f(n) and g(n) above are also in O(n).

I think this Venn diagram (adapted from this course) could help:

[Image: Venn diagram of o(n), Θ(n), and O(n)]
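
In set terms (my reading of the missing diagram, not from the original answer): o(n) and Θ(n) are disjoint, and both are subsets of O(n). They don't quite cover O(n), either: a function like n·(1 + sin n) is in O(n), but because it oscillates between 0 and 2n it is in neither o(n) nor Θ(n).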

It's exactly the same as what we use for comparing numbers:

[Image: the analogous Venn diagram for <, =, and ≤ on numbers]

In computer science

In computer science, people will usually prove that a given algorithm admits both an upper bound O and a lower bound Ω. When both bounds meet, that means we have found an asymptotically optimal algorithm for that particular problem, Θ.

For example, if we prove that the complexity of an algorithm is both in O(n) and Ω(n), it implies that its complexity is in Θ(n). (That's the definition of Θ, and it more or less translates to "asymptotically equal".) This also means that no algorithm can solve the given problem in o(n). Again, that roughly says "this problem can't be solved in (strictly) fewer than n steps".

Usually, o is used in lower-bound proofs to show a contradiction. For example:

Suppose algorithm A can find the min value in an array of size n in o(n) steps. Since A ∈ o(n), it can't inspect all items from the input. In other words, there is at least one item x which A never saw. Algorithm A can't tell the difference between two similar input instances where only x's value changes. If x is the minimum in one of these instances and not in the other, then A will fail to find the minimum on (at least) one of them. In other words, finding the minimum in an array is in Ω(n) (no algorithm in o(n) can solve the problem).
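
A toy sketch of this adversary argument (my own illustration, not from the original answer): any procedure that skips even one position can be fooled by changing the value at that position.

```python
def min_skipping_last(arr):
    """A deliberately broken 'sublinear' minimum: it never reads arr[-1]."""
    return min(arr[:-1])

# Two inputs that differ only in the position the algorithm never reads.
a = [5, 3, 7, 9]
b = [5, 3, 7, 1]  # the skipped item is now the true minimum

print(min_skipping_last(a), min(a))  # 3 3 -> correct, by luck
print(min_skipping_last(b), min(b))  # 3 1 -> wrong: it never saw the 1
```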

Details about lower/upper bound meanings

An upper bound of O(n) simply means that even in the worst case, the algorithm will terminate in at most n steps (ignoring all constant factors, both multiplicative and additive). A lower bound of Ω(n) is a statement about the problem itself: it says that we built some example(s) where the given problem couldn't be solved by any algorithm in fewer than n steps (ignoring multiplicative and additive constants). The number of steps is at most n and at least n, so this problem's complexity is "exactly n". Instead of saying "ignoring constant multiplicative/additive factors" every time, we just write Θ(n) for short.

甜宝宝 2024-08-10 03:19:45


Big-O notation has a companion called small-o notation. Big-O notation says that one function is asymptotically no more than another. To say that one function is asymptotically less than another, we use small-o notation. The difference between the big-O and small-o notations is analogous to the difference between <= (less than or equal to) and < (less than).
