Order of an algorithm's speed

Published 2024-07-13 01:01:38 · 299 characters · 9 views · 0 comments

Sometimes I get totally fooled trying to estimate an algorithm's speed with O(x) notation. I mean, I can really point out when the order is O(n) or O(m×n), but for those that are O(lg(n)) or O(C^n) I think I am missing something there...
So, what are your tips and tricks for an easy estimate from a quick look at an algorithm?

As an example of what I am looking for, here are some for the easy ones (these may be wrong, but I'm trying my best):

  • O(n): a simple loop from 1 to n (or several of them, but not nested).
  • O(m×n): one loop nested inside another, where the limits are m and n.

Thanks in advance.


Comments (8)

幸福丶如此 2024-07-20 01:01:38

Recursive divide-and-conquer algorithms are often O(log N). An algorithm that loops over a divide-and-conquer step would be O(N log N).

彡翼 2024-07-20 01:01:38

Here's a blog post that might help:

The cost of breaking things down and putting them back together

That post explains the "master theorem" for working with big-O orders.
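For reference, the standard statement of the master theorem (this is the textbook form, not quoted from the post): for recurrences of the shape

```latex
T(n) = a\,T\!\left(\tfrac{n}{b}\right) + f(n), \qquad a \ge 1,\ b > 1,
\qquad
T(n) =
\begin{cases}
\Theta\!\left(n^{\log_b a}\right) & \text{if } f(n) = O\!\left(n^{\log_b a - \varepsilon}\right) \text{ for some } \varepsilon > 0,\\[2pt]
\Theta\!\left(n^{\log_b a} \log n\right) & \text{if } f(n) = \Theta\!\left(n^{\log_b a}\right),\\[2pt]
\Theta\!\left(f(n)\right) & \text{if } f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right) \text{ and } a f(n/b) \le c f(n) \text{ for some } c < 1.
\end{cases}
```

Binary search fits with a = 1, b = 2, f(n) = Θ(1), giving Θ(log n); merge sort fits with a = 2, b = 2, f(n) = Θ(n), giving Θ(n log n).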

复古式 2024-07-20 01:01:38

O(lg(n)): if your problem gets smaller by some proportion of n (often n/2) at each step of your algorithm, and each step does a constant amount of work. Binary search is a good example, since each step cuts your problem size in half while doing a constant amount of work (calculate the midpoint and do one comparison).

Note that the n is on the top of that proportion. This is not the same as your problem size reducing by 1/n at each step. :)

坏尐絯℡ 2024-07-20 01:01:38

If you're looking for a quick way to estimate the runtime of an algorithm, the other answers are good. If you want a more elaborate answer, I suggest you look at the "Master theorem". The German Wikipedia article has a nice table for it.

Edit: John D. Cook has written a nice recap of the master theorem; see the link in his answer.

錯遇了你 2024-07-20 01:01:38

The Wikipedia article on Big O notation has a nice chart of the orders of common functions.

梦境 2024-07-20 01:01:38


The asymptotic complexity of algorithms is important in practice, and here are some rules of thumb I use when I review my own or other people's code. Usually practical computer algorithms are functions of many variables and nontrivial data structures, but let us assume (just for illustration) that our algorithm f basically takes a single integer X as its argument, and we want to find the asymptotic complexity of f in terms of X. Suppose f(0) is trivial. Then, in general:

  • Every nested loop from 0 to X adds an exponent to X, so two loops (one nested inside the other) give X**2 (quadratic).
  • If f(X) calls f(X-1) tail-recursively, it usually corresponds to iteration, i.e. a single outer loop (O(X)).
  • I have seen routines that the author intended to be iterative, but where there is both an iteration from 0..X AND a tail-recursive call to X-1; these result in quadratic behavior (O(X**2)).
  • If f(X) calls f(X-1) twice or more, it results in an exponential algorithm, and you get O(2**X) from that.
  • If f(X) calls f(X/2) twice, it corresponds complexity-wise to a single iteration (it is a divide-and-conquer algorithm). (It results in O(X log X) or O(X) depending on the details, but I find I can think of it as a single iteration anyway.)
  • If f uses any properly implemented ordered data structure (ordered set, priority heap, etc.) and the algorithm adds roughly X objects to it, the individual operations are O(log X). So if a constant number of data-structure operations take place in each iteration of a loop, say, you get O(X * log X).
  • If the ordered data structure is improperly implemented, you could get O(X) instead of O(log X) for the individual operations.

Some special cases:

  • Algorithms that grow strings or memory areas by appending to them incur, in many languages, O(X**2) overhead, such as

    for (i = 0; i < X; i++) { s += "foo"; } // quadratic
    
  • This typical nested loop also has X**2 overhead:

    for (i = 0; i < X; i++) { for (j = 0; j < i; j++) { ... } } // quadratic
    
  • C++ STL containers like std::set and std::map have O(log X) overheads for almost all operations.

  • strlen(s) and other similar counting algorithms have O(X) overheads when they return X.
  • memcpy etc. are O(X).
  • There are complexity-wise dangerous operations, such as erasing an element from a linked list by equality comparison, which are O(X) or worse.
  • When using template-based containers, make sure that the comparison, ordering, etc. operators are fast and have no hidden complexity factors.
  • If you use reference counting, dropping a reference can be a worst-case O(X) operation if you drop the last reference to a linked list of references whose length is X.
  • Copying complex data structures in object-oriented languages can yield strange asymptotic complexities if the routines that copy objects are nontrivial and e.g. update global object sets.

Just my 2 cents! Hope this helps!

人间☆小暴躁 2024-07-20 01:01:38

Usually something like O(log N) comes about because the data has size N but is organized, e.g., in a tree whose depth is log N. If the typical search goes from the root to a leaf (in the worst case), then it is easy to see that the algorithm will be O(log N).

There are no hard and fast rules - you just have to look at each case, figure out the worst-case scenario, and calculate what its cost would be.

傾旎 2024-07-20 01:01:38

I am not a big fan of answering my own questions, but I found this today and it reminded me of this question.

http://bigocheatsheet.com/
