Big-O running time of some methods

Published 2024-10-06 23:44:20

Ok, these are all pretty simple methods, and there are a few of them, so I didnt want to just create multiple questions when they are all the same thing. BigO is my weakness. I just cant figure out how they come up with these answers. Is there anyway you can give me some insight into your thinking for analyzing running times of some of these methods? How do you break it down? How should I think when I see something like these? (specifically the second one, I dont get how thats O(1))
[image: the methods under analysis (not recovered)]


Comments (4)

我是有多爱你 2024-10-13 23:44:20

function f1:
  loop 3 times
    loop n times

Therefore O(3*n) which is effectively O(n).
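The pseudocode above can be sketched directly as an iteration counter (a reconstruction of the shape described, since the original code is only in the missing image):

```python
def f1_iterations(n):
    """Count the work done by f1's shape: an outer loop of 3
    wrapped around an inner loop of n. Total work is 3 * n."""
    count = 0
    for _ in range(3):       # outer loop: fixed 3 times
        for _ in range(n):   # inner loop: n times
            count += 1
    return count
```

`f1_iterations(1000)` returns 3000: the constant factor of 3 scales the line but doesn't change its growth class, which is why O(3*n) collapses to O(n).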


function f2:
  loop 50 times

O(50) is effectively O(1).

We know it will loop 50 times because it runs until n has been reduced to 0 in steps of n / 50. For that to happen it must iterate exactly 50 times, since (n / 50) * 50 = n.
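One loop shape consistent with this reasoning (an assumption, since the actual code is in the missing image) advances through n in increments of n / 50, so the iteration count is fixed no matter how large n gets:

```python
def f2_iterations(n):
    """Count iterations of a loop that advances by n // 50 per step.
    For n divisible by 50, the step size grows with n, so the count
    stays at exactly 50 regardless of input size: O(1)."""
    count = 0
    i = 0
    while i < n:
        i += n // 50   # step size is proportional to n
        count += 1
    return count
```

Note that for n < 50 the step n // 50 would be 0 and the loop would never terminate, the same corner case one of the answers here points out.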


function f3:
  loop n times
    loop n times

Therefore O(n^2).
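The same counter sketch for f3's shape shows where the n^2 comes from:

```python
def f3_iterations(n):
    """An inner loop of n nested inside an outer loop of n:
    n * n total iterations, i.e. O(n^2)."""
    count = 0
    for _ in range(n):       # outer: n times
        for _ in range(n):   # inner: n times per outer pass
            count += 1
    return count
```

Doubling n quadruples the count, the signature of quadratic growth.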


function f4:
  recurse n times

You know this because worst case is that n = high - low + 1. Disregard the +1.
That means that n = high - low.

To terminate,

arr[hi] * arr[low] > 10

Assume that this doesn't occur until low is incremented to the highest it can go (high).

This means n = high - 0 and we must recurse up to n times.
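A recursive sketch matching the shape described (the exact base case and array contents are assumptions, since the original code isn't available):

```python
def f4_calls(arr, lo, hi):
    """Count recursive calls. Recursion stops when arr[hi] * arr[lo] > 10
    or lo reaches hi; otherwise it recurses with lo + 1, so the worst
    case is hi - lo + 1 calls, i.e. O(n)."""
    if lo >= hi or arr[hi] * arr[lo] > 10:
        return 1
    return 1 + f4_calls(arr, lo + 1, hi)
```

With an all-zero array the product never exceeds 10, so lo walks all the way up to hi: the worst case. With large values the very first call terminates.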


function f5:
  loops ceil(log_2(n)) times

We know this because of the m/=2.

For example, let n=10. log_2(10) ≈ 3.32, the ceiling of which is 4.

10 / 2 = 5
5 / 2 = 2.5
2.5 / 2 = 1.25
1.25 / 2 = 0.625

In total, there are 4 iterations.
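The halving can be sketched as follows (using real division to match the trace above; with integer division the count differs only by a constant, which Big-O ignores):

```python
def f5_iterations(n):
    """Count how many times m can be halved before dropping to 1 or
    below: ceil(log2(n)) iterations, i.e. O(log n)."""
    count = 0
    m = n
    while m > 1:
        m /= 2       # the halving that makes this logarithmic
        count += 1
    return count
```

`f5_iterations(10)` returns 4, matching the trace; doubling n only adds one more iteration.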

同展鸳鸯锦 2024-10-13 23:44:20


You get an n^2 analysis when performing a loop within a loop, such as the third method.
However, the first method doesn't get an n^2 timing analysis, because the first loop is defined as running three times. This makes the timing for the first one 3n, but we don't care about constant factors for Big-O.

The second one introduces an interesting paradigm: despite the fact that you have a single loop, the timing analysis is still O(1). This is because if you were to chart the time it takes to perform this method, it wouldn't grow like O(n) would; the line stays flat, which becomes obvious for larger inputs.

For the fourth method, you have an O(n) timing because your recursive function call is passing lo + 1. This is similar to using a for loop and incrementing with lo++/++lo.

The last one has an O(log n) timing because you're dividing your variable by two. Just remember that anything that reminds you of a binary search will have a log n timing.

There is also another trick to timing analysis. Say you had a loop within a loop, and within each of the two loops you were reading lines from a file or popping elements off a stack. This would actually be only an O(n) method, because a file only has a certain number of lines you can read, and a stack only has a certain number of elements you can pop off.
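That trick can be demonstrated with a shared stack: nested loops that both consume it still do only O(n) total work (the even/odd condition splitting work between the loops is arbitrary, purely for illustration):

```python
def drain(stack):
    """Nested while-loops that both pop from the same stack.
    Every iteration of either loop removes one element, so the total
    iteration count equals the initial stack size: O(n), not O(n^2)."""
    ops = 0
    while stack:                                # outer loop pops one element
        stack.pop()
        ops += 1
        while stack and stack[-1] % 2 == 0:     # inner loop pops trailing evens
            stack.pop()
            ops += 1
    return ops
```

However the work is divided between the two loops, the total number of pops is bounded by the number of elements, which is why nesting alone doesn't imply n^2.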

陈甜 2024-10-13 23:44:20


The general idea of big-O notation is this: it gives a rough answer to the question "If you're given a set of N items, and you have to perform some operation repeatedly on these items, how many times will you need to perform this operation?" I say a rough answer, because it (most of the time) doesn't give a precise answer of "5*N+35", but just "N". It's like a ballpark. You don't really care about the precise answer, you just want to know how bad it will get when N gets large. So answers like O(N), O(N*N), O(logN) and O(N!) are typical, because they each represent sort of a "class" of answers, which you can compare to each other. An algorithm with O(N) will perform way better than an algorithm with O(N*N) when N gets large enough, no matter how lengthy the operation itself is.

So I break it down thus: First identify what the N will be. In the examples above it's pretty obvious - it's the size of the input array, because that determines how many times we will loop. Sometimes it's not so obvious, and sometimes you have multiple input data, so instead of just N you also get M and other letters (and then the answer is something like O(N*M*M)).

Then, when I have my N figured out, I try to identify the loop which depends on N. Actually, these two things often get identified together, as they are pretty much tied together.

And, lastly of course, I have to figure out how many iterations the program will make depending on N. And to make it easier, I don't really try to count them, just try to recognize the typical answers - O(1), O(N), O(N*N), O(logN), O(N!) or perhaps some other power of N. The O(N!) is actually pretty rare, because it's so inefficient, that implementing it would be pointless.

If you get an answer of something like N*N+N+1, then just discard the smaller ones, because, again, when N gets large, the others don't matter anymore. And ignore if the operation is repeated some fixed number of times. O(5*N) is the same as O(N), because it's the ballpark we're looking for.
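A quick numeric check of why the smaller terms get discarded:

```python
def exact_cost(n):
    """A hypothetical exact operation count: N*N + N + 1."""
    return n * n + n + 1

# As n grows, the lower-order terms vanish relative to n*n:
# the ratio exact_cost(n) / n^2 shrinks toward 1.
ratios = [exact_cost(n) / (n * n) for n in (10, 1000, 10**6)]
# so O(N*N + N + 1) is just O(N*N)
```

At n = 10 the extra terms still contribute about 11% of the total; at n = 10^6 they contribute about one part in a million, which is why Big-O throws them away.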

Added: As asked in the comments, here are the analysis of the first two methods:

The first one is easy. There are only two loops, the inner one is O(N), and the outer one just repeats that 3 times. So it's still O(N). (Remember - O(3N) = O(N)).

The second one is tricky. I'm not really sure about it. After looking at it for a while, I understood why it loops at most only 50 times. Since this is not dependent on N at all, it counts as O(1). However, if you were to pass it, say, an array of only 10 items, all positive, it would go into an infinite loop. That's O(∞), I guess. So which one is it? I don't know...

I don't think there's a formal way of determining the big-O number for an algorithm. It's like the halting problem. In fact, come to think of it, if you could universally determine the big-O for a piece of code, you could also determine if it ever halts or not, thus contradicting the halting problem. But that's just my musings.

Typically I just go by... dunno, sort of a "gut feeling". Once you "get" what the Big-O represents, it becomes pretty intuitive. But for complicated algorithms it's not always possible to determine. Take Quicksort for example. On average it's O(N*logN), but depending on the data it can degrade to O(N*N). The questions you'll get on the test though should have clear answers.

半岛未凉 2024-10-13 23:44:20

The second one is 50 because big O is a function of the length of the input. That is, if the input size changes from 1 million to 1 billion, the runtime should increase by 1000 if the function is O(N), and by 1 million if it's O(N^2). However, the second function runs in time 50 regardless of the input length, so it's O(1). Technically it would be O(50), but constants don't matter for big O.
