Time complexity of an algorithm that runs 1 + 2 + ... + n times
To start off, I found this Stack Overflow question that references the time complexity as O(n^2), but it doesn't answer why O(n^2) is the time complexity; instead it asks for an example of such an algorithm. From my understanding, an algorithm that runs 1+2+3+...+n times would be less than O(n^2). For example, take this function
function(n: number) {
    let sum = 0;
    for (let i = 0; i < n; i++) {
        for (let j = 0; j < i + 1; j++) {
            sum += 1;
        }
    }
    return sum;
}
Here are some input and return values
num | sum |
---|---|
1 | 1 |
2 | 3 |
3 | 6 |
4 | 10 |
5 | 15 |
6 | 21 |
7 | 28 |
From this table you can see that this algorithm runs in less than O(n^2) time but more than O(n). I also realize that an algorithm that runs 1+(1+2)+(1+2+3)+...+(1+2+3+...+n) times is true O(n^2) time complexity. For the algorithm stated in the problem, do we just say it runs in O(n^2) because it runs more than O(log n) times?
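As a quick check on the "less than O(n^2)" intuition: the nested loops run exactly 1 + 2 + ... + n = n(n+1)/2 times, and the ratio of that count to n^2 settles near 1/2 rather than shrinking toward zero, which is why the whole thing is still O(n^2). A small sketch (the function name `iterations` is mine):

```typescript
// The nested loops execute 1 + 2 + ... + n = n * (n + 1) / 2 times in total.
// The ratio of that count to n^2 tends toward the nonzero constant 1/2,
// which is exactly what makes the algorithm Theta(n^2), and hence O(n^2).
function iterations(n: number): number {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < i + 1; j++) {
      sum += 1;
    }
  }
  return sum;
}

for (const n of [10, 100, 1000]) {
  const ratio = iterations(n) / (n * n);
  console.log(n, ratio); // tends toward 0.5 as n grows
}
```

Because the ratio approaches a constant rather than 0, dividing by anything smaller than n^2 (say, n·log n) would make the ratio blow up, so no smaller complexity class fits.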
It's known that 1 + 2 + ... + n has a short form of n * (n + 1) / 2. Even if you didn't know that, you have to consider that, when i gets to n, the inner loop runs at most n times. So you have exactly n iterations of the outer loop (over i), each running at most n times in the inner loop (over j), so the O(n^2) becomes more apparent.

I agree that the complexity would be exactly n^2 if the inner loop also ran from 0 to n, so you have your reasons to think that a loop i from 0 to n and another loop j from 0 to i has to perform better, and that's true; but with big-O notation you're actually measuring the degree of the algorithm's complexity, not the exact number of operations.

P.S. O(log n) is usually achieved when you split the main problem into sub-problems.
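The short form mentioned above can be written out directly; assuming a hypothetical name `closedForm`, it computes the same value as the question's nested loops, but in constant time:

```typescript
// Closed form of 1 + 2 + ... + n: one multiplication and one division,
// so it runs in O(1) instead of the nested loops' O(n^2).
function closedForm(n: number): number {
  return (n * (n + 1)) / 2;
}

// Agrees with the table in the question, e.g. the last row:
console.log(closedForm(7)); // 28
```

This is also why the loop count is O(n^2): the dominant term of n(n+1)/2 is n^2/2, and big-O drops the constant factor 1/2.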
I think you should interpret the table differently. The O(N^2) complexity says that if you double the input N, the runtime should quadruple (take 4 times as long). In this case, function(n: number) returns a number mirroring its runtime; I use f(N) as shorthand for it.
So say N goes from 1 to 2, which means the input has doubled (2/1 = 2). The runtime then has gone from f(1) to f(2), which means it has increased f(2)/f(1) = 3/1 = 3 times. That is not 4 times, but the Big-O complexity measure is asymptotic, dealing with the situation where N approaches infinity. If we test another input doubling from the table, we have f(6)/f(3) = 21/6 = 3.5. It is already closer to 4.
Let us now stray outside the table and try more doublings with bigger N. For example we have f(200)/f(100) = 20100/5050 = 3.980 and f(5000)/f(2500) = 12502500/3126250 = 3.999. The trend is clear. As N approaches infinity, a doubled input tends toward a quadrupled runtime. And that is the hallmark of O(N^2).
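The doubling argument above is easy to check in code; a small sketch reproducing the ratios quoted in this answer:

```typescript
// f(N) = N * (N + 1) / 2 mirrors the runtime of the question's nested loops.
function f(n: number): number {
  return (n * (n + 1)) / 2;
}

// Doubling the input roughly quadruples f; the ratio approaches 4 as N grows.
for (const n of [3, 100, 2500, 1000000]) {
  console.log(n, f(2 * n) / f(n));
}
// f(200)/f(100)   = 20100/5050       = 3.980...
// f(5000)/f(2500) = 12502500/3126250 = 3.999...
```

The ratio never quite reaches 4 for finite N (the linear term N/2 drags it down slightly), but it gets arbitrarily close, which is the asymptotic behavior O(N^2) describes.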