Why is O(1) != O(log(n))? for n = [Integer, Long, ...]

Posted on 2024-10-08 07:53:24 · 140 characters · 8 views · 0 comments

For example, say n = Integer.MAX_VALUE or 2^123; then O(log(n)) = 32 or 123, a small integer. Isn't it O(1)?

What is the difference? I think the reason is that O(1) is constant but O(log(n)) is not. Any other ideas?
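To make the question concrete, here is a small illustrative sketch (Python, not from the question) showing that log2(n) is indeed tiny for machine-bounded n, yet still depends on n, which a true constant would not:

```python
import math

# log2(n) for inputs bounded by common fixed-size types: small numbers,
# but they still grow with n -- a genuine constant would not.
for bits in (31, 63, 123):   # ~Integer.MAX_VALUE, ~Long.MAX_VALUE, and 2^123
    n = 2 ** bits
    print(f"n = 2^{bits}: log2(n) = {int(math.log2(n))}")
# Squaring n doubles log2(n); an O(1) cost would stay put.
```

The printed values (31, 63, 123) are all small, which is exactly what makes the question tempting; the answers below explain why they still are not constants.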


Comments (8)

神魇的王 2024-10-15 07:53:24

If n is bounded above, then complexity classes involving n make no sense. There is no such thing as "in the limit as 2^123 approaches infinity", except in the old joke that "a pentagon approximates a circle, for sufficiently large values of 5".

Generally, when analysing the complexity of code, we pretend that the input size isn't bounded above by the resource limits of the machine, even though it is. This does lead to some slightly odd things going on around log n, since if n has to fit into a fixed-size int type, then log n has quite a small bound, so the bound is more likely to be useful/relevant.

So sometimes, we're analysing a slightly idealised version of the algorithm, because the actual code written cannot accept arbitrarily large input.

For example, your average quicksort formally uses Theta(log n) stack in the worst case, obviously so with the fairly common implementation that call-recurses on the "small" side of the partition and loop-recurses on the "big" side. But on a 32-bit machine you can in fact arrange to use a fixed-size array of about 240 bytes to store the "todo list", which might be less than some other function you've written based on an algorithm that formally has O(1) stack use. The morals are that implementation != algorithm, complexity doesn't tell you anything about small numbers, and any specific number is "small".

If you want to account for bounds, you could say that, for example, your code to sort an array is O(1) running time, because the array has to be below the size that fits in your PC's address space, and hence the time to sort it is bounded. However, you will fail your CS assignment if you do, and you won't be providing anyone with any useful information :-)
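The implementation pattern this answer describes, recursing on the smaller side of the partition and looping on the larger side so the call stack never holds more than about log2(n) frames, can be sketched like this (illustrative Python, not code from the answer):

```python
def partition(a, lo, hi):
    # Lomuto partition around the last element of a[lo..hi].
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo=0, hi=None):
    """In-place quicksort that call-recurses on the *smaller* partition
    and loops on the larger one, bounding stack depth by ~log2(n)."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort(a, lo, p - 1)  # recurse into the smaller left side
            lo = p + 1               # loop on the larger right side
        else:
            quicksort(a, p + 1, hi)  # recurse into the smaller right side
            hi = p - 1               # loop on the larger left side
```

Each recursive call handles at most half of the current range, so the depth is bounded by log2(n); the fixed-size "todo list" array the answer mentions is the same idea with the recursion made explicit.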

以酷 2024-10-15 07:53:24

Obviously if you know that the input will always have a fixed number of elements, the algorithm will always run in constant time. Big-O notation is used to denote worst-case running time, which describes the limit as the number of elements grows infinitely large.

不知在何时 2024-10-15 07:53:24

The difference is that n isn't fixed. The idea behind Big-O notation is to get an idea of how the size of the input affects the running time (or memory usage). So if an algorithm always takes the same amount of time, whether n = 1 or n = Integer.MAX_VALUE, we say it is O(1). If the algorithm takes a unit of time longer each time the input size doubles, then we say it is O(log n).

Edit: to answer your specific question on the difference between O(1) and O(log n), I'll give you an example. Let's say we want an algorithm that will find the min element in an unsorted array. One approach is to go through each element and keep track of the current min. Another approach is to sort the array and then return the first element.

The first algorithm is O(n), and the second algorithm is O(n log n). So let's say we start with an array of 16 elements. The first algorithm will run in time 16, the second algorithm will run in time 16*4. If we increase it to 17, then it becomes 17 and 17*4. We might naively say that the second algorithm takes 4 times as long as the first algorithm (if we treat the log n factor as constant).

But let's look at what happens when our array contains 2^32 elements. Now the first algorithm takes 2^32 time to complete, whereas the second algorithm takes 32*2^32 time to complete: 32 times as long. Yes, it's a small difference relative to n, but it is still a difference. If the first algorithm takes 1 minute, the second algorithm will take over half an hour!
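The arithmetic in this answer can be sketched as a step-count comparison (illustrative Python; it counts abstract steps rather than timing real code):

```python
import math

def steps_linear(n):
    """Scan for the min: about n steps."""
    return n

def steps_sort_first(n):
    """Sort, then take the first element: about n * log2(n) steps."""
    return n * math.log2(n)

for n in (16, 2 ** 32):
    ratio = steps_sort_first(n) / steps_linear(n)   # the ratio is just log2(n)
    print(f"n = {n}: sort-based approach needs ~{ratio:.0f}x the steps")
```

The ratio grows from 4x at n = 16 to 32x at n = 2^32, which is the "small but still a difference" point the answer is making.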

胡渣熟男 2024-10-15 07:53:24

I think you will get a better idea if it is called O(n^0).

It is a scaling function depending on the input variable N. It is a function, not a number; you should never assume any particular number for the variable N.

It is like saying that a function f(x) is 3 because f(100) = 3, which is wrong. It is a function, not any particular number. A constant function f(x) = 1 is still a function, and it will never equal another function g(x) = N, i.e. g(x) != f(x).
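The function-versus-number distinction can be made concrete with a tiny sketch (illustrative Python, not from the answer):

```python
# A constant function and a growing function are different objects,
# even if they happen to agree at some particular point.
def f(x):
    return 1      # the O(1) "shape": the same value for every x

def g(x):
    return x      # the O(n) "shape": grows with x

print(f(1) == g(1))        # they coincide at x = 1 ...
print(f(1000) == g(1000))  # ... but they are not the same function
```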

幽蝶幻影 2024-10-15 07:53:24

It's the growth rate that you want to look at. O(1) implies no growth at all, while O(log n) does have growth. Even though the growth is small, it is still growth.

千鲤 2024-10-15 07:53:24

You’re not thinking big enough. Any algorithm that runs on a computer will either run forever or terminate after some small number of steps — since the computer is only a finite state machine, you cannot write algorithms that run for an arbitrary amount of time and then terminate. By that argument, Big-O notation is only theoretical and has no purpose in a real-life computer program. Even O(2^n) hits an upper limit at O(2^INT_MAX), which is equivalent to O(1).

Realistically, though, Big-O can help you out if you know the constant factors. Even if an algorithm has an upper bound of O(log n), and n can have 32 bits, that could mean the difference between a request taking 1 second and 32 seconds.

月下客 2024-10-15 07:53:24

Big-O shows how running time (or memory, etc.) changes as the size of the problem changes.
When the problem gets 10 times bigger, an O(n) solution takes 10 times as long, an O(log(n)) solution takes a bit longer, and an O(1) solution takes the same time: O(1) means 'changes as fast as the constant 1', but constants don't change.

Familiarize yourself with the big-O notation in a bit more detail.
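The 10-times comparison above can be checked numerically (illustrative Python sketch):

```python
import math

def growth(cost, n, factor=10):
    """How much longer `cost` takes when the input grows `factor`-fold."""
    return cost(factor * n) / cost(n)

n = 1_000_000
print(growth(lambda m: 1.0, n))   # O(1): exactly 1.0, no change at all
print(growth(math.log2, n))       # O(log n): only a bit more than 1
print(growth(lambda m: m, n))     # O(n): exactly 10.0
```

The middle case is the interesting one: multiplying n by 10 only adds log2(10), about 3.3, to log2(n), so the relative increase shrinks as n grows.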

So尛奶瓶 2024-10-15 07:53:24

There is a reason why you leave "O(n)" in, and consider dropping "O(log n)". They both are "constants": the former is less than 2^32, and the latter is less than 32. But you nevertheless have a natural feeling that you can't call O(n) O(1).

However, if log(n) < 32, it means that an O(n*log n) algorithm works thirty-two times slower than its O(n) version. Big enough to keep writing the "log n"s?
