Is there such a thing as "negative" Big-O complexity?

Published 2024-09-09 02:13:06


Possible Duplicate:
Are there any O(1/n) algorithms?

This just popped into my head for no particular reason, and I suppose it's a strange question. Are there any known algorithms or problems which actually get easier or faster to solve with larger input? I'm guessing that if there are, it wouldn't be for things like mutations or sorting; it would be for decision problems. Perhaps there's some problem where having a ton of input makes it easy to decide something, but I can't imagine what.

If there is no such thing as negative complexity, is there a proof that there cannot be? Or is it just that no one has found it yet?


Comments (7)

不回头走下去 2024-09-16 02:13:06


No, that is not possible. Since Big-O is supposed to be an approximation of the number of operations an algorithm performs in relation to its input size, it would not make sense to describe an algorithm as performing a negative number of operations.

The formal-definition section of the Wikipedia article actually defines Big-O notation in terms of positive real numbers. So there isn't even a proof, because per the formal definition the whole concept of Big-O has no meaning for negative real numbers.

Short answer: it's not possible because the definition says so.
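The definition referred to above can be written out explicitly; this is the standard textbook formulation (paraphrased, not quoted from the article):

```latex
% f is O(g) iff f is eventually bounded above by a positive multiple of g:
f(n) = O\bigl(g(n)\bigr)
  \iff
  \exists\, c > 0,\ \exists\, n_0 > 0 :\ \forall n \ge n_0,\quad f(n) \le c \cdot g(n).
% A running time f(n) >= 0 can only satisfy this when c * g(n) >= 0
% for all large n, so the bounding function g cannot be negative.
```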

能否归途做我良人 2024-09-16 02:13:06


Update
Just to make it clear, I'm answering this part of the question: Are there any known algorithms or problems which actually get easier or faster to solve with larger input?

As noted in the accepted answer here, there are no algorithms that work faster with bigger input:
Are there any O(1/n) algorithms?
Even an algorithm like sleep(1/n) has to spend time reading its input, so its running time has a lower bound.

In particular, the author references a relatively simple substring search algorithm:
http://en.wikipedia.org/wiki/Horspool

P.S. But using the term 'negative complexity' for such algorithms doesn't seem reasonable to me.
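For reference, the Horspool algorithm linked above can be sketched as follows; this is a minimal, illustrative Python version (the function name and structure are mine, not taken from the linked article):

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool substring search.

    On average the search gets *faster* as the pattern grows, because a
    mismatch lets us skip up to len(pattern) characters of the text at
    once. That makes it sublinear on average, but never O(1/n): the
    worst case is still O(len(text) * len(pattern)).
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Shift table: on a mismatch, slide the pattern so its last
    # occurrence of the aligned text character lines up; characters
    # not in the pattern allow a full shift of m.
    shift = {ch: m - i - 1 for i, ch in enumerate(pattern[:-1])}
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        pos += shift.get(text[pos + m - 1], m)
    return -1
```

The point the answer is making: a longer pattern allows bigger skips, so in a loose sense more input makes the search faster per character, yet total running time still grows with the text size.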

⒈起吃苦の倖褔 2024-09-16 02:13:06


Not really. O(1) is the best you can hope for.

The closest I can think of is language translation, which uses large datasets of phrases in the target language to match up smaller snippets from the source language. The larger the dataset, the better (and, to a certain extent, faster) the translation. But that's still not O(1).

桃气十足 2024-09-16 02:13:06


Thinking of an algorithm that executes in negative time is the same as thinking about time going backwards.

If a program starts executing at 10:30 AM and stops at 10:00 AM without passing through 11:00 AM, it has just executed in time = O(-1).

=]

Now, for the mathematical part:

If you can't come up with a sequence of actions that execute backwards in time (you never know... lol), the proof is quite simple:

positiveTime = O(-1) means:

positiveTime <= c * (-1), for some c > 0 and all n > n0 > 0

Consider the "c > 0" restriction.
We can't find a positive number that, multiplied by -1, results in another positive number.
Taking that into account, this is the result:

positiveTime <= negativeNumber, for all n > n0 > 0

Which proves that you can't have an algorithm with O(-1) complexity.

一花一树开 2024-09-16 02:13:06


Well, for many calculations like "given input A, return f(A)" you can "cache" the calculation results (store them in an array or map), which will make the calculation faster with a larger number of values, IF some of those values repeat.

But I don't think that qualifies as "negative complexity". In that case the fastest performance would probably count as O(1), worst-case performance would be O(N), and average performance would be somewhere in between.

This is somewhat applicable to sorting algorithms: some of them have O(N) best-case complexity and O(N^2) worst-case complexity, depending on the state of the data to be sorted.

I think that to have negative complexity, an algorithm would have to return its result before it has been asked to calculate it. That is, it would need to be connected to a time machine and be able to deal with the corresponding "grandfather paradox".

墨洒年华 2024-09-16 02:13:06


As with the other question about the empty algorithm, this question is a matter of definition rather than a matter of what is possible or impossible. It is certainly possible to think of a cost model for which an algorithm takes O(1/n) time. (That is not negative of course, but rather decreasing with larger input.) The algorithm can do something like sleep(1/n) as one of the other answers suggested. It is true that the cost model breaks down as n is sent to infinity, but n never is sent to infinity; every cost model breaks down eventually anyway. Saying that sleep(1/n) takes O(1/n) time could be very reasonable for an input size ranging from 1 byte to 1 gigabyte. That's a very wide range for any time complexity formula to be applicable.

On the other hand, the simplest, most standard definition of time complexity uses unit time steps. It is impossible for a positive, integer-valued function to have decreasing asymptotics; the smallest it can be is O(1).
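The cost-model point above can be made concrete with a toy operation count (the function names and the constant are made up for illustration): the "interesting" part of the work can shrink like O(1/n), but the total, which includes reading the n input items, is still bounded below by n.

```python
def extra_ops(n, c=1000):
    # Operations beyond reading the input: shrinks like O(1/n).
    # Computed as ceil(c / n), so it stays at least 1 while n <= c.
    return -(-c // n)

def total_ops(n, c=1000):
    # Total work under a unit-step cost model: n steps just to read
    # the input, plus the shrinking part. It can never drop below n,
    # which is the "lower bound" the other answers refer to.
    return n + extra_ops(n, c)
```

In other words, saying sleep(1/n) "takes O(1/n) time" describes only the extra_ops part under a particular cost model; the unit-step definition described above measures total_ops, whose asymptotics can never decrease.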

深居我梦 2024-09-16 02:13:06


I don't know if this quite fits, but it reminds me of BitTorrent. The more people downloading a file, the faster it goes for all of them.
