Factorial time algorithms and P/NP
It's quite easy to see that n! grows slower than almost anything to the N power (say, 100^N), and so, if a problem is considered NP-complete and one happened upon an n! algorithm that approximates the solution, one would do the Snoopy dance.
I have 2 questions about this situation:
- Would the n! algorithm be considered a solution in polynomial time? A factorial certainly doesn't appear to be a term raised to a power.
- If finding an n! solution means we have a decently fast algorithm, and since n! grows faster than 2^N, then does this mean that some NP-complete problems do not need heuristic/approximation algorithms (except for obscure cases)?
Of course, these two questions rely on the first paragraph being true; if I've erred, please let me know.
Comments (2)
No. Factorial time is not polynomial time. Polynomial time normally means an equation of the form O(N^k), where N = number of items being processed, and k = some constant. The important part is that the exponent is a constant -- you're multiplying N by itself some number of times that's fixed, not dependent on N itself. A factorial-complexity algorithm means the number of multiplications is not fixed -- the number of multiplications itself grows with N.
You seem to have the same problem here. N^2 would be polynomial complexity. 2^N would not be. Your basic precept is mistaken as well -- a factorial-complexity algorithm does not mean "we have a decently fast algorithm", at least as a general rule. If anything, the conclusion is rather the opposite: a factorial algorithm may be practical in a few special cases (i.e., where N is extremely small) but becomes impractical very quickly as N grows.
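To make the "fixed versus growing number of multiplications" distinction concrete, here is a minimal Python sketch (illustrative only; the function names are my own): raising N to a constant power k always costs the same number of multiplications, whereas N! costs a number of multiplications that grows with N itself.

```python
def power(n: int, k: int) -> int:
    """n**k via k - 1 multiplications: the count depends only on the constant k."""
    result = n
    for _ in range(k - 1):
        result *= n
    return result


def factorial(n: int) -> int:
    """n! via n - 1 multiplications: the count grows with n itself."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result


if __name__ == "__main__":
    for n in (5, 10, 20):
        # Polynomial work (here N^2) stays tame; factorial work explodes.
        print(n, power(n, 2), factorial(n))
```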
Let's try to put this in perspective. A binary search is O(log N). A linear search is O(N). In sorting, the "slow" algorithms are O(N^2), and the "advanced" algorithms are O(N lg N). A factorial-complexity algorithm is (obviously enough) O(N!).
Let's try to put some numbers to that, considering (for the moment) only 10 items. Each of these will be roughly how many times longer processing should take for 10 items instead of 1 item:
O(log N): 2
O(N): 10
O(N log N): 23
O(N^2): 100
O(N!): 3,628,800
For the moment I've cheated a bit, and used a natural logarithm instead of a base 2 logarithm, but we're only trying for ballpark estimates here (and the difference is a fairly small constant factor in any case).
As you can see, the growth rate for the factorial-complexity algorithm is much faster than for any of the others. If we extend it to 20 items, the difference becomes even more dramatic:
O(log N): 3
O(N): 20
O(N log N): 60
O(N^2): 400
O(N!): 2,432,902,008,176,640,000
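A short Python sketch reproduces these ballpark figures for both N = 10 and N = 20 (using the natural logarithm, as noted above, so the log-based rows are approximate):

```python
import math

def growth_table(n: int) -> dict:
    """Rough operation counts for each complexity class at a given n."""
    return {
        "O(log N)": round(math.log(n)),        # natural log, as in the text
        "O(N)": n,
        "O(N log N)": round(n * math.log(n)),
        "O(N^2)": n * n,
        "O(N!)": math.factorial(n),
    }

for n in (10, 20):
    print(f"N = {n}")
    for name, count in growth_table(n).items():
        print(f"  {name}: {count:,}")
```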
The growth rate for N! is so fast that factorial algorithms are pretty much guaranteed to be impractical except when the number of items involved is known to be quite small. For grins, let's assume that the basic operations for the processes above can each run in a single machine clock cycle. Just for the sake of argument (and to keep the calculations simple), let's assume a 10 GHz CPU. So, the base is that processing one item takes 0.1 ns. In that case, with 20 items:
O(log N) = 0.3 ns
O(N) = 2 ns
O(N log N) = 6 ns
O(N^2) = 40 ns
O(N!) = 7.7 years.
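Those wall-clock figures are easy to reproduce under the same assumptions (one basic operation per clock cycle on a hypothetical 10 GHz CPU, i.e. 0.1 ns per operation):

```python
import math

NS_PER_OP = 0.1   # assumed: one operation per cycle at 10 GHz
n = 20

ops = {
    "O(log N)": math.log(n),
    "O(N)": n,
    "O(N log N)": n * math.log(n),
    "O(N^2)": n * n,
    "O(N!)": math.factorial(n),
}

for name, count in ops.items():
    ns = count * NS_PER_OP
    if ns < 1e9:
        print(f"{name}: {ns:.1f} ns")
    else:
        # Convert nanoseconds -> seconds -> days -> years.
        print(f"{name}: {ns / 1e9 / 86_400 / 365.25:.1f} years")
```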
It's quite easy to see that the factorial is (approximately) exponential in behaviour.
It can be (very crudely) approximated as n^n (more specifically, sqrt(2πn)·(n/e)^n).
So if you have found any specific M where you think M^n is a good approximation, you're (probably) wrong. 269! is larger than 100^269, and since from that point on n! is multiplied by factors larger than 100 at each step, it will continue to grow faster than 100^n.
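A quick numeric check of both claims (Stirling's approximation, and the point where n! overtakes 100^n) in Python:

```python
import math

def stirling(n: int) -> float:
    """Stirling's approximation: sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# The approximation is already close for modest n.
for n in (10, 20):
    print(n, math.factorial(n), f"{stirling(n):.6g}")

# Find the first n at which n! exceeds 100**n.
n = 1
while math.factorial(n) <= 100 ** n:
    n += 1
print("n! first exceeds 100^n at n =", n)   # 269
```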