Why are constants ignored in asymptotic analysis?
Constant factors are ignored because running time and memory consumption (the two properties most often measured using the O-notation) are much harder to reason about when considering constant factors.
If we define U( f(n) ) to be the set of all functions g for which there exists an N such that for all n > N: g(n) <= f(n) (i.e. the same as O without the constant factor), it is much harder to show that an algorithm's running time is in U( f(n) ) than in O( f(n) ). For one thing, we'd need an exact unit for measuring running time. Using a CPU instruction as the basic unit would work, but that would depend on the exact implementation of the algorithm as well as the processor architecture it runs on.
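In symbols (restating the definition above, alongside the usual definition of O):

```latex
\[
U\big(f(n)\big) = \{\, g \;\mid\; \exists N \;\forall n > N:\; g(n) \le f(n) \,\}
\qquad
O\big(f(n)\big) = \{\, g \;\mid\; \exists c > 0 \;\exists N \;\forall n > N:\; g(n) \le c \cdot f(n) \,\}
\]
```

The only difference is the constant c, and it is exactly that c which absorbs the implementation- and platform-dependent details discussed below.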
It's similar for memory consumption: different implementations of the same algorithm will differ in their memory consumption (by a constant factor). Further, if an implementation uses a lot of pointers, the same implementation will use about twice as much memory on a 64-bit machine as on a 32-bit machine. But saying things like "this algorithm's memory consumption, when implemented using this C code, is in O(23 * n) on a 32-bit Intel PC, and in O(42 * n) on a 64-bit PC" is just not useful. Ignoring constants allows us to reason about the properties of an algorithm in an implementation- and platform-independent manner.
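A minimal sketch of that point (my own illustration, not from the answer): two implementations of the same linear-scan algorithm, instrumented to count abstract "steps". The exact counts differ by a constant factor, but both are in O(n), which is the only implementation-independent statement available.

```python
def max_v1(xs):
    """Straightforward scan: about one abstract step per element."""
    best, steps = xs[0], 0
    for x in xs:
        steps += 1          # count one step per element
        if x > best:
            best = x
    return best, steps

def max_v2(xs):
    """A clumsier implementation of the same algorithm: ~3 steps per element."""
    best, steps = xs[0], 0
    for x in xs:
        steps += 3          # pretend each iteration costs three steps here
        if x >= best and x != best:
            best = x
    return best, steps

for n in (10, 100, 1000):
    xs = list(range(n))
    _, s1 = max_v1(xs)
    _, s2 = max_v2(xs)
    print(n, s1, s2, round(s2 / s1, 2))   # the ratio stays ~3, independent of n
```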
It's because of the linear speedup theorem for Turing machines.
If you show me a Turing machine that solves a problem of size n in f(n) steps, and specify a constant c > 0, I can make a Turing machine that solves the same problem in c f(n) steps (or one step, if c f(n) < 1). For example, by taking c = ½ my machine can solve the same problem in half as many steps. Or, by taking c = 1/1000000, my machine can solve the same problem in only a millionth as many steps!
This result makes constant factors uninteresting (theoretically speaking: obviously in practice they still have some interest).
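For reference, one common textbook statement of the theorem (the exact additive terms vary between presentations; this is a sketch of the standard form, not the answer's own wording):

```latex
\[
L \in \mathrm{TIME}\big(f(n)\big)
\;\Longrightarrow\;
\forall c > 0:\; L \in \mathrm{TIME}\big(c \cdot f(n) + n + 2\big)
\]
```

The construction behind it enlarges the tape alphabet so that each new cell encodes a block of old cells, letting the new machine simulate several original steps in a single step.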
If you are talking about this
http://www.cs.cornell.edu/courses/cs312/2004fa/lectures/lecture16.htm
When you analyze the running time (or some other aspect) of an algorithm and find that it is something like n^2 + k, then when you are figuring out the big-O performance time it makes no sense to look at k, because you want to know the running time as n is getting large. n^2 is so much larger than k that you can safely ignore it.
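To make that concrete, a quick numeric check (my own sketch) of n^2 + k with a deliberately large constant k:

```python
# Even a large constant k becomes negligible next to n**2 as n grows,
# which is why big-O drops it.
k = 1_000_000
for n in (10, 1_000, 100_000):
    total = n**2 + k
    print(n, total, round(n**2 / total, 4))   # the n**2 share approaches 1
```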