Why are constants ignored in asymptotic analysis?

Posted 2024-10-02 07:55:36

Why are constants ignored in asymptotic analysis?

Comments (3)

辞别 2024-10-09 07:55:36

Constant factors are ignored because running time and memory consumption (the two properties most often measured using the O-notation) are much harder to reason about when considering constant factors.

If we define U( f(n) ) to be the set of all functions g for which there exists an N such that for all n > N: g(n) <= f(n) (i.e. the same as O without the constant factor), it is much harder to show that an algorithm's running time is in U( f(n) ) than in O( f(n) ).

For one thing, we'd need an exact unit for measuring running time. Using a CPU instruction as the basic unit would work, but that'd depend on the exact implementation of the algorithm as well as the processor architecture it runs on.

It's similar for memory consumption: different implementations of the same algorithm will differ in their memory consumption (by a constant factor). Further, if an implementation uses a lot of pointers, the same implementation will use about twice as much memory on a 64-bit machine as on a 32-bit machine. But saying things like "this algorithm's memory consumption, when implemented using this C code, is in O(23 * n) on a 32-bit Intel PC, and in O(42 * n) on a 64-bit PC" is just not useful.

Ignoring constants allows us to reason about the properties of an algorithm in an implementation- and platform-independent manner.
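As a rough illustration of this implementation-independence point, here is a minimal Python sketch. The per-element step counts (1 and 3) are hypothetical stand-ins for two implementations of the same O(n) task; the point is only that they differ by a constant factor at every input size.

```python
# Hypothetical step counts for two implementations of the same O(n) task
# (summing n numbers). The exact counts are made up; what matters is that
# their ratio is a constant, independent of n.

def steps_simple(n):
    # assume 1 "step" (one addition) per element
    return n

def steps_checked(n):
    # assume 3 "steps" per element (bounds check, load, add)
    return 3 * n

for n in (10, 1000, 100000):
    ratio = steps_checked(n) / steps_simple(n)
    print(n, ratio)  # the ratio stays at the constant 3 for every n
```

Both counts are in O(n); only the uninteresting constant separates them, which is exactly what the notation is designed to hide.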

莳間冲淡了誓言ζ 2024-10-09 07:55:36

It's because of the linear speedup theorem for Turing machines.

If you show me a Turing machine that solves a problem of size n in f(n) steps, and specify a constant c > 0, I can make a Turing machine that solves the same problem in c f(n) steps (or one step, if c f(n) < 1). For example, by taking c = ½ my machine can solve the same problem in half as many steps. Or, by taking c = 1/1000000, my machine can solve the same problem in only a millionth as many steps!

This result makes constant factors uninteresting (theoretically speaking: obviously in practice they still have some interest).
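The same point can be mirrored in the definition of O itself: for any constant c > 0, c·f(n) is in O(f(n)), with witness constant C = c. A small sketch (the helper `in_big_o` is an illustrative spot-check, not a proof):

```python
# Spot-check the definitional claim that c*f(n) is in O(f(n)):
# c*f(n) <= C*f(n) for all n >= N, with witness C = c and N = 1.

def in_big_o(g, f, C, N, max_n=10**4):
    """Numerically spot-check g(n) <= C*f(n) for N <= n < max_n (not a proof)."""
    return all(g(n) <= C * f(n) for n in range(N, max_n))

f = lambda n: n * n
slow = lambda n: 1_000_000 * f(n)   # a machine a million times slower

print(in_big_o(slow, f, C=1_000_000, N=1))        # still in O(n^2)
print(in_big_o(lambda n: n**3, f, C=1, N=1))      # n^3 is NOT bounded by n^2
```

So a million-fold slowdown (or speedup) never moves an algorithm out of its O-class, which is the definitional counterpart of the speedup theorem's point.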

染火枫林 2024-10-09 07:55:36

If you are talking about this

http://www.cs.cornell.edu/courses/cs312/2004fa/lectures/lecture16.htm

When you analyze the running time (or some other aspect) of an algorithm and find that it is something like

 n ^ 2  + k

Then, when you are figuring out the big-O performance, it makes no sense to look at k, because you want to know the running time as n gets large: n ^ 2 is so much larger than k that you can safely ignore it.
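This can be checked numerically; the value k = 1000 below is an arbitrary stand-in for the constant term.

```python
# For a running time of n^2 + k, the fraction of the total contributed by
# the constant k vanishes as n grows. k = 1000 is an arbitrary example value.

k = 1000
for n in (10, 100, 10_000):
    total = n**2 + k
    print(n, k / total)  # k's share of the total shrinks toward 0
```

By n = 10,000 the constant accounts for well under 0.01% of the total, which is why dropping it loses nothing asymptotically.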
