Why isn't a program measured by its absolute running time?
I have learned that a program is measured by its complexity, by which I mean Big O notation.
Why don't we measure it by its absolute running time?
Thanks :)
Comments (4)
Running time does not measure complexity; it only measures performance, or the time required to perform the task. An MP3 player will run for the length of time required to play the song. The elapsed CPU time may be more useful in this case.
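To make the wall-clock vs. CPU-time distinction concrete, here is a minimal Python sketch (my illustration, not part of the original answer; the sleep stands in for song playback):

```python
import time

wall_start = time.perf_counter()   # wall-clock time
cpu_start = time.process_time()    # CPU time used by this process

time.sleep(2)                      # "playing the song": the process mostly waits
sum(i * i for i in range(10**6))   # a little actual computation

print(f"wall time: {time.perf_counter() - wall_start:.2f} s")  # about 2 s
print(f"CPU time:  {time.process_time() - cpu_start:.2f} s")   # far less
```

The wall clock counts the whole two seconds of "playback", while the CPU-time counter only accumulates while the process is actually computing.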
One measure of complexity is how the program scales to larger inputs. This is useful for planning the required hardware. All other things being equal, something that scales relatively linearly is preferable to something that scales poorly. Things are rarely equal.
Another measure of complexity is how simple the code is. Code complexity is often higher for programs whose performance scales relatively linearly. Complex code can be costly to maintain, and changes are more likely to introduce errors.
All three (or four) measures are useful, but none of them is very useful by itself. Together they can be quite useful.
The question could use a little more context.
When writing a real program, we are likely to measure the program's running time. There are multiple potential issues with this, though:
1. What hardware is the program running on? Comparing two programs running on different hardware really doesn't give a meaningful comparison.
2. What other software is running? If anything else is running, it will steal CPU cycles (or whatever other resource your program depends on).
3. What is the input? As already said, a solution might look very fast on a small set while its scalability goes out the door. Also, some inputs are easier than others. If, as a person, you hand me a dictionary and ask me to sort it, I'll hand it right back and say "done". Hand me a set of 50 cards (much smaller than a dictionary) in random order and it will take me a lot longer (see the sketch after this list).
4. What are the starting conditions? If your program is running for the first time, chances are that reading it off the hard disk will take up the largest chunk of time on a modern system. Comparing two implementations on small inputs will likely have their differences masked by this.
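As a small Python sketch of point 3 (my example, not the answerer's), timing the same sort routine on an already-ordered input versus a shuffled one shows how much the input itself matters:

```python
import random
import time

def time_sort(data):
    start = time.perf_counter()
    sorted(data)
    return time.perf_counter() - start

n = 2_000_000
in_order = list(range(n))    # the "dictionary": already sorted
shuffled = in_order[:]
random.shuffle(shuffled)     # the "deck of cards": random order

# Python's sort (Timsort) detects the existing order and finishes in
# roughly linear time; the shuffled input needs the full O(n log n) work.
print(f"already sorted: {time_sort(in_order):.3f} s")
print(f"shuffled:       {time_sort(shuffled):.3f} s")
```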
Big O notation covers a lot of these issues.
1. Hardware doesn't matter, because everything is normalized to the cost of a single operation, O(1).
2. Big O describes the algorithm on its own, independent of whatever other software is running around it.
3. Big O describes how the input will change the running time, not how long one particular input takes. It tells you how the algorithm performs in the worst case, not how it performs on an average or easy input.
4. Again, Big O deals with algorithms, not with programs running on a physical system.
You use the complexity of an algorithm instead of absolute running times to reason about algorithms, because the absolute running time of a program does not depend only on the algorithm used and the size of the input. It also depends on the machine it's running on, various implementation details, and what other programs are currently using system resources. Even if you run the same application twice with the same input on the same machine, you won't get exactly the same time.
Consequently, when given a program, you can't just make a statement like "this program will take 20*n seconds when run with an input of size n", because the program's running time depends on many more factors than the input size. You can, however, make a statement like "this program's running time is in O(n)", which is a lot more useful.
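To make that concrete, here is a minimal Python sketch (my illustration, not part of the original answer): the absolute times fluctuate from run to run and differ between machines, but the O(n) growth shows up reliably when the input size doubles.

```python
import time

def linear_work(n):
    """A function whose running time is in O(n)."""
    total = 0
    for i in range(n):
        total += i
    return total

def best_time(n, repeats=5):
    """Take the best of several runs to reduce measurement noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        linear_work(n)
        best = min(best, time.perf_counter() - start)
    return best

# The absolute numbers vary between machines and even between runs,
# so "takes 20*n seconds" is not a meaningful statement...
t1 = best_time(1_000_000)
t2 = best_time(2_000_000)
print(f"n = 1M: {t1:.4f} s, n = 2M: {t2:.4f} s")
# ...but the growth is stable: doubling n roughly doubles the time.
print(f"ratio: {t2 / t1:.2f}")
```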
Absolute running time is not an indicator of how an algorithm grows with different input sets. It's possible for an O(n log n) algorithm to be far slower than an O(n^2) algorithm for all practical datasets.
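As a concrete illustration (my own sketch, not part of the original answer): at small input sizes, a textbook O(n^2) insertion sort typically beats a textbook O(n log n) merge sort, because its constant factor is far smaller. This is one reason production sorts such as Timsort fall back to insertion sort for short runs.

```python
import random
import time

def insertion_sort(a):
    """O(n^2) worst case, but with a very small constant factor."""
    a = a[:]
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """O(n log n), but with more per-element overhead."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def best_time(fn, data, repeats=200):
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(data)
        best = min(best, time.perf_counter() - start)
    return best

data = [random.random() for _ in range(30)]   # a small, "practical" input
print(f"insertion sort (O(n^2)):   {best_time(insertion_sort, data):.6f} s")
print(f"merge sort (O(n log n)):   {best_time(merge_sort, data):.6f} s")
```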