Can't understand a statement about compiler optimization
I am interested in optimizations performed at runtime by a VM and at compile time. I had the idea that optimizations are most efficient and easiest at compile time.
However, that assumption seems to be false in certain circumstances. This is evident in Steve Yegge's statement quoted by Daniel:
[O]ptimization is often easier when performed at runtime by a clever virtual machine [...].
Why is optimization easier when performed at runtime by a VM than at compile time?
Comments (7)
Because at runtime you have extra information: how the machine is performing, your process's memory limits and, probably most importantly, what code is being executed and how often.
Those things allow you to make runtime optimizations that you simply can't make at compile time.
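To make the "what code is being executed and how often" point concrete, here is a minimal sketch in Java of the kind of bookkeeping an interpreting VM could do; the class name, the threshold value and the jitCompile() hook are illustrative assumptions, not how any real VM is implemented.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch: count method invocations and hand hot methods to an
    // optimizing compiler once a threshold is crossed. Names and the
    // threshold are illustrative only.
    public final class HotspotProfiler {
        private static final int COMPILE_THRESHOLD = 10_000;  // assumed value
        private final Map<String, Integer> invocationCounts = new ConcurrentHashMap<>();

        // Called by the (hypothetical) interpreter on every method entry.
        void onMethodEntry(String methodName) {
            int count = invocationCounts.merge(methodName, 1, Integer::sum);
            if (count == COMPILE_THRESHOLD) {
                jitCompile(methodName);
            }
        }

        private void jitCompile(String methodName) {
            // Placeholder: a real VM would generate optimized machine code here.
            System.out.println("compiling hot method: " + methodName);
        }

        public static void main(String[] args) {
            HotspotProfiler profiler = new HotspotProfiler();
            for (int i = 0; i < 20_000; i++) {
                profiler.onMethodEntry("Foo.bar()");  // simulated interpreter callbacks
            }
        }
    }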
The VM can gather statistics to drive its optimizations, much like a database does based on your usage patterns.
Continuously collecting statistics and checking invariants also adds overhead to the execution time of the compiled or interpreted fragments. If you can't optimize quickly and well enough, don't bother. I don't think it is easier to get better results at runtime instead of compile time; I think it is even harder, considering the complexity of a good implementation.
Much like the common misconception that a sufficiently good optimizing compiler generates better assembly than a human, a sufficiently clever VM may need to be far too clever to actually execute faster.
Something to recognize is that it isn't the concept of a VM that allows runtime optimizations; it's the fact that many VMs don't throw away the original program metadata and have built-in support for reflection. A more accurate way to put it is that a "runtime library" can do better optimizations than static optimization alone; this applies to non-VM languages like C as well.
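As a small illustration of that metadata point (standard Java reflection, not any particular VM's internals): the runtime keeps class and method descriptions queryable, which is exactly the kind of information an ahead-of-time C toolchain normally discards after compilation.

    import java.lang.reflect.Method;

    // The running program can inspect its own classes: the metadata survives
    // into the runtime and is available to tools and to the VM itself.
    public class MetadataDemo {
        public static void main(String[] args) {
            Class<?> listClass = java.util.ArrayList.class;
            for (Method m : listClass.getDeclaredMethods()) {
                System.out.println(m.getName() + " takes " + m.getParameterCount() + " parameter(s)");
            }
        }
    }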
Short answer: because at runtime it is easier to identify and analyze the hotspots - the parts of your program that consume the most time.
Long answer:
If you start running the code in interpreted mode, a virtual machine can count how often and for how long different parts of the code are used. Those parts can then be optimized more effectively.
Take nested if-then-else clauses. Fewer boolean checks mean less runtime. If you optimize the path for the branch that is executed most often, you get a better overall runtime.
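A small worked example of the branch-ordering idea (the constants and the 90/9/1 frequency split are assumptions made up for illustration): if profiling shows that the first case dominates, placing its test first means most executions pay for a single comparison instead of two.

    // Profile-guided ordering: hottest test first. With the assumed
    // frequencies, the average number of checks per call is about
    // 0.90*1 + 0.09*2 + 0.01*2 = 1.1, instead of roughly 1.9 for the
    // reverse order.
    public class BranchOrderDemo {
        static final int CASE_A = 0, CASE_B = 1;  // illustrative constants

        static String classify(int tag) {
            if (tag == CASE_A) return "common";  // ~90% of calls: 1 check
            if (tag == CASE_B) return "rare";    // ~9% of calls: 2 checks
            return "other";                      // ~1% of calls: 2 checks
        }

        public static void main(String[] args) {
            System.out.println(classify(CASE_A));
        }
    }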
Another point is that at runtime you can make assumptions that are impossible at compile time. The Java VM, for instance, inlines virtual methods in server mode as long as only one loaded class implements the method. That would be unsafe if done at compile time. If another such class is loaded, the JVM deoptimizes the code again, but often this never happens.
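A sketch of the situation described above, with illustrative class names: while Circle is the only loaded implementation of Shape, the JIT can treat the area() call as effectively monomorphic and inline it at the call site; loading another implementation later would force it to deoptimize and fall back to real virtual dispatch.

    // Illustrative types; nothing here forces inlining, it only sets up the
    // single-implementation case described above.
    interface Shape {
        double area();
    }

    final class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    public class InlineDemo {
        // While Circle is the only Shape implementation loaded, the JIT can
        // speculatively inline s.area() here.
        static double total(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) {
                sum += s.area();
            }
            return sum;
        }

        public static void main(String[] args) {
            Shape[] shapes = { new Circle(1), new Circle(2) };
            System.out.println(total(shapes));
            // If a second Shape implementation were loaded and used later,
            // the speculative inlining would be undone (deoptimization).
        }
    }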
Also, more is known at runtime about the machine the program runs on. If your machine has more registers, you can use them. Again, that would not be safe if done at compile time.
One thing to add: optimizing at runtime also has disadvantages. Most importantly, the time spent on optimization is added to the runtime of the program. It is also more complicated, because you have to compile parts of the program and execute them on the fly. Bugs in the virtual machine are critical: think of a compiler that sometimes crashes - you can just compile again and everything is fine. If a VM sometimes crashes, that means your program sometimes crashes. Not good.
As a conclusion: you can do every optimization at runtime that is possible at compile time ... and some more. You have more information about the program, its execution paths and the machine it is running on. But you have to factor in the time needed to run the optimizations, it is more complicated to do at runtime, and faults are more consequential than at compile time.
Because there's more information available at runtime. For instance, the exact CPU, operating system and other environment details are known. This information affects how the optimization should be done.
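For example, a running Java program can simply ask the environment for details that a static compiler would have to guess at build time (standard library calls; the printed values naturally depend on the machine):

    // Query the actual execution environment at runtime.
    public class EnvironmentInfo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("CPU cores: " + rt.availableProcessors());
            System.out.println("Max heap:  " + rt.maxMemory() + " bytes");
            System.out.println("OS:        " + System.getProperty("os.name"));
            System.out.println("Arch:      " + System.getProperty("os.arch"));
        }
    }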
The VM has the full code of the program, whereas the compiler often sees only part of it because of the separate compilation of different translation units. The VM therefore has more data available for analysis.