How can Java inlining cross virtual function boundaries?
I'm reading up on some material about whether Java can be faster than C++, and came across the following quote:
Java can be faster than C++ because JITs can inline over virtual function boundaries.
Why Java Will Always Be Slower than C++ (wayback link)
What does this mean? Does it mean that the JIT can inline virtual function calls (because presumably it has access to run time information) whereas C++ must call the function through its vtable?
Answers (3)
The answer to your question is Yes: that is what the quoted text means.
The JIT will analyse all of the loaded classes. If it can determine that there is only one possible method that can be called at any given point, it can avoid the dispatching and (if appropriate) inline the method body.
By contrast, a C++ compiler does not know all of the possible subtypes, and therefore cannot determine if this optimization can be done for a (virtual) method. (And by the time the linker runs, it is too late ...)
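To make the Java side of this concrete, here is a minimal sketch (the Shape/Circle names are invented for illustration, not taken from the question). Assuming a JIT such as HotSpot and only one loaded implementation of the interface, the call site in the hot loop has a single possible target, so the compiler can skip the virtual dispatch and inline the method body:

    // Illustrative sketch -- Shape/Circle are invented names, not from the question.
    interface Shape {
        double area();
    }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        @Override public double area() { return Math.PI * r * r; }
    }

    public class DevirtualizeDemo {
        public static void main(String[] args) {
            Shape s = new Circle(2.0);
            double total = 0;
            // Hot loop: once this method is JIT-compiled, class hierarchy analysis
            // can show that Circle is the only Shape loaded, so the virtual call
            // s.area() can be devirtualized and inlined into the loop body.
            for (int i = 0; i < 1_000_000; i++) {
                total += s.area();
            }
            System.out.println(total);
        }
    }

Note that nothing requires the programmer to mark anything final or non-virtual here; the point is that the JIT can prove the property at run time from the set of loaded classes.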
Other answers have said that you can do this optimization by hand in C++ ... but that assumes that you (the programmer) can do the analysis yourself, and change methods from virtual to non-virtual. But if you get it wrong, you've got a bug to track down.
By the way, we can assume that this optimization is worthwhile for the average Java application. If it were not, the JIT compiler guys would not have implemented it. After all, a worthless optimization would only make Java applications start more slowly.
Since compilation of Java bytecode into machine code is deferred until runtime, it is possible for JVMs to perform profile-guided optimization and other optimizations that require information not available until code is running. This may even include "deoptimization", where a previously made optimization is revoked so that other optimizations can occur.
More information about this can be found under adaptive optimization on Wikipedia, which includes optimizations related to inlining.
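As a hedged illustration of that deoptimization point (class names invented; the exact behaviour depends on the JVM and its inlining heuristics), the sketch below shows the typical scenario: a call site is speculatively inlined while only one subtype has been observed, and the speculation is discarded once a second subtype shows up:

    // Illustrative sketch -- class names invented; actual behaviour depends on
    // the JVM's heuristics for speculative inlining and deoptimization.
    abstract class Animal {
        abstract String speak();
    }

    class Dog extends Animal {
        @Override String speak() { return "woof"; }
    }

    class Cat extends Animal {
        @Override String speak() { return "meow"; }
    }

    public class DeoptDemo {
        static String call(Animal a) {
            return a.speak();   // the call site that may be speculatively inlined
        }

        public static void main(String[] args) {
            Animal dog = new Dog();
            // Warm-up: only Dog is ever seen here, so the JIT may speculate that
            // the receiver is always a Dog and inline Dog.speak() directly.
            for (int i = 0; i < 1_000_000; i++) {
                call(dog);
            }
            // A second subtype appears: the earlier speculation no longer holds,
            // so the JVM deoptimizes the compiled code and recompiles the call
            // site with a real virtual dispatch (or a type-check guard).
            System.out.println(call(new Cat()));
        }
    }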
For what it's worth, Java, C++, and assembly will provide roughly the same performance.
Yes, better performance can be achieved with hand-optimized C++, C, or assembly ... however, for the majority of applications out there (try everything outside of serious graphics apps), that is not the bottleneck, and the lower cost of implementation makes up for any perceived lower performance.