Java compiler optimization

Recently, I was reading this article.

According to that article, the Java compiler (i.e. javac) does not perform any optimization while generating bytecode. Is that really true? If so, could it be implemented as an intermediate code generator that removes redundancy and generates optimal code?
Comments (6)
javac will only do very little optimization, if any. The point is that the JIT compiler does most of the optimization, and it works best when it has a lot of information, some of which could be lost if javac performed optimizations too. If javac performed some sort of loop unrolling, it would be harder for the JIT to do that itself in a general way, and the JIT has more information about which optimizations will actually work, because it knows the target platform.
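To see this for yourself, here is a minimal sketch (my own example, not from the answer): compile a trivial loop and dump the bytecode with javap -c. javac emits the loop essentially as written, and any unrolling is left to the JIT at run time.

    // LoopDemo.java - hypothetical example; compile with "javac LoopDemo.java",
    // then inspect the bytecode with "javap -c LoopDemo".
    public class LoopDemo {
        static int sum(int[] values) {
            int total = 0;
            // javac compiles this loop as a plain compare-and-branch sequence;
            // unrolling and similar transformations are left to the JIT.
            for (int i = 0; i < values.length; i++) {
                total += values[i];
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(sum(new int[] {1, 2, 3, 4, 5}));
        }
    }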
I stopped reading when I got to this section:

Firstly, doing loop unrolling on Java source code is hardly ever a good idea. The reason javac doesn't do much in the way of optimization is that optimization is done by the JIT compiler in the JVM, which can make much better decisions than the compiler could, because it can see exactly which code is getting run the most.
The javac compiler once supported an option to generate optimized bytecode by passing -o on the command line.

However, starting with J2SE 1.3, the HotSpot JVM shipped with the platform, introducing dynamic techniques such as just-in-time compilation and adaptive optimization of common execution paths. Hence -o has been ignored by the Java compiler since that version.

I came across this flag when reading about the Ant javac task and its optimize attribute. The advantages of the HotSpot JVM's dynamic optimizations over compile-time optimization are mentioned on that page.
I have studied the Java bytecode that javac outputs in the past (using an app called FrontEnd). It basically doesn't do any optimization, except for inlining constants (static finals) and precalculating constant expressions (like 2*5 and "ab"+"cd"). This is part of why it is so easy to disassemble (using an app called JAD).
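As a minimal sketch of that point (my own example, not from the answer): constant expressions like these are folded by javac, so the bytecode contains the finished values rather than the arithmetic or the concatenation.

    // ConstantFolding.java - hypothetical example; "javap -c ConstantFolding"
    // shows the folded values being pushed directly.
    public class ConstantFolding {
        static final int FACTOR = 5;  // a static final, inlined at compile time

        public static void main(String[] args) {
            int product = 2 * FACTOR;      // compiled as the constant 10
            String label = "ab" + "cd";    // compiled as the single literal "abcd"
            System.out.println(product + " " + label);
        }
    }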
I also discovered some interesting ways to optimize your Java code; they helped me improve the speed of inner loops by about 2.5 times.

A method has 5 quick-access variables. When these variables are accessed, they're faster than all other variables (probably because of stack maintenance). The parameters of a method also count towards these 5. So if you have code inside a for loop that is executed a million times, allocate those variables at the start of the method, and take no parameters.

Local variables are also faster than fields, so if you use fields inside inner loops, cache them by assigning them to local variables at the start of the method. Cache the reference, not the contents (like: int[] px = this.pixels;).
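A minimal sketch of that caching pattern, assuming a made-up class with a pixels field (the names are illustrative, not from the original answer):

    // Hypothetical example: cache a field in a local variable before an inner loop.
    public class PixelFilter {
        private final int[] pixels = new int[1_000_000];

        void brighten(int amount) {
            // Read the field once and keep the reference in a local variable,
            // instead of going through this.pixels on every iteration.
            int[] px = this.pixels;
            for (int i = 0; i < px.length; i++) {
                px[i] += amount;
            }
        }
    }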
To optimize your bytecode, you can use ProGuard.

As others have noted, the JIT in a mainstream JVM will optimize the code while compiling it, and it will probably outperform ProGuard because it has access to more context. But this may not be the case in simpler VMs. In the Android world, it is common practice to use ProGuard optimizations when targeting Dalvik (the VM that came with Android before Lollipop).

ProGuard also shrinks and obfuscates the bytecode, which is a must when shipping client-side applications (even if you don't use the optimizations).
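As a rough sketch of what a ProGuard run could look like (the jar names, the entry-point class, and the number of passes are placeholders; the exact configuration depends on your project and Java version):

    # proguard.cfg - illustrative only
    -injars       app.jar
    -outjars      app-optimized.jar
    -libraryjars  <java.home>/lib/rt.jar
    -optimizationpasses 3
    # Keep the entry point so shrinking and optimization cannot remove it.
    -keep public class com.example.Main {
        public static void main(java.lang.String[]);
    }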
The compiler doesn't optimize the bytecode because it is optimized at run time by the JIT optimizer.

If the runtime you are targeting doesn't have a JIT optimizer (even if it has a JIT compiler), or if you are AOT compiling, I recommend using an optimizing obfuscator like ProGuard or Allatori.