Why is Java bytecode interpreted?
As far as I understand, Java compiles to Java bytecode, which can then be interpreted by any machine running Java for its specific CPU. Java uses JIT to interpret the bytecode, and I know it's gotten really fast at doing so, but why don't/didn't the language designers just statically compile down to machine instructions once the particular machine it's running on is detected? Is the bytecode interpreted every single pass through the code?
Answers (6)
The original design was premised on "compile once, run anywhere": every implementer of the virtual machine can run the bytecode generated by a compiler.

In the book Masterminds of Programming, James Gosling explains the reasoning behind this design.
Java is commonly compiled to machine instructions; that's what just-in-time (JIT) compilation is. But Sun's Java implementation by default only does that for code that is run often enough (so startup and shutdown bytecode, that is executed only once, is still interpreted to prevent JIT overhead).
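You can watch this "compile only the hot code" behaviour yourself. Below is a minimal sketch (the class and method names are just illustrative): a small method called many times becomes "hot", and running the program with the real HotSpot flag `java -XX:+PrintCompilation HotLoop` logs a line when the JIT compiles it, while code executed only once stays interpreted.

```java
// A minimal sketch: sum() is called often enough to cross the JIT's
// invocation threshold, so HotSpot compiles it to machine code, while
// main() itself (run once) is simply interpreted.
public class HotLoop {
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Call sum() many times so the JIT considers it "hot".
        for (int i = 0; i < 20_000; i++) {
            result = sum(1_000);
        }
        System.out.println(result);
    }
}
```

Run with `-XX:+PrintCompilation` to see compilation events as they happen; without the flag the program behaves identically, just without the log.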
Bytecode interpretation is usually "fast enough" in a lot of cases. Compiling, on the other hand, is rather expensive. If 90% of the runtime is spent in 1% of the code, it's far better to just compile that 1% and leave the other 99% alone.
Static compiling can blow up on you because all the other libraries you use also need to be write-once-run-anywhere (i.e. bytecode), including all of their dependencies. Following those dependencies can lead to a chain of compilations that blows up on you. The general idea, I think, is to compile only the code the runtime discovers it actually needs while running. There may be many code paths you never actually follow, especially once libraries come into question.
Java bytecode is interpreted because bytecode is portable across various platforms. The JVM, which is platform dependent, converts bytecode into and executes the specific instruction set of that machine, whether it is running Windows, Linux, macOS, etc.
One important difference of dynamic compiling is that it optimises the code based on how it is run. There is an option, -XX:CompileThreshold=, which is 10000 by default. You can decrease this so the code is optimised sooner, but if you run a complex application or benchmark, you can find that reducing this number can result in slower code. If you run a simple benchmark, you may not find it makes any difference.

One example where dynamic compiling has an advantage over static compiling is inlining "virtual" methods, especially those which can be replaced. For example, the JVM can inline up to two heavily used "virtual" methods, which may be in a separate jar compiled after the caller was compiled. The called jar(s) can even be removed from the running system (e.g. in OSGi) and another jar added or swapped in as a replacement. The replacement JAR's methods can then be inlined. This can only be achieved with dynamic compiling.
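The kind of call site described above can be sketched as follows (the type and method names here are invented for illustration). The call s.area() is a virtual dispatch; if profiling only ever observes two receiver types at that site, HotSpot can inline both implementations behind a cheap type check, something a static compiler cannot do when the implementations live in a jar that may be replaced at runtime.

```java
// Sketch of a bimorphic call site: at runtime the JIT profiles which
// implementations of Shape.area() actually show up at the call in
// total(), and with at most two observed receivers it can inline both.
interface Shape {
    double area();
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class Inlining {
    // s.area() is a virtual call; only Square and Circle are ever seen
    // here, so the JIT can replace the dispatch with inlined bodies
    // guarded by a type check. If a new Shape implementation appears
    // later, the JIT deoptimises and recompiles.
    static double total(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) {
            sum += s.area();
        }
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Square(2.0), new Circle(1.0) };
        System.out.println(total(shapes));
    }
}
```

Note that this inlining is speculative: if a third implementation is loaded and reaches the call site, the JVM deoptimises back to the interpreter and recompiles, which is exactly why it only works under dynamic compilation.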