What are the differences between the Java and .NET JITs?

Posted 2024-08-26 04:52:01

I know Microsoft .NET uses the CLR as a JIT compiler while Java has HotSpot. What are the differences between them?

Comments (1)

那支青花 2024-09-02 04:52:01


They are very different beasts. As people pointed out, the CLR compiles MSIL to machine code before it executes it. Besides the typical optimizations such as dead-code elimination and inlining of private methods, this allows it to take advantage of the particular CPU architecture of the target machine (though I'm not sure whether it does). It also incurs a compilation hit for each class (though the compiler is fairly fast, and many platform libraries are just a thin layer over the Win32 API).

The HotSpot VM takes a different approach. It assumes that most code is executed rarely, and hence isn't worth the time to compile. All bytecode starts out in interpreted mode. The VM keeps statistics at call sites and tries to identify methods that are called more than a predefined number of times. It then compiles only those methods with a fast JIT compiler (C1) and swaps in the compiled method while it is running (that's the special sauce of HotSpot). After the C1-compiled method has been invoked some more times, the same method is compiled with a slower but more sophisticated compiler (C2), and the code is swapped again on the fly.
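You can watch this tiered promotion happen. A minimal sketch (the class name and loop counts here are illustrative; `-XX:+PrintCompilation` is a standard HotSpot option that logs each method as it gets compiled):

```java
// Sketch: a method hot enough for HotSpot to promote from the
// interpreter to C1 and then C2.
// Run with: java -XX:+PrintCompilation HotLoop
// and look for lines mentioning HotLoop::sumOfSquares at rising tiers.
public class HotLoop {
    // Called often enough to cross the JIT invocation thresholds.
    static long sumOfSquares(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += (long) i * i;
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int iter = 0; iter < 20_000; iter++) {
            total += sumOfSquares(1_000);
        }
        System.out.println(total); // keep the result live so the loop isn't eliminated
    }
}
```

The printout shows the same method appearing more than once as it moves up the compilation tiers, which is exactly the C1-then-C2 hand-off described above.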

Since HotSpot can swap methods while they are running, the VM's compilers can perform speculative optimizations that would be unsafe in statically compiled code. A canonical example is static dispatch / inlining of monomorphic calls (a polymorphic method with only one implementation). This is done if the VM sees that the method always resolves to the same target. What used to be a complex invocation is reduced to a guard of a few CPU instructions, which modern CPUs predict and pipeline. When the guard condition stops being true, the VM can take a different code path or even drop back to interpreted mode. Based on statistics and program workload, the generated machine code can differ at different times. Many of these optimizations rely on information gathered during program execution and would be impossible if you compiled once when the class was loaded.
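A minimal sketch of such a monomorphic call site, using a hypothetical `Shape` interface with a single loaded implementation. While `Square` is the only implementation the VM has seen, HotSpot can devirtualize and inline the `area()` call behind a cheap class-check guard; loading and using a second implementation later would invalidate that assumption and trigger deoptimization:

```java
// Hypothetical example: the virtual call below is monomorphic in practice.
interface Shape {
    double area();
}

final class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class Monomorphic {
    // s.area() always resolves to Square.area here, so HotSpot can
    // inline it and guard with a single type check.
    static double totalArea(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area(); // monomorphic call site
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = new Shape[1000];
        for (int i = 0; i < shapes.length; i++) shapes[i] = new Square(2.0);
        System.out.println(totalArea(shapes)); // 1000 squares of area 4.0
    }
}
```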

This is why you need to warm up the JVM and emulate a realistic workload when you benchmark algorithms (skewed data can lead to an unrealistic assessment of the optimizations). Other optimizations include lock elision, adaptive spin-locking, escape analysis and stack allocation, etc.
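A rough warm-up sketch (names and iteration counts are illustrative; for serious measurements use a proper harness such as JMH, which handles warm-up iterations for you):

```java
// Sketch: timing the same method cold vs. after a warm-up phase.
// The post-warm-up timing reflects JIT-compiled code rather than
// the interpreter, which is why naive one-shot timings mislead.
public class WarmupBench {
    static long work(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i ^ (i << 1);
        return s;
    }

    static long timeOnce(int n) {
        long t0 = System.nanoTime();
        work(n);
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        long cold = timeOnce(1_000_000);
        // Warm-up: run enough iterations for HotSpot to profile
        // and compile work() before the "real" measurement.
        for (int i = 0; i < 10_000; i++) work(1_000);
        long warm = timeOnce(1_000_000);
        System.out.println("cold=" + cold + "ns warm=" + warm + "ns");
    }
}
```

The absolute numbers vary by machine, but the warm timing is typically far lower than the cold one, for the reasons described above.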

That said, HotSpot is only one of the VMs. JRockit, Azul, IBM's J9 and the Resettable RVM all have different performance profiles.
