What are the advantages of just-in-time compilation over ahead-of-time compilation?

Published 2024-08-18 18:46:20

I've been thinking about it lately, and it seems to me that most advantages given to JIT compilation should more or less be attributed to the intermediate format instead, and that jitting in itself is not much of a good way to generate code.

So these are the main pro-JIT compilation arguments I usually hear:

  1. Just-in-time compilation allows for greater portability. Isn't that attributable to the intermediate format? I mean, nothing keeps you from compiling your virtual bytecode into native bytecode once you've got it on your machine. Portability is an issue in the 'distribution' phase, not during the 'running' phase.
  2. Okay, then what about generating code at runtime? Well, the same applies. Nothing keeps you from integrating a just-in-time compiler for a real just-in-time need into your native program.
  3. But the runtime compiles it to native code just once anyway, and stores the resulting executable in some sort of cache somewhere on your hard drive. Yeah, sure. But it has optimized your program under time constraints, and the program doesn't get any better from there on. See the next paragraph.

It's not like ahead-of-time compilation has no advantages either. Just-in-time compilation has time constraints: you can't keep the end user waiting forever while your program launches, so it has to make a tradeoff somewhere. Most of the time, JITs simply optimize less. A friend of mine had profiling evidence that inlining functions and unrolling loops "manually" (obfuscating the source code in the process) had a positive impact on the performance of his C# number-crunching program; doing the same on my side, with my C program performing the same task, yielded no positive results, and I believe this is because my compiler was already allowed to make those extensive transformations.
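The kind of manual unrolling mentioned above can be sketched as follows (a hypothetical Java example, not the actual C# or C code from the anecdote). Under a good AOT compiler the two versions should perform about the same, because the compiler unrolls the plain loop itself; under a time-constrained JIT they may differ:

```java
public class UnrollDemo {
    // Straightforward loop: an AOT compiler with time to spare will
    // typically unroll (and possibly vectorize) this on its own.
    static long sumPlain(int[] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++) s += a[i];
        return s;
    }

    // "Manual" 4x unrolling, as described in the question. It obfuscates
    // the source; whether it helps depends on how much optimization the
    // compiler was allowed to perform on the plain version.
    static long sumUnrolled(int[] a) {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        int i = 0;
        for (; i + 3 < a.length; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        long s = s0 + s1 + s2 + s3;
        for (; i < a.length; i++) s += a[i]; // leftover tail elements
        return s;
    }

    public static void main(String[] args) {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(sumPlain(data));    // 499500
        System.out.println(sumUnrolled(data)); // 499500
    }
}
```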

And yet we're surrounded by jitted programs. C# and Java are everywhere, Python scripts can compile to some sort of bytecode, and I'm sure a whole bunch of other programming languages do the same. There must be a good reason that I'm missing. So what makes just-in-time compilation so superior to ahead-of-time compilation?


EDIT To clear some confusion, maybe it would be important to state that I'm all for an intermediate representation of executables. This has a lot of advantages (and really, most arguments for just-in-time compilation are actually arguments for an intermediate representation). My question is about how they should be compiled to native code.

Most runtimes (or compilers for that matter) will prefer to either compile them just-in-time or ahead-of-time. As ahead-of-time compilation looks like a better alternative to me because the compiler has more time to perform optimizations, I'm wondering why Microsoft, Sun and all the others are going the other way around. I'm kind of dubious about profiling-related optimizations, as my experience with just-in-time compiled programs displayed poor basic optimizations.

I used an example with C code only because I needed an example of ahead-of-time compilation versus just-in-time compilation. The fact that C code wasn't emitted to an intermediate representation is irrelevant to the situation, as I just needed to show that ahead-of-time compilation can yield better immediate results.


≈。彩虹 2024-08-25 18:46:20

  1. Greater portability: the deliverable (bytecode) stays portable.

  2. At the same time, more platform-specific: because the JIT compilation takes place on the same system that the code runs on, it can be very, very finely tuned for that particular system. If you do ahead-of-time compilation (and still want to ship the same package to everyone), you have to compromise.

  3. Improvements in compiler technology can have an impact on existing programs. A better C compiler does not help you at all with programs already deployed. A better JIT compiler will improve the performance of existing programs. The Java code you wrote ten years ago will run faster today.

  4. Adapting to run-time metrics. A JIT compiler can look not only at the code and the target system, but also at how the code is used. It can instrument the running code, and make decisions about how to optimize according to, for example, what values the method parameters usually happen to have.
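Point 4 can be made concrete with a sketch (hypothetical types, not from the original answer). The call site in `totalArea` below is virtual; a JIT that observes only `Circle` instances flowing through it can speculatively inline `Circle.area()` behind a cheap type guard, and deoptimize if that assumption is ever violated, while an AOT compiler without that profile must emit a plain indirect call:

```java
interface Shape { double area(); }

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

final class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class DevirtDemo {
    // A virtual call site: s.area() could dispatch to any Shape.
    // If run-time profiling shows that only Circle ever reaches it,
    // a JIT can speculatively inline Circle.area() here, guarded by
    // a type check, and fall back (deoptimize) if a Square shows up.
    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) total += s.area();
        return total;
    }

    public static void main(String[] args) {
        Shape[] circles = { new Circle(1), new Circle(2) };
        System.out.println(totalArea(circles)); // 5 * pi, about 15.708
    }
}
```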

You are right that JIT adds to start-up cost, and so there is a time-constraint for it,
whereas ahead-of-time compilation can take all the time that it wants. This makes it
more appropriate for server-type applications, where start-up time is not so important
and a "warm-up phase" before the code gets really fast is acceptable.

I suppose it would be possible to store the result of a JIT compilation somewhere, so that it could be re-used the next time. That would give you "ahead-of-time" compilation for the second program run. Maybe the clever folks at Sun and Microsoft are of the opinion that a fresh JIT is already good enough and the extra complexity is not worth the trouble.

紧拥背影 2024-08-25 18:46:20

The ngen tool page spilled the beans (or at least provided a good comparison of native images versus JIT-compiled images). Executables that are compiled ahead-of-time typically have the following benefits:

  1. Native images load faster because they don't have as much startup activity, and they require less memory (the memory otherwise needed by the JIT compiler);
  2. Native images can share library code, while JIT-compiled images cannot.

Just-in-time compiled executables typically have the upper hand in these cases:

  1. Native images are larger than their bytecode counterpart;
  2. Native images must be regenerated whenever the original assembly or one of its dependencies is modified.

The need to regenerate an image that is ahead-of-time compiled every time one of its components changes is a huge disadvantage for native images. On the other hand, the fact that JIT-compiled images can't share library code can cause a serious memory hit. The operating system can load any native library at one physical location and share the immutable parts of it with every process that wants to use it, leading to significant memory savings, especially with system frameworks that virtually every program uses. (I imagine that this is somewhat offset by the fact that JIT-compiled programs only compile what they actually use.)

The general consideration of Microsoft on the matter is that large applications typically benefit from being compiled ahead-of-time, while small ones generally don't.

愚人国度 2024-08-25 18:46:20

Simple logic tells us that compiling a huge, MS-Office-sized program, even from bytecode, would simply take too much time. You'd end up with a huge startup time, and that would scare anyone off your product. Sure, you can precompile during installation, but that has consequences too.

Another reason is that not all parts of an application are used. A JIT will compile only the parts the user actually exercises, leaving potentially 80% of the code untouched, saving time and memory.

And finally, JIT compilation can apply optimizations that conventional compilers can't, like inlining virtual methods, or compiling only the hot paths through methods using trace trees, which can in theory make the code faster.
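The "compile only the hot parts" idea can be sketched as a toy invocation-counter policy (entirely hypothetical; real VMs such as HotSpot use far more elaborate tiering heuristics, but the shape is the same):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of counter-based tiered execution: every method starts
// out "interpreted", and once its invocation count crosses a
// threshold it is promoted to "compiled". Cold code never pays the
// cost of native code generation at all.
public class TieringSketch {
    static final int COMPILE_THRESHOLD = 10_000;
    static final Map<String, Integer> invocationCounts = new HashMap<>();
    static final Map<String, Boolean> compiled = new HashMap<>();

    static void recordInvocation(String method) {
        int n = invocationCounts.merge(method, 1, Integer::sum);
        if (n == COMPILE_THRESHOLD) {
            // In a real JIT, this is where native code generation
            // for the hot method would be kicked off.
            compiled.put(method, true);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 20_000; i++) recordInvocation("hotLoop");
        recordInvocation("coldInit"); // called once: never compiled
        System.out.println(compiled.getOrDefault("hotLoop", false));  // true
        System.out.println(compiled.getOrDefault("coldInit", false)); // false
    }
}
```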

错々过的事 2024-08-25 18:46:20
  1. Better reflection support. This could be done in principle in an ahead-of-time compiled program, but it almost never seems to happen in practice.

  2. Optimizations that can often only be figured out by observing the program dynamically. For example, inlining virtual functions, escape analysis to turn heap allocations into stack allocations, and lock coarsening.
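The escape-analysis point can be illustrated with a sketch (a hypothetical example; whether a given JIT actually scalar-replaces this depends on inlining decisions and on the VM):

```java
public class EscapeDemo {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    // The Point allocated here never escapes distSq. After inlining,
    // a JIT's escape analysis can prove this and replace the heap
    // allocation with plain registers or stack slots ("scalar
    // replacement"), eliminating both the allocation and the GC
    // pressure it would otherwise cause.
    static double distSq(double x, double y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        double total = 0;
        // In a hot loop like this, HotSpot (escape analysis is on by
        // default) can end up allocating no Point objects at all.
        for (int i = 0; i < 1_000_000; i++) total += distSq(i, i);
        System.out.println(distSq(3, 4)); // 25.0
    }
}
```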

鸵鸟症 2024-08-25 18:46:20

Maybe it has to do with the modern approach to programming. You know, many years ago you would write your program on a sheet of paper, some other people would transform it into a stack of punched cards and feed into THE computer, and tomorrow morning you would get a crash dump on a roll of paper weighing half a pound. All that forced you to think a lot before writing the first line of code.

Those days are long gone. When using a scripting language such as PHP or JavaScript, you can test any change immediately. That's not the case with Java, though appservers give you hot deployment. So it is just very handy that Java programs can be compiled fast, as bytecode compilers are pretty straightforward.

But there is no such thing as a JIT-only language. Ahead-of-time compilers have been available for Java for quite some time, and more recently Mono brought AOT compilation to the CLR. In fact, MonoTouch is possible at all because of AOT compilation, as Apple's App Store prohibits apps that generate native code at runtime.

杀手六號 2024-08-25 18:46:20

I have been trying to understand this as well, because I saw that Google is moving towards replacing their Dalvik virtual machine (essentially another Java virtual machine, like HotSpot) with the Android Runtime (ART), which is an AOT compiler, while Java usually uses HotSpot, a JIT compiler. Apparently ART is roughly 2x faster than Dalvik... so I thought to myself, "why doesn't Java use AOT as well?".
Anyway, from what I can gather, the main difference is that a JIT uses adaptive optimization at run time, which (for example) allows ONLY the parts of the bytecode that are executed frequently to be compiled into native code, whereas AOT compiles the entire program to native code, and a smaller amount of compiled code runs faster than a larger amount.
I have to imagine that most Android apps are composed of a small amount of code, so on average it makes more sense to compile the entire program to native code ahead of time and avoid the overhead associated with interpretation and optimization.

坏尐絯 2024-08-25 18:46:20

It seems that this idea has been implemented in Dart language:

https://hackernoon.com/why-flutter-uses-dart-dd635a054ebf

JIT compilation is used during development, using a compiler that is especially fast. Then, when an app is ready for release, it is compiled AOT. Consequently, with the help of advanced tooling and compilers, Dart can deliver the best of both worlds: extremely fast development cycles, and fast execution and startup times.

后eg是否自 2024-08-25 18:46:20

One advantage of JIT which I don't see listed here is the ability to inline/optimize across separate assemblies/dlls/jars (for simplicity I'm just going to use "assemblies" from here on out).

If your application references assemblies which might change after install (e.g. pre-installed libraries, framework libraries, plugins), then a "compile-on-install" model must refrain from inlining methods across assembly boundaries. Otherwise, when the referenced assembly is updated, we would have to find all such inlined bits of code in referencing assemblies on the system and replace them with the updated code.

In a JIT model, we can freely inline across assemblies because we only care about generating valid machine code for a single run during which the underlying code isn't changing.
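A sketch of the hazard (hypothetical class and method names; imagine `Lib` and `App` shipping in separate jars):

```java
// Pretend Lib ships in lib.jar and App in app.jar ("assemblies").

class Lib {
    // lib.jar version 1
    static double taxRate() { return 0.20; }
}

public class App {
    static double withTax(double price) {
        // A compile-on-install AOT model that inlined taxRate() here
        // would bake 0.20 into App's native image. If lib.jar is later
        // updated to return 0.21, the stale inlined constant keeps
        // being used until App's image is regenerated. A JIT re-inlines
        // on every run, so it always sees the library code actually
        // loaded in this process.
        return price * (1 + Lib.taxRate());
    }

    public static void main(String[] args) {
        System.out.println(withTax(100)); // about 120.0 with lib v1
    }
}
```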

傲娇萝莉攻 2024-08-25 18:46:20

The difference between platform-browser-dynamic and platform-browser is the way your Angular app is compiled.
Using the dynamic platform makes Angular send the just-in-time compiler to the front end along with your application, which means your application is compiled on the client side.
Using platform-browser, on the other hand, sends an ahead-of-time pre-compiled version of your application to the browser, which usually means a significantly smaller package.
The Angular 2 documentation on bootstrapping at https://angular.io/docs/ts/latest/guide/ngmodule.html#!#bootstrap explains it in more detail.
