Does Google Closure Compiler reduce performance?

Posted 2024-12-14 14:50:05

I'm writing a Google Chrome extension. As the JavaScript files are loaded from disk, their size barely matters.

I've been using Google Closure Compiler anyway, because apparently it can make performance optimizations as well as reduce code size.

But I noticed this at the top of my output from Closure Compiler:

var i = true, m = null, r = false;

The point of this is obviously to reduce the filesize (all subsequent uses of true/null/false throughout the script can be replaced by single characters).
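
For illustration, a use of those literals before and after that aliasing might look roughly like this (a simplified sketch, not my actual extension code):

// Before compilation:
var settings = { enabled: true };
function callback(err) { console.log(err); }
if (settings.enabled === true) {
    callback(null);
}

// After compilation, using the aliases declared at the top of the file:
var i = true, m = null, r = false;
if (settings.enabled === i) {
    callback(m);
}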

But surely there's a slight performance hit with that? It must be quicker to just read a literal true keyword than look up a variable by name and find its value is true...?

Is this performance hit worth worrying about? And is there anything else Google Closure Compiler does that might actually slow down execution?

Comments (3)

甜警司 2024-12-21 14:50:05

The answer is maybe.

Let's look at what the Closure team says about it.

From the FAQ:

Does the compiler make any trade-off between my application's execution speed and download code size?

Yes. Any optimizing compiler makes trade-offs. Some size optimizations do introduce small speed overheads. However, the Closure Compiler's developers have been careful not to introduce significant additional runtime. Some of the compiler's optimizations even decrease runtime (see next question).

Does the compiler optimize for speed?

In most cases smaller code is faster code, since download time is usually the most important speed factor in web applications. Optimizations that reduce redundancies speed up the run time of code as well.

I flatly challenge the first assumption they've made here. The length of the variable names used does not directly impact how the various JavaScript engines treat the code -- in fact, JS engines don't care whether you call your variable supercalifragilisticexpialidocious or x (though I, as a programmer, certainly do). Download time is the most important factor if you're worried about delivery -- a slow-running script can be caused by a million things that I suspect the tool simply cannot account for.

To truly understand why the answer to your question is maybe, the first thing you need to ask is "What makes JavaScript fast or slow?"

Then of course we run into the question, "What JavaScript engine are we talking about?"

We have:

  • Carakan (Opera)
  • Chakra (IE9+)
  • SpiderMonkey (Mozilla/Firefox)
  • SquirrelFish (Apple's WebKit)
  • V8 (Chrome)
  • Futhark (Opera)
  • JScript (All versions of IE before 9)
  • JavaScriptCore (Konqueror, Safari)
  • I've skipped out on a few.

Does anyone here really think they all work the same? Especially JScript and V8? Heck no!

So again, when Google Closure compiles code, which engine is it building stuff for? Are you feeling lucky?

Okay, since we'll never cover all these bases, let's try to look more generally here at "old" vs. "new" code.

Here's a quick summary for this specific part from one of the best presentations on JS Engines I've ever seen.

Older JS engines

  • Code is interpreted and compiled directly to byte code
  • No optimization: you get what you get
  • Code is hard to run fast because of the loosely typed language

New JS Engines

  • Introduce Just-In-Time (JIT) compilers for fast execution
  • Introduce type-optimizing JIT compilers for really fast code (think near C code speeds)

Key difference here being that new engines introduce JIT compilers.

In essence, the JIT will optimize your code execution so that it runs faster, but if something happens that it doesn't like, it turns around and makes the code slow again.

You can do such things by having two functions like this:

var FunctionForIntegersOnly = function(int1, int2){
    return int1 + int2;
}

var FunctionForStringsOnly = function(str1, str2){
    return str1 + str2;
}

alert(FunctionForIntegersOnly(1, 2) + FunctionForStringsOnly("a", "b"));

Running that through google closure actually simplifies the whole code down to:

alert("3ab");

And by every metric in the book that's way faster. What really happened here is that it simplified my really simple example, because it does a bit of partial execution. This is where you need to be careful, however.

Let's say we have a Y combinator in our code; the compiler turns it into something like this:

(function(a) {
  return function(b) {
    return a(a)(b)
  }
})(function(a) {
  return function(b) {
    if(b > 0) {
      return console.log(b), a(a)(b - 1)
    }
  }
})(5);

Not really faster, just minified the code.

A JIT would normally see that, in practice, your code only ever passes two string inputs to that function and gets a string back (or an integer for the first function), and this puts it onto the type-specific JIT path, which makes it really quick. Now, if Google Closure does something strange, like transforming those two functions with nearly identical signatures into one function (for code that is non-trivial), you may lose that JIT speed because the compiler has done something the JIT doesn't like.
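
A rough sketch of the kind of situation that can defeat a type-specializing JIT (the merged function below is a hypothetical illustration, not actual Closure Compiler output):

// Two monomorphic functions: each is only ever called with one type,
// so a type-specializing JIT can compile fast, specialized code for each.
function addInts(a, b) { return a + b; }
function concatStrings(a, b) { return a + b; }
console.log(addInts(1, 2), concatStrings("a", "b"));

// A hypothetical "merged" version: the same function now sees numbers on
// some calls and strings on others, so the engine falls back to a slower
// generic path, or deoptimizes and recompiles.
function addOrConcat(a, b) { return a + b; }
console.log(addOrConcat(1, 2));     // observed with numbers...
console.log(addOrConcat("a", "b")); // ...and with strings: now polymorphic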

So, what did we learn?

  • You might have JIT-optimized code, but the compiler re-organizes your code into something else
  • Old browsers don't have JIT but still run your code
  • Closure-compiled JS makes fewer function calls by doing partial execution of your code for simple functions.

So what do you do?

  • Write small, to-the-point functions; the compiler will be able to deal with them better.
  • If you have a very deep understanding of the JIT and have hand-optimized your code using that knowledge, then the Closure Compiler may not be worthwhile to use.
  • If you want the code to run a bit faster on older browsers, it's an excellent tool.
  • The trade-offs are generally worthwhile, but be careful to check things over and not blindly trust it all the time.

In general, your code is faster. You may introduce things that various JIT compilers don't like, but they're going to be rare if your code uses smaller functions and correct prototypal object-oriented design. If you think about the full scope of what the compiler is doing (shorter download AND faster execution), then strange things like var i = true, m = null, r = false; may be a worthwhile trade-off for the compiler to make: even if those lookups run slightly slower, the total lifespan of the code (download plus execution) is faster.

It's also worth noting that the most common bottleneck in web-app execution is the Document Object Model (DOM), and I suggest you put more effort there if your code is slow.
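
As a rough illustration of that point (the element id and data here are hypothetical), batching DOM updates usually matters far more than how true or false are spelled:

// Slow: each innerHTML += forces the browser to re-parse and re-render the list.
function renderSlow(items) {
    var list = document.getElementById("results"); // hypothetical element
    for (var i = 0; i < items.length; i++) {
        list.innerHTML += "<li>" + items[i] + "</li>";
    }
}

// Faster: build the nodes off-document and attach them in a single operation.
function renderFast(items) {
    var list = document.getElementById("results");
    var fragment = document.createDocumentFragment();
    for (var i = 0; i < items.length; i++) {
        var li = document.createElement("li");
        li.textContent = items[i];
        fragment.appendChild(li);
    }
    list.appendChild(fragment);
}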

蝶舞 2024-12-21 14:50:05

It would appear that in modern browsers using the literal true or null vs a variable makes absolutely no difference in almost all cases (as in zero; they are exactly the same). In very few cases, the variable is actually faster.

So, those extra bytes saved are worth it and cost nothing.

true vs variable (http://jsperf.com/true-vs-variable):

null vs variable (http://jsperf.com/null-vs-variable):

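The jsperf pages above may no longer load, so here is a minimal stand-alone sketch of the same comparison (numbers will vary by engine, and modern JITs may well optimize the difference away entirely):

// Compare reading the literal true against reading an aliased variable.
var t = true;

function countWithLiteral(n) {
    var count = 0;
    for (var i = 0; i < n; i++) {
        if (true) count++;
    }
    return count;
}

function countWithAlias(n) {
    var count = 0;
    for (var i = 0; i < n; i++) {
        if (t) count++;
    }
    return count;
}

var N = 1e7;
var start = Date.now();
countWithLiteral(N);
console.log("literal:", Date.now() - start, "ms");

start = Date.now();
countWithAlias(N);
console.log("alias:", Date.now() - start, "ms");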

GRAY°灰色天空 2024-12-21 14:50:05

I think there will be a very slight performance penalty, but unlikely to matter much in newer, modern browsers.

Notice that the Closure Compiler's standard alias variables are all global variables. That means that in an old browser whose JavaScript engine takes linear time to walk up through function scopes (e.g. IE < 9), the deeper you are within nested function calls, the longer it takes to find the variable that holds "true" or "false" etc. Almost all modern JavaScript engines optimize global variable access, so this penalty should no longer hold in many cases.

In addition, there really shouldn't be many places where you see "true" or "false" or "null" directly in compiled code, except for assignments and arguments. For example, if (someFlag == true) ... is mostly just written as if (someFlag) ..., which the compiler turns into a && .... You mostly only see the literals in assignments (someFlag = true;) and arguments (someFunc(true);), which really do not occur very frequently.
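
A rough before-and-after sketch of that point (simplified, hypothetical output rather than real compiler output):

// Source:
var enabled = true;
function reset(callback) {
    if (enabled) {       // boolean tests stay as plain truthiness checks...
        callback(null);  // ...so the literals mostly survive only as arguments
    }
    enabled = false;     // ...and assignments.
}

// Roughly what the compiled output looks like with the standard aliases in place:
var i = true, m = null, r = false;
var a = i;
function b(c) {
    a && c(m);
    a = r;
}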

The conclusion is: although many people (me included) doubt the usefulness of the Closure Compiler's standard aliases, you shouldn't expect any material performance hit. You also shouldn't expect any material benefit in gzipped size, though.
