Static Java bytecode optimizer (like ProGuard) with escape analysis?

Posted 2024-09-05 13:22:51

Optimizations based on escape analysis are a planned feature for ProGuard. In the meantime, are there any existing tools like ProGuard that already perform optimizations which require escape analysis?

Comments (2)

放飞的风筝 2024-09-12 13:22:51

Yes, I think the Soot framework performs escape analysis.
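
For anyone exploring that route, here is a minimal, hedged sketch of how a custom intraprocedural pass could be hooked into Soot's Jimple transformation pack. The phase name "jtp.escapeSketch" and the EscapeSketch class are made up for illustration, exact signatures vary between Soot versions, and the body only flags allocation sites rather than performing real escape analysis.

```java
import java.util.Map;

import soot.Body;
import soot.BodyTransformer;
import soot.PackManager;
import soot.Transform;
import soot.Unit;
import soot.jimple.AssignStmt;
import soot.jimple.NewExpr;

public class EscapeSketch {
    public static void main(String[] args) {
        // Register a phase in the "jtp" (Jimple transformation) pack; it runs on every method body.
        PackManager.v().getPack("jtp").add(
            new Transform("jtp.escapeSketch", new BodyTransformer() {
                @Override
                protected void internalTransform(Body b, String phase, Map<String, String> options) {
                    for (Unit u : b.getUnits()) {
                        // A real escape analysis would track where each allocation flows
                        // (returns, fields, arguments); here we only report allocation sites.
                        if (u instanceof AssignStmt
                                && ((AssignStmt) u).getRightOp() instanceof NewExpr) {
                            System.out.println(b.getMethod() + " allocates: " + u);
                        }
                    }
                }
            }));
        // Delegate to Soot's command-line driver (class path, input classes, output format, etc.).
        soot.Main.main(args);
    }
}
```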

娇纵 2024-09-12 13:22:51

What do you expect from escape analysis at the compiler level? Java classes are more like object files in C - they are linked in the JVM, hence escape analysis can be performed only at the single-method level, which is of limited usefulness and will hamper debugging (e.g. you will have lines of code through which you cannot step).
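
To illustrate the single-method limitation, here is a hypothetical example: the first allocation never leaves the method and is a candidate for optimization, while the second is returned and therefore escapes as far as a per-method analysis can tell.

```java
// Hypothetical illustration of method-level escape analysis.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

class EscapeExample {
    // 'p' never leaves this method: an analysis confined to this method can
    // prove it does not escape, so the heap allocation can be eliminated.
    static int distSquared(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    // The allocated object is returned, so within this single method it escapes;
    // only interprocedural analysis or inlining into the caller could prove otherwise.
    static Point make(int x, int y) {
        return new Point(x, y);
    }
}
```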

In Java's design, the compiler is quite dumb - it checks for correctness (like Lint), but doesn't try to optimize. The smart pieces are put in the JVM - it uses multiple optimization techniques to yield well-performing code on the current platform, under the current conditions. Since the JVM knows all the code that is currently loaded, it can assume a lot more than the compiler can and perform speculative optimizations that are reverted the moment the assumptions are invalidated. The HotSpot JVM can replace code with a more optimized version on the fly while the function is running (e.g. in the middle of a loop, as the code gets 'hotter').
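
If you want to watch this happen, here is a rough sketch (class name and iteration count are arbitrary): run it with -XX:+PrintCompilation and, in typical HotSpot builds, compilation log entries marked with '%' indicate on-stack replacement, i.e. the loop's code being swapped for a compiled version while it is still running.

```java
// Run with: java -XX:+PrintCompilation OsrDemo
// In typical HotSpot builds, lines flagged with '%' mark on-stack replacement:
// the loop becomes hot inside main(), and HotSpot installs compiled code
// for it while the loop is still executing.
public class OsrDemo {
    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 200_000_000; i++) {
            sum += i % 7;        // cheap work to keep the loop hot
        }
        System.out.println(sum); // use the result so the loop is not dead code
    }
}
```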

When not in a debugger, variables with non-overlapping lifetimes are collapsed, invariants are hoisted out of loops, loops are unrolled, etc. All this happens in the JIT-ted code and depends on how much time is spent in the function (it does not make much sense to spend time optimizing code that never runs). If we perform some of these optimizations upfront, the JIT has less freedom and the overall result might be a net negative.
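
As a contrived sketch of the loop-invariant case (the names are made up): the JIT can hoist the invariant product out of the loop on its own once the method gets hot, so rewriting the bytecode ahead of time to do the same gains little.

```java
// Contrived example of a loop-invariant expression.
class HoistDemo {
    static long sumScaled(int[] data, int scale, int bias) {
        long total = 0;
        for (int i = 0; i < data.length; i++) {
            // 'scale * bias' does not depend on 'i'; a JIT compiler can hoist it
            // out of the loop (and may also unroll the loop) once the method is hot.
            total += data[i] * (scale * bias);
        }
        return total;
    }
}
```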

Another optimization is stack allocation of objects that do not escape the current method - this is done in certain cases, though I read a paper somewhere suggesting that the time needed to perform rigorous escape analysis versus the time gained by the optimization makes it not worth it, so the current strategy is more heuristic.
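
A hedged way to observe this on HotSpot: the loop below allocates a short-lived object per iteration; comparing runs with -XX:+DoEscapeAnalysis (on by default in modern HotSpot) and -XX:-DoEscapeAnalysis, together with -verbose:gc, typically shows far less GC activity when escape analysis is enabled, because the temporary objects never reach the heap.

```java
// Compare: java -verbose:gc -XX:+DoEscapeAnalysis AllocDemo
//     vs.: java -verbose:gc -XX:-DoEscapeAnalysis AllocDemo
// With escape analysis on, HotSpot can eliminate the temporary Point allocations,
// which typically shows up as much less GC activity in the -verbose:gc log.
public class AllocDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        long acc = 0;
        for (int i = 0; i < 500_000_000; i++) {
            Point p = new Point(i, i + 1); // never escapes this loop body
            acc += p.x + p.y;
        }
        System.out.println(acc);
    }
}
```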

Overall, the more information the JVM has about your original code, the better it can optimize it. And the optimizations the JVM does are constantly improving, hence I would consider compiled-code optimizations only when talking about very restricted and basic JVMs like those on mobile phones. In those cases you want to run your application through an obfuscator anyway (to shorten class names, etc.).
