For other operating systems, I guess this could work in theory if you built a bunch of compatibility libraries that your transpiled executables could use. But that would mean also transpiling every library used by every program, and would you do that so those libraries also work as native libraries for other native programs on the new OS, rather than just for use by translated binaries? WINE's ability to use native DLLs only requires interposing its own special versions of a few low-level Windows DLLs, and otherwise doing things Windows-style.
Existing "emulator" frameworks like Rosetta-2 already do dynamic translation to native machine code. In that case there's no need to emulate a different system-call interface, since it's MacOS in either case. Very much unlike WINE, where the Windows system-call API has different semantics (not just different names for the same functions), especially when it comes to drawing a UI.
As @ecm points out, QEMU also does dynamic translation, similar to what Rosetta-2 does, but it's a pure on-the-fly JIT, not caching the optimized translated machine code in a file for reuse on future runs like Rosetta-2 does. JIT dynamic translation is a standard emulation technique that, done well, performs better than pure interpretation, much as JVMs use it to run Java bytecode on real hardware.
Caching the results across runs can make it worthwhile to spend more time optimizing during the translation process, like Rosetta-2 does.
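As a rough illustration of that translate-then-cache structure, here's a minimal, self-contained sketch in C. It is not QEMU's or Rosetta-2's actual code: the two-block "guest" program, the `translate_block()` stand-in, and the `tb_cache` table are all made up for illustration. The point it shows is the cache: each guest block gets translated once, and the resulting host code is reused on every later visit; a persistent on-disk version of such a cache is roughly what lets Rosetta-2 skip translation on future runs.

```c
/* Minimal sketch of a JIT-style translate-then-cache dispatch loop.
 * Everything here is illustrative: the "guest" is a toy two-block
 * program, and translate_block() stands in for a real back end that
 * would decode guest instructions and emit host machine code. */
#include <stdint.h>
#include <stdio.h>

#define HALT_PC   UINT64_MAX            /* sentinel: guest program done */
typedef uint64_t guest_pc_t;

struct cpu_state { uint64_t counter; };

/* A translated block: host code that executes one guest basic block
 * and returns the guest PC of the next block to run. */
typedef guest_pc_t (*host_block_fn)(struct cpu_state *);

/* --- stand-ins for translated host code (normally JIT-emitted) ----- */
static guest_pc_t block0(struct cpu_state *s) { s->counter++; return 1; }
static guest_pc_t block1(struct cpu_state *s)
{
    return (s->counter < 5) ? 0 : HALT_PC;      /* loop back or finish */
}

/* Stand-in for the expensive translation step. */
static host_block_fn translate_block(guest_pc_t pc)
{
    printf("translating guest block at pc=%llu\n", (unsigned long long)pc);
    return (pc == 0) ? block0 : block1;
}

/* --- translation cache + dispatch loop ----------------------------- */
#define TB_CACHE_SIZE 64
static struct { guest_pc_t pc; host_block_fn code; } tb_cache[TB_CACHE_SIZE];

static host_block_fn lookup_or_translate(guest_pc_t pc)
{
    size_t slot = (size_t)(pc % TB_CACHE_SIZE);
    if (tb_cache[slot].code == NULL || tb_cache[slot].pc != pc) {
        tb_cache[slot].pc = pc;                  /* miss: translate once */
        tb_cache[slot].code = translate_block(pc);
    }
    return tb_cache[slot].code;                  /* hit: reuse host code */
}

int main(void)
{
    struct cpu_state cpu = { 0 };
    guest_pc_t pc = 0;
    while (pc != HALT_PC)                        /* main emulation loop  */
        pc = lookup_or_translate(pc)(&cpu);
    printf("guest finished, counter=%llu\n", (unsigned long long)cpu.counter);
    return 0;
}
```

Running this prints the "translating" message only twice (once per guest block) even though the loop body executes several times, which is the whole benefit of the cache.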
Having a host framework like QEMU or Rosetta-2 involved in running foreign binaries makes sense, instead of embedding a copy of it into every separate translated binary. It takes less disk space that way.
And it avoids a manual caching / update problem; users can just use foreign binaries directly, instead of having to manually translate them first. The system takes care of translating them.
Binary-to-binary translation usually can't achieve results as good as compiling from source for the target machine, because it can be hard to know whether a side effect on a register or in memory is something that some later code will actually read, or just a private temporary. (Assumptions about standard calling conventions can help, but an obfuscated binary with some hand-written asm might invalidate those assumptions.)
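To make that concrete, here is a toy sketch (not any real translator's analysis) of a backward liveness scan over one basic block of a made-up three-register IR. Without extra knowledge, the translator must assume every register could be read by later code and keep every write; only an added calling-convention assumption (here, "r2 is scratch across a return") lets it prove the final write dead and drop it. The IR, register names, and helper function are all hypothetical.

```c
/* Toy backward liveness scan showing why a binary translator must be
 * conservative about register side effects at the end of a block. */
#include <stdbool.h>
#include <stdio.h>

enum { R0, R1, R2, NREGS };

struct insn {
    const char *text;   /* for printing only          */
    int dst;            /* register written, or -1    */
    int src1, src2;     /* registers read,   or -1    */
};

/* Report writes whose value nothing later in the block reads. */
static void find_dead_writes(const struct insn *code, int n,
                             bool abi_says_r2_is_scratch)
{
    bool live[NREGS] = { true, true, true };    /* conservative default */
    if (abi_says_r2_is_scratch)
        live[R2] = false;                       /* ABI: r2 dead at exit */

    printf("%s ABI assumption:\n",
           abi_says_r2_is_scratch ? "with" : "without");
    for (int i = n - 1; i >= 0; i--) {          /* scan backward        */
        const struct insn *in = &code[i];
        if (in->dst >= 0 && !live[in->dst])
            printf("  insn %d (%s): write is dead, could be dropped\n",
                   i, in->text);
        if (in->dst  >= 0) live[in->dst]  = false;  /* value killed here */
        if (in->src1 >= 0) live[in->src1] = true;   /* read => live      */
        if (in->src2 >= 0) live[in->src2] = true;
    }
}

int main(void)
{
    /* Block ends at a return: is the last write to r2 a real result
     * the caller reads, or just a private temporary?  The machine code
     * alone doesn't say. */
    struct insn block[] = {
        { "r2 = r0 + r1", R2, R0, R1 },
        { "r0 = r2 * r2", R0, R2, R2 },
        { "r2 = r0 + r0", R2, R0, R0 },
    };
    find_dead_writes(block, 3, false);   /* must keep every write        */
    find_dead_writes(block, 3, true);    /* final write to r2 is dead    */
    return 0;
}
```

A compiler working from source never faces this ambiguity, because it knows which values are program variables and which are its own temporaries; the translator has to reconstruct that from the bytes, and hand-written or obfuscated asm can break whatever ABI rules it leans on.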