Do stack-based machines depend on register-based machines?
Normal CPUs (for example, those in Android devices) are register-based machines. The Java Virtual Machine is a stack-based machine. But does a stack-based machine depend on a register-based machine to work? Can't a stack-based machine run on its own, given that it is not an OS? Are there any examples of stack-based machines other than the JVM? Some say 1 operand, some say 2 operands; why is that needed?
The JVM does not mention the existence of registers anywhere. From its perspective, memory exists in only a few places, such as the per-thread stack, the method area, runtime constant pools, etc. That said, if you wanted to actually implement a physical device that adhered to the JVM, you'd almost certainly need registers to hold some of the temporary values generated when executing certain bytecodes, or to maintain some extra scratch information on the side. For example, try looking up the multianewarray instruction and see if you could implement it without registers. :-)
One parallel you can find in real CPUs these days is that while there is a dedicated set of registers available to programmers, most CPUs have substantially more registers that are used internally for various purposes. For example, most MIPS chips have a huge number of registers used for pipelining. They hold things like the control bits from previous instructions. I would be blown away if x86 were any different.
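To see why multianewarray needs scratch state, here is a sketch (in plain Java, not bytecode; the method and class names are illustrative, not part of any spec) of the work that instruction implies: allocating nested arrays requires loop counters and intermediate references, exactly the temporaries a hardware JVM would keep in registers.

```java
// Illustrative sketch of the work behind multianewarray: building an
// n-dimensional array needs loop counters and intermediate references,
// i.e. scratch values that a physical implementation would hold in registers.
public class MultiANewArrayDemo {
    // Recursively allocate a rectangular array of the given dimensions.
    static Object allocate(int[] dims, int level) {
        if (level == dims.length - 1) {
            return new int[dims[level]];           // innermost dimension
        }
        Object[] outer = new Object[dims[level]];  // temporary reference
        for (int i = 0; i < dims[level]; i++) {    // loop counter = scratch state
            outer[i] = allocate(dims, level + 1);
        }
        return outer;
    }

    public static void main(String[] args) {
        Object[] a = (Object[]) allocate(new int[]{3, 4}, 0);
        System.out.println(a.length + " x " + ((int[]) a[0]).length);
    }
}
```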
The thing to remember is that it's not registers that really define how a register-based machine versus a stack-based machine works. In most architectures, you have O(1) registers that are dedicated for internal use. Even the JVM has these - each method has a "local variables array" that initially holds the function's parameters, but can also be used as scratch space if need be. The more important part about stack machines that differentiates them from other machines is how the extensible memory works. In most computers, memory is random-access and you can read from any location you want at any time. That is, with n memory locations, you have O(n) memory readable at any time. In stack-based machines, you only have access to the top few spots of the stack, so you only have O(1) memory locations readable at any one time.
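The contrast can be made concrete with a toy example (my own sketch, not JVM internals): computing (2 + 3) * 4 in a register style, where any slot is readable at any time, versus a stack style, where operands are implicit and only the top of the stack is visible.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: the same computation, (2 + 3) * 4, in two machine styles.
public class MachineStyles {
    // Register style: any "register" slot is addressable at any moment.
    static int registerStyle() {
        int[] r = new int[4];
        r[0] = 2; r[1] = 3; r[2] = 4;
        r[3] = (r[0] + r[1]) * r[2];   // operands are named directly
        return r[3];
    }

    // Stack style: instructions only see the top of the operand stack.
    static int stackStyle() {
        Deque<Integer> s = new ArrayDeque<>();
        s.push(2); s.push(3);
        s.push(s.pop() + s.pop());     // like iadd: pop two, push one
        s.push(4);
        s.push(s.pop() * s.pop());     // like imul
        return s.pop();
    }

    public static void main(String[] args) {
        System.out.println(registerStyle()); // 20
        System.out.println(stackStyle());    // 20
    }
}
```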
In theory, because the JVM is supposed to represent a full virtual machine, you could have a computer that booted up and just ran a JVM without any OS (or rather, the JVM would be the OS, and your "programs" would just be Java bytecodes and class files).
There are a few other stack-based languages, of which the first that jumps to mind is Forth. I mention Forth because it's explicitly a stack-based language; everything you do is phrased in terms of manipulating an operand stack. What's cool about this with regards to your original question is that Forth used to be extremely popular among hobbyists because you could really easily port it to embedded devices. To get a full Forth interpreter working you don't need a really powerful OS - you just need the command interpreter. Forth isn't as popular these days, but it's still a really cool language.
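To give a flavor of the Forth style described above, here is a toy Forth-like evaluator in Java (my own sketch; real Forth has a far richer word set and a compile mode): every word either pushes a number or manipulates the operand stack.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A toy Forth-style evaluator: each whitespace-separated word either
// pushes a number or manipulates the operand stack.
public class TinyForth {
    static int eval(String program) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String word : program.trim().split("\\s+")) {
            switch (word) {
                case "+"    -> stack.push(stack.pop() + stack.pop());
                case "*"    -> stack.push(stack.pop() * stack.pop());
                case "dup"  -> stack.push(stack.peek());      // duplicate top
                case "swap" -> {                              // swap top two
                    int a = stack.pop(), b = stack.pop();
                    stack.push(a); stack.push(b);
                }
                default     -> stack.push(Integer.parseInt(word));
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        // "3 4 + 5 *" means (3 + 4) * 5 in postfix notation.
        System.out.println(eval("3 4 + 5 *")); // 35
        System.out.println(eval("6 dup *"));   // 36
    }
}
```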
Another stack-based language that's in wide use is PostScript, which has lost a lot of ground to PDF but is still used extensively in environments where you need to render scalable graphics on a variety of platforms. It technically is a Turing-complete programming language, though few people use it that way.
I know you've already selected your answer, but I'd like to address the whole "stack machine" thing.
While most physical CPUs are, in fact, register machines, there have been stack machines as physical CPUs. Burroughs' B5000- and B6000-series machines, for example, or the RTX2000-series chips used in space flight (originally implemented by Chuck Moore in gate array logic and later commercialized). The UCSD Pascal p-Machine was implemented straight up in hardware as well by a variety of implementers.
In terms of computational strength, register and stack machines are roughly equivalent. (It depends, of course, on which precise models of register or stack machines you're dealing with.) Stack machines have the advantage of simplicity, small size and expandability. Register machines tend to be faster. Register machines can emulate stack machines (that's what the BP and SP registers in the x86 architecture are for, after all!) and stack machines can themselves imitate register machines if need be.
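The emulation in the register-to-stack direction is simple enough to sketch (an illustration of the idea, not x86 semantics): one register acts as the stack pointer, indexing into ordinary random-access memory, which is essentially what SP does on x86.

```java
// Sketch of a register machine emulating a stack: a single "sp" register
// indexes into plain random-access memory, mirroring the role of x86's SP.
public class EmulatedStack {
    private final int[] memory = new int[1024]; // ordinary RAM
    private int sp = 0;                         // the "stack pointer" register

    void push(int v) { memory[sp++] = v; }      // store, then bump SP
    int pop()        { return memory[--sp]; }   // drop SP, then load

    public static void main(String[] args) {
        EmulatedStack s = new EmulatedStack();
        s.push(10); s.push(32);
        System.out.println(s.pop() + s.pop()); // 42
    }
}
```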
edited to add
I'd almost forgotten to point you to a book that discusses stack computers in depth. Koopman is a bit of a fanboi for stack computers and his prediction that they were "the new wave" was woefully wrong, but it's an interesting read.