Why are both little-endian and big-endian in use?
Why are both little- and big-endian still in use today, after ~40 years of binary computer science? Are there algorithms or storage formats that work better with one and much worse with the other? Wouldn't it be better if we all switched to one and stuck with it?
Answers (3)
Both big and little endian have their advantages and disadvantages. Even if one were clearly superior (which is not the case), there is no way that any legacy architecture would ever be able to switch endianness, so I'm afraid you're just going to have to learn to live with it.
Little Endian makes typecasts easier. For example, if you have a 16-bit number you can simply treat the same memory address as a pointer to an 8-bit number, as it contains the lowest 8 bits. So you do not need to know the exact data type you are dealing with (although in most cases you do know anyway).
Big Endian is a bit more human-readable. Bits are stored in memory as they appear in logical order (most-significant values first), just like for any human-used number system.
In an age of many, many abstraction layers, though, these arguments don't really count anymore. I think the main reason we still have both is that nobody wants to switch. There is no obvious winner between the two systems, so why change anything if your old system works perfectly well?
When adding two numbers (on paper or in a machine), you start with the least significant digits and work towards the most significant digits. (Same goes for many other operations).
On the Intel 8088, which had 16-bit registers but an 8-bit data bus, being little-endian allowed such instructions to start operation after the first memory cycle. (Of course it should be possible for the memory fetches of a word to be done in decreasing order rather than increasing but I suspect this would have complicated the design a little.)
On most processors the bus width matches the register width so this no longer confers an advantage.
Big-endian numbers, on the other hand, can be compared starting with the MSB (although many compare instructions actually do a subtract which needs to start with the LSB anyway). The sign bit is also very easy to get.
No. There are small advantages here and there but nothing major.
I actually think little-endian is more natural and consistent: the significance of a bit is
2 ^ (bit_pos + 8 * byte_pos), whereas with big-endian the significance of a bit is
2 ^ (bit_pos + 8 * (word_size - byte_pos - 1)).
Due to the dominance of x86, we've definitely gravitated towards little-endian. The ARM chips in many mobile devices have configurable endianness but are often set to LE to be more compatible with the x86 world. Which is fine by me.