Question about converting signed bytes between binary and hexadecimal
1)
I understand that when you're converting binary to decimal, the leftmost bit represents 0, 1... and so on. So, for example, to convert 0001 to decimal it is 0*2^0 + 0*2^1 + 0*2^2 + 1*2^3, so the decimal value would be 8.
2)
When, for example, you have the signed hex 0x80, it converts to binary 1000 0000. However, to compute the decimal value of this binary representation, since it is signed we have to invert the 7 bits, which gives 1111111, and add 1, which gives us 10000000, which is -128.
My question is: why, in the second case, when we're computing the decimal value for the signed byte, did we have to start from the rightmost bit as 0, so that we have ... + 1*2^8? Why isn't 2^0 the leftmost bit, as we computed in 1), for the second case?
Thanks.
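For concreteness, here is a minimal C sketch of the two readings described in the question: the plain unsigned place-value reading and the signed (two's-complement) reading of the same 0x80 bit pattern. It assumes an 8-bit byte on a typical two's-complement machine; the variable names are only illustrative.

```c
#include <stdio.h>

int main(void) {
    unsigned char bits = 0x80;          /* bit pattern 1000 0000 */

    /* Unsigned reading: plain place values, bit 0 (rightmost) is 2^0,
       bit 7 (leftmost) is 2^7, so the value is 128. */
    int as_unsigned = bits;

    /* Signed reading of the same pattern: on a typical two's-complement
       machine this cast gives -128, matching the manual
       "invert the bits and add 1" rule. */
    int as_signed = (signed char)bits;

    printf("unsigned: %d, signed: %d\n", as_unsigned, as_signed);
    return 0;
}
```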
Comments (2)
No, usually binary is stated the other way...0001 is 1, 1000 is 8.
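As a quick check of this point, a tiny C sketch that writes the place values out explicitly (the rightmost bit is 2^0):

```c
#include <stdio.h>

int main(void) {
    /* The rightmost bit is 2^0; the leftmost bit of a 4-bit value is 2^3. */
    int b0001 = 0*8 + 0*4 + 0*2 + 1*1;   /* 0001 -> 1 */
    int b1000 = 1*8 + 0*4 + 0*2 + 0*1;   /* 1000 -> 8 */
    printf("%d %d\n", b0001, b1000);      /* prints "1 8" */
    return 0;
}
```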
To answer point 1: not quite. 0001 is actually 1, while 1000 is 8. You appear to be coming from the wrong end. The binary number 1101, for example, would be 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0 = 13.

For point 2, the easiest way to turn a bit pattern into a signed number is to first turn it into an unsigned value (0x80 = 128), then subtract the bias (256 for eight bits, 65536 for 16 bits, and so on) to get -128. The bias should only affect the calculation at the end of the process; it's a way to map the range 0..255 to -128..127, or 0..65535 to -32768..32767.
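A minimal C sketch of this "subtract the bias" mapping, assuming 8-bit bytes; the helper name signed_from_byte is only for illustration.

```c
#include <stdio.h>

/* Read the pattern as unsigned first (0..255), then map the upper half
   down by subtracting the bias of 256, giving the range -128..127. */
static int signed_from_byte(unsigned int pattern) {
    unsigned int value = pattern & 0xFF;   /* unsigned reading, 0..255 */
    return (value >= 128) ? (int)value - 256 : (int)value;
}

int main(void) {
    printf("%d\n", signed_from_byte(0x80));  /* -128 */
    printf("%d\n", signed_from_byte(0x7F));  /*  127 */
    printf("%d\n", signed_from_byte(0xFF));  /*   -1 */
    return 0;
}
```

The same idea scales to 16 bits by masking with 0xFFFF and subtracting 65536 for values of 32768 and above.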