C#: split two 4-digit numbers into 2 bytes each, then convert to UInt32
I have two 4-digit numbers.
I need to split each of them into 2 bytes and then convert the four bytes to a UInt32.
Am I doing this correctly?
byte[] data = new byte[4];
byte b1 = (byte)OldPin, b2 = (byte)(OldPin >> 8);
byte b3 = (byte)NewPin, b4 = (byte)(NewPin >> 8);
data[0] = b1;
data[1] = b2;
data[2] = b3;
data[3] = b4;
var result = BitConverter.ToUInt32(data, 0);
Additionally, I need to do the same thing, but one of the 4-digit numbers is a string and the other is 0.
byte[] data = new byte[4];
byte b1 = (byte)0, b2 = (byte)0;
byte b3 = (byte)Convert.ToInt64(enteredPin), b4 = (byte)(Convert.ToInt64(enteredPin) >> 8);
data[0] = b1;
data[1] = b2;
data[2] = b3;
data[3] = b4;
var result = BitConverter.ToUInt32(data, 0);
What you ask is inherently ambiguous; there is no single meaning of the line.
By which I mean: say we have the 4-digit number 4660 - which is, at least in human terms, the bytes (displayed in hex) 12 34 as a 16-bit payload (and the bytes 00 00 12 34 as a 32-bit payload). The problem is: the bytes 12 34 are just one interpretation; humans tend to think in big-endian terms (meaning: when we write a number, we write the "big end" first, i.e. in decimal we write the thousands, then the hundreds, then the tens, then the units); that's not how most computers work; they are often actually little-endian at the octet level, so to most computers the decimal 4660 is actually the bytes 34 12 (assuming we mean a 16-bit number; it would be 34 12 00 00 as a 32-bit number).
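As a quick illustration, here is a minimal sketch of that ordering (the commented output assumes the code runs on a little-endian machine, which covers x86/x64 and most ARM targets):

using System;

class EndianDemo
{
    static void Main()
    {
        // 4660 is 0x1234; BitConverter uses the CPU's endianness, so on a
        // little-endian machine the "little end" comes out first.
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes((ushort)4660))); // 34-12
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(4660u)));        // 34-12-00-00
    }
}
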
So: we need to discuss and define what endianness we want at every stage. That's also why BitConverter is usually useless - since it is CPU-endian, meaning: "whatever endianness the current CPU uses". This is often fine for in-process work, but is useless when sharing data with the world in a way that needs to give the same result on any machine! So: we need to use endianness-aware conversions. Fortunately, this is pretty easy in modern .NET; consider:
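A minimal sketch of that idea, using BinaryPrimitives from System.Buffers.Binary (the oldPin / newPin / enteredPin names mirror the question; the 2-bytes-per-PIN, low-half-first layout is an assumption carried over from the question's code, not a requirement):

using System;
using System.Buffers.Binary;

class PinPacking
{
    // Packs two 4-digit PINs into 4 bytes (2 bytes each, little-endian)
    // and reads them back as a little-endian UInt32.
    static uint Pack(ushort oldPin, ushort newPin)
    {
        Span<byte> data = stackalloc byte[4];
        BinaryPrimitives.WriteUInt16LittleEndian(data, oldPin);          // bytes 0-1
        BinaryPrimitives.WriteUInt16LittleEndian(data.Slice(2), newPin); // bytes 2-3
        return BinaryPrimitives.ReadUInt32LittleEndian(data);
    }

    static void Main()
    {
        Console.WriteLine(Pack(1234, 4321));

        // The second case from the question: the first half is 0,
        // the second half comes from a string PIN.
        string enteredPin = "1234";
        Console.WriteLine(Pack(0, ushort.Parse(enteredPin)));
    }
}
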
If we want to use big-endian conversions instead: just replace LittleEndian with BigEndian and you're done. I've used little-endian notation here because it is a common choice in IO work, but in reality: you need to consult your specification and use the correct endianness - and I can't tell you which that is. It might even be different for different parts of that operation!
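For example, a big-endian variant of the same packing might look like the sketch below (again an illustration only; defer to your spec for the actual layout):

using System;
using System.Buffers.Binary;

class BigEndianPacking
{
    static void Main()
    {
        ushort oldPin = 1234, newPin = 4321;

        // Same 2-bytes-per-PIN layout, but every write/read is explicitly big-endian.
        Span<byte> data = stackalloc byte[4];
        BinaryPrimitives.WriteUInt16BigEndian(data, oldPin);
        BinaryPrimitives.WriteUInt16BigEndian(data.Slice(2), newPin);
        uint result = BinaryPrimitives.ReadUInt32BigEndian(data);

        Console.WriteLine(result);
    }
}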