Is there any practical difference between the .NET decimal values 1m and 1.0000m?

The internal storage is different:

1m      : 0x00000001 0x00000000 0x00000000 0x00000000
1.0000m : 0x000186a0 0x00000000 0x00000000 0x00050000

But is there any situation where knowledge of the "significant digits" would be used by a method in the BCL?

I ask because I'm working on a means of compressing the space required for decimal values for disk storage or network transport, and am toying with the idea of "normalizing" each value before storing it to improve its compressibility. But I'd like to know whether that is likely to cause issues down the line. I'm guessing it should be fine, but only because I don't see any methods or properties that expose the precision of the value. Does anyone know otherwise?
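
For concreteness, the kind of normalization I have in mind looks something like the sketch below. The Normalize name is mine, and it leans on decimal division choosing the smallest scale that preserves the exact result, which I'd want to verify on the target runtime rather than take as guaranteed:

using System;

static class DecimalNormalization
{
    // Hypothetical helper: strip trailing zeros before serializing.
    // Dividing by a 1 that carries the maximum scale lets the runtime
    // pick the smallest scale that still represents the exact value.
    public static decimal Normalize(decimal value)
    {
        return value / 1.000000000000000000000000000000000m;
    }
}

// Console.WriteLine(DecimalNormalization.Normalize(1.0000m)); // 1
// Console.WriteLine(DecimalNormalization.Normalize(1.2500m)); // 1.25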

笨笨の傻瓜 2024-11-10 02:29:11

The reason for the difference in encoding is that the Decimal data type stores the number as a whole number (a 96-bit integer) together with a scale, which is used to form the divisor that produces the fractional value. The value is essentially

integer / 10^scale

Internally the Decimal type is represented as four Int32 values; see the documentation of Decimal.GetBits for more detail. In summary, GetBits returns an array of four Int32s, where each element represents the following portion of the Decimal encoding:

Element 0,1,2 - Represent the low, middle and high 32 bits of the 96-bit integer
Element 3     - Bits 0-15 Unused
                Bits 16-23 the exponent (scale), which is the power of 10 to divide the integer by
                Bits 24-30 Unused
                Bit 31 the sign, where 0 is positive and 1 is negative
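
As a minimal sketch, those fields can be read back with Decimal.GetBits; the bit masks below just follow the layout above:

using System;

class GetBitsDemo
{
    static void Main()
    {
        // Decompose each value into its four Int32 components.
        foreach (decimal d in new[] { 1m, 1.0000m })
        {
            int[] bits = decimal.GetBits(d);
            int scale = (bits[3] >> 16) & 0xFF;            // bits 16-23 of element 3
            bool negative = (bits[3] & int.MinValue) != 0; // bit 31 of element 3
            Console.WriteLine("{0}: lo=0x{1:x8} mid=0x{2:x8} hi=0x{3:x8} scale={4} negative={5}",
                d, bits[0], bits[1], bits[2], scale, negative);
        }
        // 1: lo=0x00000001 mid=0x00000000 hi=0x00000000 scale=0 negative=False
        // 1.0000: lo=0x000186a0 mid=0x00000000 hi=0x00000000 scale=4 negative=False
    }
}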

So in your example, very simply put, when 1.0000m is encoded as a decimal the actual representation is 10000 / 10^4, while 1m is represented as 1 / 10^0: mathematically the same value, just encoded differently.

If you use the native .NET operators for the decimal type, and do not manipulate/compare the bits/bytes yourself, you should be safe.
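
For example, the built-in comparisons already treat the two encodings as the same value:

Console.WriteLine(1m == 1.0000m);                             // True
Console.WriteLine(decimal.Compare(1m, 1.0000m));              // 0
Console.WriteLine(1m.GetHashCode() == 1.0000m.GetHashCode()); // True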

You will also notice that string conversion takes this binary representation into consideration and produces different strings, so you need to be careful if you ever rely on the string representation.

雨的味道风的声音 2024-11-10 02:29:11

The decimal type tracks scale because it's important in arithmetic. If you do long multiplication, by hand, of two numbers — for instance, 3.14 * 5.00 — the result has 6 digits of precision and a scale of 4.

To do the multiplication, ignore the decimal points (for now) and treat the two numbers as integers.

  3.14
* 5.00
------
  0000 -- 0 * 314 (0 in the one's place)
 00000 -- 0 * 314 (0 in the 10's place)
157000 -- 5 * 314 (5 in the 100's place)
------
157000

That gives you the unscaled result. Now, count the total number of digits to the right of the decimal points in the expression (that would be 4) and insert the decimal point 4 places from the right:

15.7000

That result, while equivalent in value to 15.7, is more precise than the value 15.7. The value 15.7000 has 6 digits of precision and a scale of 4; 15.7 has 3 digits of precision and a scale of 1.
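
decimal carries that scale through its arithmetic, which is easy to check (output shown in comments):

Console.WriteLine(3.14m * 5.00m); // 15.7000 (6 digits of precision, scale 4)
Console.WriteLine(3.14m * 5m);    // 15.70   (scale 2: only 3.14 contributes fractional digits)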

If one is trying to do precision arithmetic, it is important to track the precision and scale of your values and results, as it tells you something about the precision of your results. (Note that precision isn't the same as accuracy: measure something with a ruler graduated in 1/10ths of an inch, and the best you can say about the resulting measurement, no matter how many trailing zeros you put to the right of the decimal point, is that it is accurate to, at best, 1/10th of an inch. Another way of putting it would be to say that your measurement is accurate, at best, to within +/- 5/100ths of an inch of the stated value.)

决绝 2024-11-10 02:29:11

The only reason I can think of is that invoking ToString returns the exact textual representation from the source code.

Console.WriteLine(1m); // 1
Console.WriteLine(1.000m); // 1.000