Why does decimal in C# use more memory than double to store a narrower range of numbers?

Published 2025-01-20 16:04:35


This is the Numeric Types section from the book C# 9.0 in a Nutshell, which shows the numeric types in C#:

[Image: table of C# numeric types with their sizes and ranges]

I want to know why decimal uses more space than double to store a narrower range of numbers in C#.


2 Answers

转角预定愛 2025-01-27 16:04:36


The decimal type has a higher precision with a smaller range of exponents compared to double. It's useful in situations where you need accurate results out to > 16 digits (the effective limit of precision of the double type) while being close to or above ±1.

The .NET Decimal type consists of a 96-bit unsigned integer value (the significand or mantissa), a sign bit and an 8-bit scale value (called the exponent, although it's really not) of which only 6 bits are used. The rest of the bits are unused and must be zero/unset.

The largest integer value that can be stored in 96 bits is (2^96)-1, or 79,228,162,514,264,337,593,543,950,335. This is the absolute largest value that can be stored in a decimal, with all bits set in the mantissa and both the sign and exponent set to all-zeroes. In terms of integer values we can store any number between ±((2^96)-1) exactly, with no loss of precision.
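This upper bound is easy to check: a minimal sketch (class name illustrative) comparing 2^96 - 1, computed with BigInteger, against decimal.MaxValue:

```csharp
using System;
using System.Numerics;

class DecimalMaxDemo
{
    static void Main()
    {
        // 2^96 - 1, computed exactly as an arbitrary-precision integer
        BigInteger max96 = (BigInteger.One << 96) - 1;

        Console.WriteLine(max96);            // 79228162514264337593543950335
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335

        // decimal.MaxValue is exactly the all-ones 96-bit mantissa at scale 0
        Console.WriteLine(max96 == new BigInteger(decimal.MaxValue)); // True
    }
}
```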

The scale value takes those integers and shifts them right by a number of decimal places. At scale = 1 the value is divided by 10, at scale = 2 by 100, and so on. This continues all the way up to scale = 28, where only the top possible digit (the 7 on the far left of that big number above) remains as the integer part and the rest of the digits are decimal digits. And that's as far as the scale goes. However, if your value is small and you divide it by 10^28, you get much closer to zero (as close as 1e-28), but you can have no digits past the 28th decimal place.
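The scale is directly observable with decimal.GetBits, which returns the four 32-bit words of the struct: three mantissa words, then a flags word carrying the sign in bit 31 and the scale in bits 16-23. A small sketch (class name illustrative):

```csharp
using System;

class ScaleDemo
{
    static void Main()
    {
        // 1.5m is stored as the integer 15 with scale 1, i.e. 15 / 10^1
        int[] bits = decimal.GetBits(1.5m);
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine($"mantissa = {bits[0]}, scale = {scale}"); // mantissa = 15, scale = 1

        // Trailing zeros are preserved: 1.50m is stored as 150 / 10^2
        int[] bits2 = decimal.GetBits(1.50m);
        Console.WriteLine($"mantissa = {bits2[0]}, scale = {(bits2[3] >> 16) & 0xFF}"); // mantissa = 150, scale = 2
    }
}
```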

In fact, any absolute value less than 1 loses precision. Values in the range 0.1 <= v < 1 have at most 28 digits, in the range 0.01 <= v < 0.1 there are at most 27 digits, and so on. The more zeroes you have after the decimal point, the fewer digits of precision remain.
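One way to see this digit budget is to divide by 3, which fills every available fractional place; a sketch (class name illustrative, digit counts assume the 28-place scale cap):

```csharp
using System;

class SubUnityDemo
{
    static void Main()
    {
        // Division fills every available fractional place (the scale caps at 28)
        Console.WriteLine(1m / 3);    // 0.3333333333333333333333333333 (28 threes)
        // Start two orders of magnitude lower and two significant digits are gone:
        // the result still stops at the 28th decimal place
        Console.WriteLine(0.01m / 3);
    }
}
```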

By comparison, double is a 64-bit IEEE 754 'binary64' floating point value composed of 52 bits of fraction, 11 bits of binary exponent (powers of 2 from 2^-1022 through 2^1023, roughly 10^-308 through 10^308) and a sign bit. Valid positive (non-zero) values range from 5e-324 (double.Epsilon, a subnormal) to 1.7976931348623157e+308 (double.MaxValue)... but you won't ever get more than about 16 decimal digits worth of accuracy in your calculations.
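These limits can be probed directly; a small sketch (class name illustrative) showing the range constants and the binary-rounding behaviour that decimal avoids:

```csharp
using System;

class DoubleDemo
{
    static void Main()
    {
        Console.WriteLine(double.Epsilon);  // 5E-324, the smallest positive subnormal
        Console.WriteLine(double.MaxValue); // 1.7976931348623157E+308

        // 0.1 has no exact binary64 representation, so the arithmetic drifts...
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False
        // ...while decimal stores 0.1 exactly (mantissa 1, scale 1)
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}
```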

There are a few cases where decimal is preferred over double, due mostly to the precision, but in almost all normal cases double is preferred for its greater absolute range and much greater speed. Depending on your use case you might even prefer float over double if speed is more important than precision.
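The classic case favouring decimal is accumulating exact fractional amounts (e.g. currency); a sketch (class name illustrative) of the drift that motivates the choice:

```csharp
using System;

class AccumulationDemo
{
    static void Main()
    {
        double d = 0;
        decimal m = 0;
        for (int i = 0; i < 10; i++) { d += 0.1; m += 0.1m; }

        Console.WriteLine(d == 1.0); // False: each binary 0.1 is slightly off
        Console.WriteLine(m == 1m);  // True: 0.1m is exact, so the sum is exact
    }
}
```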

太阳男子 2025-01-27 16:04:36


A decimal has about 28-29 digits of precision and a double has about 15-17 digits of precision. Therefore a decimal needs 16 bytes while a double needs only 8 bytes.

See https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
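Those digit and byte counts can be seen side by side; a quick sketch (class name illustrative):

```csharp
using System;

class DigitBudgetDemo
{
    static void Main()
    {
        // double: ~15-17 significant decimal digits in 8 bytes
        Console.WriteLine((1.0 / 3).ToString("G17"));          // 0.33333333333333331
        // decimal: 28-29 significant digits in 16 bytes
        Console.WriteLine(decimal.MaxValue.ToString().Length); // 29
        Console.WriteLine(sizeof(decimal) + " vs " + sizeof(double)); // 16 vs 8
    }
}
```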
