Is floating point math broken?

Published 2025-01-17 21:35:50

Consider the following code:

0.1 + 0.2 == 0.3  ->  false
0.1 + 0.2         ->  0.30000000000000004

Why do these inaccuracies happen?

Comments (30)

等往事风中吹 2025-01-24 21:35:51

Decimal numbers such as 0.1, 0.2, and 0.3 are not represented exactly in binary encoded floating point types. The sum of the approximations for 0.1 and 0.2 differs from the approximation used for 0.3, hence the falsehood of 0.1 + 0.2 == 0.3 as can be seen more clearly here:

#include <stdio.h>

int main() {
    printf("0.1 + 0.2 == 0.3 is %s\n", 0.1 + 0.2 == 0.3 ? "true" : "false");
    printf("0.1 is %.23f\n", 0.1);
    printf("0.2 is %.23f\n", 0.2);
    printf("0.1 + 0.2 is %.23f\n", 0.1 + 0.2);
    printf("0.3 is %.23f\n", 0.3);
    printf("0.3 - (0.1 + 0.2) is %g\n", 0.3 - (0.1 + 0.2));
    return 0;
}

Output:

0.1 + 0.2 == 0.3 is false
0.1 is 0.10000000000000000555112
0.2 is 0.20000000000000001110223
0.1 + 0.2 is 0.30000000000000004440892
0.3 is 0.29999999999999998889777
0.3 - (0.1 + 0.2) is -5.55112e-17

For these computations to be evaluated more reliably, you would need to use a decimal-based representation for floating point values. The C Standard does not specify such types by default, but as an extension described in a Technical Report.

The _Decimal32, _Decimal64 and _Decimal128 types might be available on your system (for example, GCC supports them on selected targets, but Clang does not support them on OS X).

呆头 2025-01-24 21:35:51

Normal arithmetic is base-10, so decimals represent tenths, hundredths, etc. When you try to represent a floating-point number in binary base-2 arithmetic, you are dealing with halves, fourths, eighths, etc.

In the hardware, floating point numbers are stored as integer mantissas and exponents. The mantissa represents the significant digits. The exponent is like scientific notation, but it uses a base of 2 instead of 10. For example, 64.0 would be represented with a mantissa of 1 and an exponent of 6; 0.125 would be represented with a mantissa of 1 and an exponent of -3.

Floating point decimals have to add up negative powers of 2

0.1b = 0.5d
0.01b = 0.25d
0.001b = 0.125d
0.0001b = 0.0625d
0.00001b = 0.03125d

and so on.

It is common to use an error delta instead of using equality operators when dealing with floating point arithmetic. Instead of

if(a==b) ...

you would use

delta = 0.0001; // or some arbitrarily small amount
if(a - b > -delta && a - b < delta) ...
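
If your language has one, a ready-made tolerance comparison saves picking delta by hand. A minimal sketch in Python (math.isclose uses a relative tolerance by default, with an optional absolute one):

import math

a = 0.1 + 0.2
b = 0.3

# Hand-rolled delta test, as above:
delta = 0.0001
print(-delta < a - b < delta)            # True

# Standard-library equivalent; rel_tol scales with the magnitude of the inputs:
print(math.isclose(a, b, rel_tol=1e-9))  # True
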
巷子口的你 2025-01-24 21:35:51

There are projects on fixing floating point implementation issues.

Take a look at Unum & Posit for example, which showcases a number type called posit (and its predecessor unum) that promises to offer better accuracy with fewer bits. If my understanding is correct, it also fixes the kind of problems in the question. It is quite an interesting project, and the person behind it is a mathematician, Dr. John Gustafson.

The whole thing is open source, with many actual implementations in C/C++, Python, Julia and C# (https://hastlayer.com/arithmetics).

蓝眼泪 2025-01-24 21:35:50

Binary floating point math works like this. In most programming languages, it is based on the IEEE 754 standard. The crux of the problem is that numbers are represented in this format as a whole number times a power of two; rational numbers (such as 0.1, which is 1/10) whose denominator is not a power of two cannot be exactly represented.

For 0.1 in the standard binary64 format, the representation can be written exactly as

  • 0.1000000000000000055511151231257827021181583404541015625 in decimal, or
  • 0x1.999999999999ap-4 in C99 hexfloat notation.

In contrast, the rational number 0.1, which is 1/10, can be written exactly as

  • 0.1 in decimal, or
  • 0x1.99999999999999...p-4 in an analog of C99 hexfloat notation, where the ... represents an unending sequence of 9's.

The constants 0.2 and 0.3 in your program will also be approximations to their true values. It happens that the closest double to 0.2 is larger than the rational number 0.2 but that the closest double to 0.3 is smaller than the rational number 0.3. The sum of 0.1 and 0.2 winds up being larger than the rational number 0.3 and hence disagreeing with the constant in your code.
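
If you want to verify this without doing the binary arithmetic by hand, here is a quick check (a sketch in Python; float.hex() prints the C99 hexfloat form used above):

print((0.1).hex())        # 0x1.999999999999ap-4
print((0.2).hex())        # 0x1.999999999999ap-3
print((0.3).hex())        # 0x1.3333333333333p-2
print((0.1 + 0.2).hex())  # 0x1.3333333333334p-2, one ulp above the double for 0.3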

A fairly comprehensive treatment of floating-point arithmetic issues is What Every Computer Scientist Should Know About Floating-Point Arithmetic. For an easier-to-digest explanation, see floating-point-gui.de.

Side Note: All positional (base-N) number systems share this problem with precision

Plain old decimal (base 10) numbers have the same issues, which is why numbers like 1/3 end up as 0.333333333...

You've just stumbled on a number (3/10) that happens to be easy to represent with the decimal system but doesn't fit the binary system. It goes both ways (to some small degree) as well: 1/16 is an ugly number in decimal (0.0625), but in binary it looks as neat as a 10,000th does in decimal (0.0001) - if we were in the habit of using a base-2 number system in our daily lives, you'd even look at that number and instinctively understand you could arrive there by halving something, halving it again, and again and again.

Of course, that's not exactly how floating-point numbers are stored in memory (they use a form of scientific notation). However, it does illustrate the point that binary floating-point precision errors tend to crop up because the "real world" numbers we are usually interested in working with are so often powers of ten - but only because we use a decimal number system day-to-day. This is also why we'll say things like 71% instead of "5 out of every 7" (71% is an approximation since 5/7 can't be represented exactly with any decimal number).

So, no: binary floating point numbers are not broken, they just happen to be as imperfect as every other base-N number system :)

Side Note: Working with Floats in Programming

In practice, this problem of precision means you need to use rounding functions to round your floating point numbers off to however many decimal places you're interested in before you display them.

You also need to replace equality tests with comparisons that allow some amount of tolerance, which means:

Do not do if (x == y) { ... }

Instead do if (abs(x - y) < myToleranceValue) { ... }.

where abs is the absolute value. myToleranceValue needs to be chosen for your particular application - and it will have a lot to do with how much "wiggle room" you are prepared to allow, and what the largest number you are going to be comparing may be (due to loss of precision issues). Beware of "epsilon" style constants in your language of choice. These can be used as tolerance values but their effectiveness depends on the magnitude (size) of the numbers you're working with, since calculations with large numbers may exceed the epsilon threshold.
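
To see why a machine-epsilon-sized tolerance fails for large values, consider this sketch (Python; sys.float_info.epsilon is the gap between 1.0 and the next representable double):

import sys

eps = sys.float_info.epsilon   # ~2.22e-16, only meaningful near 1.0

a = 1e16
b = a + 2.0                    # the very next representable double at this magnitude
print(abs(a - b) < eps)                          # False: a fixed epsilon is far too strict here
print(abs(a - b) <= eps * max(abs(a), abs(b)))   # True: a relative tolerance scales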

葬花如无物 2025-01-24 21:35:50

A Hardware Designer's Perspective

I believe I should add a hardware designer’s perspective to this since I design and build floating point hardware. Knowing the origin of the error may help in understanding what is happening in the software, and ultimately, I hope this helps explain the reasons for why floating point errors happen and seem to accumulate over time.

1. Overview

From an engineering perspective, most floating point operations will have some element of error since the hardware that does the floating point computations is only required to have an error of less than one half of one unit in the last place. Therefore, much hardware will stop at a precision that's only necessary to yield an error of less than one half of one unit in the last place for a single operation which is especially problematic in floating point division. What constitutes a single operation depends upon how many operands the unit takes. For most, it is two, but some units take 3 or more operands. Because of this, there is no guarantee that repeated operations will result in a desirable error since the errors add up over time.

2. Standards

Most processors follow the IEEE-754 standard, but some use denormalized or different standards. For example, there is a denormalized mode in IEEE-754 which allows representation of very small floating point numbers at the expense of precision. The following, however, will cover the normalized mode of IEEE-754, which is the typical mode of operation.

In the IEEE-754 standard, hardware designers are allowed any value of error/epsilon as long as it's less than one half of one unit in the last place, and the result only has to be less than one half of one unit in the last place for one operation. This explains why when there are repeated operations, the errors add up. For IEEE-754 double precision, this is the 54th bit, since 53 bits are used to represent the numeric part (normalized), also called the mantissa, of the floating point number (e.g. the 5.3 in 5.3e5). The next sections go into more detail on the causes of hardware error on various floating point operations.

3. Cause of Rounding Error in Division

The main cause of the error in floating point division is the division algorithms used to calculate the quotient. Most computer systems calculate division using multiplication by an inverse, mainly in Z=X/Y, Z = X * (1/Y). A division is computed iteratively i.e. each cycle computes some bits of the quotient until the desired precision is reached, which for IEEE-754 is anything with an error of less than one unit in the last place. The table of reciprocals of Y (1/Y) is known as the quotient selection table (QST) in the slow division, and the size in bits of the quotient selection table is usually the width of the radix, or a number of bits of the quotient computed in each iteration, plus a few guard bits. For the IEEE-754 standard, double precision (64-bit), it would be the size of the radix of the divider, plus a few guard bits k, where k>=2. So for example, a typical Quotient Selection Table for a divider that computes 2 bits of the quotient at a time (radix 4) would be 2+2= 4 bits (plus a few optional bits).

3.1 Division Rounding Error: Approximation of Reciprocal

What reciprocals are in the quotient selection table depend on the division method: slow division such as SRT division, or fast division such as Goldschmidt division; each entry is modified according to the division algorithm in an attempt to yield the lowest possible error. In any case, though, all reciprocals are approximations of the actual reciprocal and introduce some element of error. Both slow division and fast division methods calculate the quotient iteratively, i.e. some number of bits of the quotient are calculated each step, then the result is subtracted from the dividend, and the divider repeats the steps until the error is less than one half of one unit in the last place. Slow division methods calculate a fixed number of digits of the quotient in each step and are usually less expensive to build, and fast division methods calculate a variable number of digits per step and are usually more expensive to build. The most important part of the division methods is that most of them rely upon repeated multiplication by an approximation of a reciprocal, so they are prone to error.

4. Rounding Errors in Other Operations: Truncation

Another cause of the rounding errors in all operations are the different modes of truncation of the final answer that IEEE-754 allows. There's truncate, round-towards-zero, round-to-nearest (default), round-down, and round-up. All methods introduce an element of error of less than one unit in the last place for a single operation. Over time and repeated operations, truncation also adds cumulatively to the resultant error. This truncation error is especially problematic in exponentiation, which involves some form of repeated multiplication.

5. Repeated Operations

Since the hardware that does the floating point calculations only needs to yield a result with an error of less than one half of one unit in the last place for a single operation, the error will grow over repeated operations if not watched. This is the reason that in computations that require a bounded error, mathematicians use methods such as the round-to-nearest even digit in the last place of IEEE-754, because, over time, the errors are more likely to cancel each other out, and Interval Arithmetic, combined with variations of the IEEE 754 rounding modes, is used to predict rounding errors and correct them. Because of its low relative error compared to other rounding modes, round to nearest even digit (in the last place) is the default rounding mode of IEEE-754.

Note that the default rounding mode, round-to-nearest even digit in the last place, guarantees an error of less than one half of one unit in the last place for one operation. Using truncation, round-up, or round-down alone may result in an error that is greater than one half of one unit in the last place, but less than one unit in the last place, so these modes are not recommended unless they are used in Interval Arithmetic.

6. Summary

In short, the fundamental reason for the errors in floating point operations is a combination of the truncation in hardware, and the truncation of a reciprocal in the case of division. Since the IEEE-754 standard only requires an error of less than one half of one unit in the last place for a single operation, the floating point errors over repeated operations will add up unless corrected.
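
As a quick illustration of point 5, here is a sketch (in Python, but any IEEE-754 double behaves the same) of correctly rounded single operations still drifting when repeated:

total = 0.0
for _ in range(10):
    total += 0.1      # each addition is rounded correctly, but the tiny errors accumulate

print(total)          # 0.9999999999999999
print(total == 1.0)   # False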

听,心雨的声音 2025-01-24 21:35:50

Floating point notation is broken in the exact same way the decimal (base-10) notation you learned in grade school and use every day is broken, just for base-2.

To understand, think about representing 2/3 as a decimal value. It's impossible to do exactly! The world will end before you finish writing the 6's after the decimal point, and so instead we write to some number of places, round to a final 7, and consider it sufficiently accurate.

In the same way, 1/10 (decimal 0.1) cannot be represented exactly in base 2 (binary) as a "decimal" value; a repeating pattern after the decimal point goes on forever. The value is not exact, and therefore you can't do exact math with it using normal floating point methods. Just like with base 10, there are other values that exhibit this problem as well.

梦里南柯 2025-01-24 21:35:50

Most answers here address this question in very dry, technical terms. I'd like to address this in terms that normal human beings can understand.

Imagine that you are trying to slice up pizzas. You have a robotic pizza cutter that can cut pizza slices exactly in half. It can halve a whole pizza, or it can halve an existing slice, but in any case, the halving is always exact.

That pizza cutter has very fine movements, and if you start with a whole pizza, then halve that, and continue halving the smallest slice each time, you can do the halving 53 times before the slice is too small for even its high-precision abilities. At that point, you can no longer halve that very thin slice, but must either include or exclude it as is.

Now, how would you piece all the slices in such a way that would add up to one-tenth (0.1) or one-fifth (0.2) of a pizza? Really think about it, and try working it out. You can even try to use a real pizza, if you have a mythical precision pizza cutter at hand. :-)


Most experienced programmers, of course, know the real answer, which is that there is no way to piece together an exact tenth or fifth of the pizza using those slices, no matter how finely you slice them. You can do a pretty good approximation, and if you add up the approximation of 0.1 with the approximation of 0.2, you get a pretty good approximation of 0.3, but it's still just that, an approximation.

For double-precision numbers (which is the precision that allows you to halve your pizza 53 times), the numbers immediately less and greater than 0.1 are 0.09999999999999999167332731531132594682276248931884765625 and 0.1000000000000000055511151231257827021181583404541015625. The latter is quite a bit closer to 0.1 than the former, so a numeric parser will, given an input of 0.1, favour the latter.

(The difference between those two numbers is the "smallest slice" that we must decide to either include, which introduces an upward bias, or exclude, which introduces a downward bias. The technical term for that smallest slice is an ulp.)

In the case of 0.2, the numbers are all the same, just scaled up by a factor of 2. Again, we favour the value that's slightly higher than 0.2.

Notice that in both cases, the approximations for 0.1 and 0.2 have a slight upward bias. If we add enough of these biases in, they will push the number further and further away from what we want, and in fact, in the case of 0.1 + 0.2, the bias is high enough that the resulting number is no longer the closest number to 0.3.

In particular, 0.1 + 0.2 is really 0.1000000000000000055511151231257827021181583404541015625 + 0.200000000000000011102230246251565404236316680908203125 = 0.3000000000000000444089209850062616169452667236328125, whereas the number closest to 0.3 is actually 0.299999999999999988897769753748434595763683319091796875.


P.S. Some programming languages also provide pizza cutters that can split slices into exact tenths. Although such pizza cutters are uncommon, if you do have access to one, you should use it when it's important to be able to get exactly one-tenth or one-fifth of a slice.

(Originally posted on Quora.)

伴梦长久 2025-01-24 21:35:50

Floating point rounding errors. 0.1 cannot be represented as accurately in base-2 as in base-10 due to the missing prime factor of 5. Just as 1/3 takes an infinite number of digits to represent in decimal, but is "0.1" in base-3, 0.1 takes an infinite number of digits in base-2 where it does not in base-10. And computers don't have an infinite amount of memory.

浪菊怪哟 2025-01-24 21:35:50

My answer is quite long, so I've split it into three sections. Since the question is about floating point mathematics, I've put the emphasis on what the machine actually does. I've also made it specific to double (64 bit) precision, but the argument applies equally to any floating point arithmetic.

Preamble

An IEEE 754 double-precision binary floating-point format (binary64) number represents a number of the form

value = (-1)^s * (1.m51 m50 ... m2 m1 m0)_2 * 2^(e - 1023)

in 64 bits:

  • The first bit is the sign bit: 1 if the number is negative, 0 otherwise¹.
  • The next 11 bits are the exponent, which is offset by 1023. In other words, after reading the exponent bits from a double-precision number, 1023 must be subtracted to obtain the power of two.
  • The remaining 52 bits are the significand (or mantissa). In the mantissa, an 'implied' 1. is always² omitted, since the most significant bit of any binary value is 1.

¹ - IEEE 754 allows for the concept of a signed zero - +0 and -0 are treated differently: 1 / (+0) is positive infinity; 1 / (-0) is negative infinity. For zero values, the mantissa and exponent bits are all zero. Note: zero values (+0 and -0) are explicitly not classed as denormal².

² - This is not the case for denormal numbers, which have an offset exponent of zero (and an implied 0.). The range of denormal double-precision numbers is d_min ≤ |x| ≤ d_max, where d_min (the smallest representable nonzero number) is 2^(-1023-51) (≈ 4.94 × 10^-324) and d_max (the largest denormal number, for which the mantissa consists entirely of 1s) is 2^(-1023+1) - 2^(-1023-51) (≈ 2.225 × 10^-308).


Turning a double precision number to binary

Many online converters exist to convert a double precision floating point number to binary (e.g. at binaryconvert.com), but here is some sample C# code to obtain the IEEE 754 representation for a double precision number (I separate the three parts with colons (:)):

public static string BinaryRepresentation(double value)
{
    long valueInLongType = BitConverter.DoubleToInt64Bits(value);
    string bits = Convert.ToString(valueInLongType, 2);
    string leadingZeros = new string('0', 64 - bits.Length);
    string binaryRepresentation = leadingZeros + bits;

    string sign = binaryRepresentation[0].ToString();
    string exponent = binaryRepresentation.Substring(1, 11);
    string mantissa = binaryRepresentation.Substring(12);

    return string.Format("{0}:{1}:{2}", sign, exponent, mantissa);
}
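
For readers without C# to hand, here is an equivalent sketch in Python (the function name is mine; struct reinterprets the double's 8 bytes as a 64-bit unsigned integer):

import struct

def binary_representation(value: float) -> str:
    # Reinterpret the IEEE 754 double as a 64-bit integer, then slice the fields.
    (bits,) = struct.unpack('>Q', struct.pack('>d', value))
    b = format(bits, '064b')
    return f'{b[0]}:{b[1:12]}:{b[12:]}'   # sign : exponent : mantissa

print(binary_representation(0.1))
# 0:01111111011:1001100110011001100110011001100110011001100110011010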

Getting to the point: the original question

(Skip to the bottom for the TL;DR version)

Cato Johnston (the question asker) asked why 0.1 + 0.2 != 0.3.

Written in binary (with colons separating the three parts), the IEEE 754 representations of the values are:

0.1 => 0:01111111011:1001100110011001100110011001100110011001100110011010
0.2 => 0:01111111100:1001100110011001100110011001100110011001100110011010

Note that the mantissa is composed of recurring digits of 0011. This is key to why there is any error to the calculations - 0.1, 0.2 and 0.3 cannot be represented in binary precisely in a finite number of binary bits any more than 1/9, 1/3 or 1/7 can be represented precisely in decimal digits.

Also note that we can decrease the power in the exponent by 52 and shift the point in the binary representation to the right by 52 places (much like 10^-3 * 1.23 == 10^-5 * 123). This then enables us to represent the binary representation as the exact value that it represents in the form a * 2^p, where 'a' is an integer.

Converting the exponents to decimal, removing the offset, and re-adding the implied 1 (in square brackets), 0.1 and 0.2 are:

0.1 => 2^-4 * [1].1001100110011001100110011001100110011001100110011010
0.2 => 2^-3 * [1].1001100110011001100110011001100110011001100110011010
or
0.1 => 2^-56 * 7205759403792794 = 0.1000000000000000055511151231257827021181583404541015625
0.2 => 2^-55 * 7205759403792794 = 0.200000000000000011102230246251565404236316680908203125

To add two numbers, the exponent needs to be the same, i.e.:

0.1 => 2^-3 *  0.1100110011001100110011001100110011001100110011001101(0)
0.2 => 2^-3 *  1.1001100110011001100110011001100110011001100110011010
sum =  2^-3 * 10.0110011001100110011001100110011001100110011001100111
or
0.1 => 2^-55 * 3602879701896397  = 0.1000000000000000055511151231257827021181583404541015625
0.2 => 2^-55 * 7205759403792794  = 0.200000000000000011102230246251565404236316680908203125
sum =  2^-55 * 10808639105689191 = 0.3000000000000000166533453693773481063544750213623046875

Since the sum is not of the form 2^n * 1.{bbb} we increase the exponent by one and shift the decimal (binary) point to get:

sum = 2^-2  * 1.0011001100110011001100110011001100110011001100110011(1)
    = 2^-54 * 5404319552844595.5 = 0.3000000000000000166533453693773481063544750213623046875

There are now 53 bits in the mantissa (the 53rd is in parentheses in the line above). The default rounding mode for IEEE 754 is 'Round to Nearest' - i.e. if a number x falls between two values a and b, the value where the least significant bit is zero is chosen.

a = 2^-54 * 5404319552844595 = 0.299999999999999988897769753748434595763683319091796875
  = 2^-2  * 1.0011001100110011001100110011001100110011001100110011

x = 2^-2  * 1.0011001100110011001100110011001100110011001100110011(1)

b = 2^-2  * 1.0011001100110011001100110011001100110011001100110100
  = 2^-54 * 5404319552844596 = 0.3000000000000000444089209850062616169452667236328125

Note that a and b differ only in the last bit; ...0011 + 1 = ...0100. In this case, the value with the least significant bit of zero is b, so the sum is:

sum = 2^-2  * 1.0011001100110011001100110011001100110011001100110100
    = 2^-54 * 5404319552844596 = 0.3000000000000000444089209850062616169452667236328125

whereas the binary representation of 0.3 is:

0.3 => 2^-2  * 1.0011001100110011001100110011001100110011001100110011
    =  2^-54 * 5404319552844595 = 0.299999999999999988897769753748434595763683319091796875

which only differs from the binary representation of the sum of 0.1 and 0.2 by 2^-54.

The binary representations of 0.1 and 0.2 are the most accurate representations of the numbers allowable by IEEE 754. The addition of these representations, due to the default rounding mode, results in a value which differs only in the least significant bit.

TL;DR

Writing 0.1 + 0.2 in an IEEE 754 binary representation (with colons separating the three parts) and comparing it to 0.3, this is (I've put the distinct bits in square brackets):

0.1 + 0.2 => 0:01111111101:0011001100110011001100110011001100110011001100110[100]
0.3       => 0:01111111101:0011001100110011001100110011001100110011001100110[011]

Converted back to decimal, these values are:

0.1 + 0.2 => 0.300000000000000044408920985006...
0.3       => 0.299999999999999988897769753748...

The difference is exactly 2^-54, which is ~5.5511151231258 × 10^-17 - insignificant (for many applications) when compared to the original values.

Comparing the last few bits of a floating point number is inherently dangerous, as anyone who reads the famous "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (which covers all the major parts of this answer) will know.

Most calculators use additional guard digits to get around this problem, which is how 0.1 + 0.2 would give 0.3: the final few bits are rounded.

往事随风而去 2025-01-24 21:35:50

In addition to the other correct answers, you may want to consider scaling your values to avoid problems with floating-point arithmetic.

For example:

var result = 1.0 + 2.0;     // result === 3.0 returns true

... instead of:

var result = 0.1 + 0.2;     // result === 0.3 returns false

The expression 0.1 + 0.2 === 0.3 returns false in JavaScript, but fortunately integer arithmetic in floating-point is exact, so decimal representation errors can be avoided by scaling.

As a practical example, to avoid floating-point problems where accuracy is paramount, it is recommended¹ to handle money as an integer representing the number of cents: 2550 cents instead of 25.50 dollars.


¹ Douglas Crockford: JavaScript: The Good Parts: Appendix A - Awful Parts (page 105).
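
A minimal sketch of the cents approach (in Python; the variable names and the 8% rate are mine, purely for illustration). All intermediate arithmetic stays in exact integers, and division by 100 happens only for display:

price_cents = 2550                      # $25.50 held as an integer
tax_cents = price_cents * 8 // 100      # 8% tax, truncated to whole cents
total_cents = price_cents + tax_cents
print(f"${total_cents / 100:.2f}")      # $27.54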

陈独秀 2025-01-24 21:35:50

Floating point numbers stored in the computer consist of two parts: an integer, and an exponent that the base is raised to before being multiplied by the integer part.

If the computer were working in base 10, 0.1 would be 1 x 10⁻¹, 0.2 would be 2 x 10⁻¹, and 0.3 would be 3 x 10⁻¹. Integer math is easy and exact, so adding 0.1 + 0.2 will obviously result in 0.3.

Computers don't usually work in base 10, they work in base 2. You can still get exact results for some values, for example 0.5 is 1 x 2⁻¹ and 0.25 is 1 x 2⁻², and adding them results in 3 x 2⁻², or 0.75. Exactly.

The problem comes with numbers that can be represented exactly in base 10, but not in base 2. Those numbers need to be rounded to their closest equivalent. Assuming the very common IEEE 64-bit floating point format, the closest number to 0.1 is 3602879701896397 x 2⁻⁵⁵, and the closest number to 0.2 is 7205759403792794 x 2⁻⁵⁵; adding them together results in 10808639105689191 x 2⁻⁵⁵, or an exact decimal value of 0.3000000000000000444089209850062616169452667236328125. Floating point numbers are generally rounded for display.
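
You can confirm those exact rationals directly; a sketch using Python's fractions module (Fraction(float) recovers the exact value stored in the double):

from fractions import Fraction

print(Fraction(0.1) == Fraction(3602879701896397, 2**55))   # True
print(Fraction(0.2) == Fraction(7205759403792794, 2**55))   # True
print(Fraction(0.1) + Fraction(0.2))                        # 10808639105689191/36028797018963968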

想你的星星会说话 2025-01-24 21:35:50

In short it's because:

Floating point numbers cannot represent all decimals precisely in binary

So just like 10/3 which does not exist in base 10 precisely (it will be 3.33... recurring), in the same way 1/10 doesn't exist in binary.

So what? How to deal with it? Is there any workaround?

To offer the best solution, I can say I discovered the following method:

parseFloat((0.1 + 0.2).toFixed(10)) => Will return 0.3

Let me explain why it's the best solution.
As others mentioned in the answers above, it's a good idea to use the ready-to-use JavaScript toFixed() function to solve the problem. But most likely you'll encounter some problems.

Imagine you are going to add up two float numbers like 0.2 and 0.7 here it is: 0.2 + 0.7 = 0.8999999999999999.

Your expected result was 0.9, which means you need a result with 1-digit precision in this case.
So you should have used (0.2 + 0.7).toFixed(1),
but you can't just give a certain parameter to toFixed() since it depends on the given number, for instance

0.22 + 0.7 = 0.9199999999999999

In this example you need 2-digit precision, so it should be toFixed(2). So what should the parameter be to fit every given float number?

You might say let it be 10 in every situation then:

(0.2 + 0.7).toFixed(10) => Result will be "0.9000000000"

Damn! What are you going to do with those unwanted zeros after 9?
It's time to convert it to a float to make it what you desire:

parseFloat((0.2 + 0.7).toFixed(10)) => Result will be 0.9

Now that you found the solution, it's better to offer it as a function like this:

function floatify(number){
    return parseFloat((number).toFixed(10));
}

Try it yourself:

function floatify(number){
    return parseFloat((number).toFixed(10));
}
 
function addUp(){
  var number1 = +$("#number1").val();
  var number2 = +$("#number2").val();
  var unexpectedResult = number1 + number2;
  var expectedResult = floatify(number1 + number2);
  $("#unexpectedResult").text(unexpectedResult);
  $("#expectedResult").text(expectedResult);
}
addUp();
input{
  width: 50px;
}
#expectedResult{
color: green;
}
#unexpectedResult{
color: red;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<input id="number1" value="0.2" onclick="addUp()" onkeyup="addUp()"/> +
<input id="number2" value="0.7" onclick="addUp()" onkeyup="addUp()"/> =
<p>Expected Result: <span id="expectedResult"></span></p>
<p>Unexpected Result: <span id="unexpectedResult"></span></p>

You can use it this way:

var x = 0.2 + 0.7;
floatify(x);  => Result: 0.9

As W3Schools suggests, there is another solution too: you can multiply and divide to solve the problem above:

var x = (0.2 * 10 + 0.1 * 10) / 10;       // x will be 0.3

Keep in mind that (0.2 + 0.1) * 10 / 10 won't work at all, although it seems the same!
I prefer the first solution since I can apply it as a function which converts the input float to an accurate output float.


FYI, the same problem exists for:

Multiplication: for instance 0.09 * 10 returns 0.8999999999999999. Apply the floatify function as a workaround: floatify(0.09 * 10) returns 0.9

Division: 0.3 / 0.1 = 2.9999999999999996, but floatify(0.3 / 0.1) returns 3

Subtraction: 1 - 0.8 = 0.19999999999999996, but floatify(1 - 0.8) returns 0.2

少跟Wǒ拽 2025-01-24 21:35:50

Floating point rounding error. From What Every Computer Scientist Should Know About Floating-Point Arithmetic:

Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.

南烟 2025-01-24 21:35:50

My workaround:

function add(a, b, precision) {
    var x = Math.pow(10, precision || 2);
    return (Math.round(a * x) + Math.round(b * x)) / x;
}
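
// Example: add(0.1, 0.2) returns 0.3, because each operand is first rounded
// to integer hundredths and the division by 100 happens only once at the end.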

precision refers to the number of digits you want to preserve after the decimal point during addition.

苄①跕圉湢 2025-01-24 21:35:50

No, not broken, but most decimal fractions must be approximated

Summary

Floating point arithmetic is exact; unfortunately, it doesn't match up well with our usual base-10 number representation, so it turns out we are often giving it input that is slightly off from what we wrote.

Even simple numbers like 0.01, 0.02, 0.03, 0.04 ... 0.24 are not representable exactly as binary fractions. If you count up 0.01, .02, .03 ..., not until you get to 0.25 will you get the first fraction representable in base 2. If you tried that using FP, your 0.01 would have been slightly off, so adding 25 of them up to a nice exact 0.25 would have required a long chain of causality involving guard bits and rounding. It's hard to predict, so we throw up our hands and say "FP is inexact", but that's not really true.

We constantly give the FP hardware something that seems simple in base 10 but is a repeating fraction in base 2.

How did this happen?

When we write in decimal, every fraction (specifically, every terminating decimal) is a rational number of the form

a / (2^n × 5^m)

In binary, we only get the 2^n term, that is:

a / 2^n

So in decimal, we can't represent 1/3. Because base 10 includes 2 as a prime factor, every number we can write as a binary fraction can also be written as a base-10 fraction. However, hardly anything we write as a base-10 fraction is representable in binary. In the range from 0.01, 0.02, 0.03 ... 0.99, only three numbers can be represented in our FP format: 0.25, 0.50, and 0.75, because they are 1/4, 1/2, and 3/4, all numbers whose denominators contain only the factor 2^n.

In base 10 we can't represent 1/3. But in binary, we can't do 1/10 or 1/3.

So while every binary fraction can be written in decimal, the reverse is not true. And in fact most decimal fractions repeat in binary.
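
That claim about the hundredths is easy to check mechanically; a sketch in Python (Fraction converts both ways exactly, so the equality holds only for values that survive the round trip through a double unchanged):

from fractions import Fraction

representable = [k for k in range(1, 100)
                 if Fraction(k, 100) == Fraction(float(Fraction(k, 100)))]
print(representable)   # [25, 50, 75]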

Dealing with it

Developers are usually instructed to do < epsilon comparisons; better advice might be to round to integral values (in the C library: round() and roundf(), i.e., stay in the FP format) and then compare. Rounding to a specific decimal fraction length solves most problems with output.

Also, on real number-crunching problems (the problems that FP was invented for on early, frightfully expensive computers) the physical constants of the universe and all other measurements are only known to a relatively small number of significant figures, so the entire problem space was "inexact" anyway. FP "accuracy" isn't a problem in this kind of application.

The whole issue really arises when people try to use FP for bean counting. It does work for that, but only if you stick to integral values, which kind of defeats the point of using it. This is why we have all those decimal fraction software libraries.

I love the Pizza answer by Chris, because it describes the actual problem, not just the usual handwaving about "inaccuracy". If FP were simply "inaccurate", we could fix that and would have done it decades ago. The reason we haven't is because the FP format is compact and fast and it's the best way to crunch a lot of numbers. Also, it's a legacy from the space age and arms race and early attempts to solve big problems with very slow computers using small memory systems. (Sometimes, individual magnetic cores for 1-bit storage, but that's another story.)

Conclusion

If you are just counting beans at a bank, software solutions that use decimal string representations in the first place work perfectly well. But you can't do quantum chromodynamics or aerodynamics that way.

我们的影子 2025-01-24 21:35:50

Not all numbers can be represented via floats/doubles.
For example, the number "0.2" will be represented as "0.200000003" in single precision in the IEEE 754 floating-point standard.

The model for storing real numbers under the hood represents a float number as a significand scaled by a base (radix) raised to an exponent.

Even though you can type 0.2 easily, FLT_RADIX and DBL_RADIX are 2, not 10, for a computer with an FPU that uses the "IEEE Standard for Binary Floating-Point Arithmetic (ISO/IEEE Std 754-1985)".

So it is a bit hard to represent such numbers exactly, even if you specify the value explicitly without any intermediate calculation.

药祭#氼 2025-01-24 21:35:50

Some statistics related to this famous double precision question.

When adding all values (a + b) using a step of 0.1 (from 0.1 to 100) we have a ~15% chance of a precision error. Note that the error could result in slightly bigger or smaller values.
Here are some examples:

0.1 + 0.2 = 0.30000000000000004 (BIGGER)
0.1 + 0.7 = 0.7999999999999999 (SMALLER)
...
1.7 + 1.9 = 3.5999999999999996 (SMALLER)
1.7 + 2.2 = 3.9000000000000004 (BIGGER)
...
3.2 + 3.6 = 6.800000000000001 (BIGGER)
3.2 + 4.4 = 7.6000000000000005 (BIGGER)

When subtracting all values (a - b where a > b) using a step of 0.1 (from 100 to 0.1) we have a ~34% chance of a precision error.
Here are some examples:

0.6 - 0.2 = 0.39999999999999997 (SMALLER)
0.5 - 0.4 = 0.09999999999999998 (SMALLER)
...
2.1 - 0.2 = 1.9000000000000001 (BIGGER)
2.0 - 1.9 = 0.10000000000000009 (BIGGER)
...
100 - 99.9 = 0.09999999999999432 (SMALLER)
100 - 99.8 = 0.20000000000000284 (BIGGER)

15% and 34% are indeed huge, so always use BigDecimal when precision is of great importance. With 2 decimal digits (step 0.01) the situation worsens a bit more (18% and 36%).
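
Statistics like these are easy to reproduce; a rough sketch in Python (my own framing of the experiment: a sum "has an error" when it differs from the double nearest the true decimal sum):

errors = total = 0
for i in range(1, 1001):            # tenths from 0.1 to 100.0
    for j in range(1, 1001):
        total += 1
        if (i / 10) + (j / 10) != (i + j) / 10:
            errors += 1
print(f"{100 * errors / total:.1f}% of the additions are off by at least one ulp")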

醉酒的小男人 2025-01-24 21:35:50

Some high level languages such as Python and Java come with tools to overcome binary floating point limitations. For example:

  • Python's decimal module and Java's BigDecimal class, which represent numbers internally with decimal notation (as opposed to binary notation). Both have limited precision, so they are still error-prone; however, they solve the most common problems with binary floating point arithmetic.

    Decimals are very nice when dealing with money: ten cents plus twenty cents are always exactly thirty cents:

      >>> from decimal import Decimal
      >>> 0.1 + 0.2 == 0.3
      False
      >>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
      True
    

    Python's decimal module is based on IEEE standard 854-1987.

  • Python's fractions module and Apache Commons' BigFraction class. Both represent rational numbers as (numerator, denominator) pairs and they may give more accurate results than decimal floating point arithmetic.

Neither of these solutions is perfect (especially if we look at performances, or if we require a very high precision), but still they solve a great number of problems with binary floating point arithmetic.
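
A short sketch of the fractions approach (Python):

from fractions import Fraction

# Rational arithmetic is exact: one tenth plus two tenths is exactly three tenths.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True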

ぃ双果 2025-01-24 21:35:50

People always assume this to be a computer problem, but if you count with your hands (base 10), you can't get (1/3 + 1/3 = 2/3) = true unless you have infinity to add 0.333... to 0.333..., so just as with the (1/10 + 2/10) !== 3/10 problem in base 2, you truncate it to 0.333 + 0.333 = 0.666 and probably round it to 0.667, which would also be technically inaccurate.

Count in ternary, and thirds are not a problem though - maybe some race with 15 fingers on each hand would ask why your decimal math was broken...

咿呀咿呀哟 2025-01-24 21:35:50

Those weird numbers appear because computers use the binary (base 2) number system for calculation purposes, while we use decimal (base 10).

A majority of fractional numbers cannot be represented precisely in binary, in decimal, or in both. The result is a rounded number: exact in itself, but not quite the value you wrote.

他夏了夏天 2025-01-24 21:35:50

Did you try the duct tape solution?

Try to determine when errors occur and fix them with short if statements. It's not pretty, but for some problems it is the only solution and this is one of them.

 if( (n * 0.1) < 100.0 ) { return n * 0.1 - 0.000000000000001; }
 else                    { return n * 0.1 + 0.000000000000001; }

I had the same problem in a scientific simulation project in C#, and I can tell you that if you ignore the butterfly effect, it's going to turn to a big fat dragon and bite you in the a**.

沉溺在你眼里的海 2025-01-24 21:35:50

Many of this question's numerous duplicates ask about the effects of floating point rounding on specific numbers. In practice, it is easier to get a feeling for how it works by looking at exact results of calculations of interest rather than by just reading about it. Some languages provide ways of doing that - such as converting a float or double to BigDecimal in Java.

Since this is a language-agnostic question, it needs language-agnostic tools, such as a Decimal to Floating-Point Converter.

Applying it to the numbers in the question, treated as doubles:

0.1 converts to 0.1000000000000000055511151231257827021181583404541015625,

0.2 converts to 0.200000000000000011102230246251565404236316680908203125,

0.3 converts to 0.299999999999999988897769753748434595763683319091796875, and

0.30000000000000004 converts to 0.3000000000000000444089209850062616169452667236328125.

Adding the first two numbers manually, or in a decimal calculator such as Full Precision Calculator, shows that the exact sum of the actual inputs is 0.3000000000000000166533453693773481063544750213623046875.

If it were rounded down to the equivalent of 0.3, the rounding error would be 0.0000000000000000277555756156289135105907917022705078125. Rounding up to the equivalent of 0.30000000000000004 gives exactly the same rounding error, so the round-to-even tie-breaker applies.

Returning to the floating point converter, the raw hexadecimal for 0.30000000000000004 is 3fd3333333333334, which ends in an even digit and therefore is the correct result.
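If you prefer to reproduce these conversions programmatically rather than with a web converter, Python's decimal module can show them; a sketch (the context precision is raised so the sum is not re-rounded):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 60   # enough digits to keep the sum exact
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(0.1) + Decimal(0.2)   # exact sum of the two stored doubles
Decimal('0.3000000000000000166533453693773481063544750213623046875')
>>> Decimal(0.1 + 0.2)            # the double produced by float addition
Decimal('0.3000000000000000444089209850062616169452667236328125')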

靑春怀旧 2025-01-24 21:35:50

The kind of floating-point math that can be implemented in a digital computer necessarily uses an approximation of the real numbers and operations on them. (The standard version runs to over fifty pages of documentation and has a committee to deal with its errata and further refinement.)

This approximation is a mixture of approximations of different kinds, each of which can either be ignored or carefully accounted for due to its specific manner of deviation from exactitude. It also involves a number of explicit exceptional cases at both the hardware and software levels that most people walk right past while pretending not to notice.

If you need infinite precision (using the number π, for example, instead of one of its many shorter stand-ins), you should write or use a symbolic math program instead.

But if you're okay with the idea that sometimes floating-point math is fuzzy in value and logic and errors can accumulate quickly, and you can write your requirements and tests to allow for that, then your code can frequently get by with what's in your FPU.

如痴如狂 2025-01-24 21:35:50

Just for fun, I played with the representation of floats, following the definitions from the Standard C99 and I wrote the code below.

The code prints the binary representation of floats in 3 separated groups

SIGN EXPONENT FRACTION

and after that it prints a sum that, when evaluated with enough precision, shows the value that really exists in hardware.

So when you write float x = 999..., the compiler transforms that number into a bit representation, printed by the function xx, such that the sum printed by the function yy is equal to the stored number.

In reality, this sum is only an approximation. For the number 999,999,999, the compiler will insert into the float's bit representation the number 1,000,000,000.

After the code I attach a console session, in which I compute the sum of terms for both constants (minus PI and 999999999) that really exist in hardware, inserted there by the compiler.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <limits.h>

/* Return the raw IEEE-754 bits of a float. memcpy is well-defined type
   punning; reading a 4-byte float through an 8-byte pointer is not. */
static uint32_t
bits(float x)
{
    uint32_t u;
    memcpy(&u, &x, sizeof u);
    return u;
}

void
xx(float *x)
{
    uint32_t b = bits(*x);
    unsigned char i = sizeof(*x)*CHAR_BIT-1;
    do {
        switch (i) {
        case 31:
             printf("sign: ");
             break;
        case 30:
             printf("exponent: ");
             break;
        case 22:                  /* the fraction is the low 23 bits */
             printf("fraction: ");
             break;
        }
        printf("%d ", (int)((b>>i)&1));
    } while (i--);
    printf("\n");
}

void
yy(float a)
{
    uint32_t b = bits(a);
    int sign = !(b&(1u<<31));
    uint32_t fraction = b&((1u<<23)-1);    /* 23 mantissa bits */
    int exponent = (int)((b>>23)&255)-127; /* 8 exponent bits, unbiased */

    printf(sign ? "positive ( 1+" : "negative ( 1+");
    uint32_t i = 1u << 22;
    unsigned int j = 1;
    do {
        /* mantissa bit j (counted from the binary point) adds 1/2^j */
        if (fraction&i)
            printf("1/(%u) %c", 1u<<j, (fraction&(i-1)) ? '+' : ')');
    } while (j++, i >>= 1);

    printf("*2^%d", exponent);
    printf("\n");
}

int
main(void)
{
    float x = -3.14;
    float y = 999999999;
    printf("%zu\n", sizeof(x));
    xx(&x);
    xx(&y);
    yy(x);
    yy(y);
    return 0;
}

Here is a console session in which I compute the real value of the float that exists in hardware. I used bc to print the sum of terms output by the main program. One could also paste that sum into a Python REPL or something similar.

-- .../terra1/stub
@ qemacs f.c
-- .../terra1/stub
@ gcc f.c
-- .../terra1/stub
@ ./a.out
4
sign: 1 exponent: 1 0 0 0 0 0 0 0 fraction: 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 1 0 0 0 0 1 1
sign: 0 exponent: 1 0 0 1 1 1 0 0 fraction: 1 1 0 1 1 1 0 0 1 1 0 1 0 1 1 0 0 1 0 1 0 0 0
negative ( 1+1/(2) +1/(16) +1/(256) +1/(512) +1/(1024) +1/(2048) +1/(8192) +1/(32768) +1/(65536) +1/(131072) +1/(4194304) +1/(8388608) )*2^1
positive ( 1+1/(2) +1/(4) +1/(16) +1/(32) +1/(64) +1/(512) +1/(1024) +1/(4096) +1/(16384) +1/(32768) +1/(262144) +1/(1048576) )*2^29
-- .../terra1/stub
@ bc
scale=15
( 1+1/(2) +1/(4) +1/(16) +1/(32) +1/(64) +1/(512) +1/(1024) +1/(4096) +1/(16384) +1/(32768) +1/(262144) +1/(1048576) )*2^29
999999999.999999446351872

That's it. bc reports the value of 999999999 as

999999999.999999446351872

but that trailing .999999446351872 is an artifact of bc truncating each 1/2^k term at the chosen scale. You can also check with bc that -3.14 is perturbed in the same way; do not forget to set a scale factor in bc first.

The displayed sum is what is inside the hardware. The value you obtain by computing it depends on the scale you set (I set the scale factor to 15). Mathematically, with infinite precision, the sum is exactly 1,000,000,000, which is the value the compiler actually stored for 999999999.
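The same conclusion can be cross-checked from Python in two lines (format '>f' is IEEE-754 binary32, the same layout the C program inspects): round-tripping 999999999 through a float yields exactly one billion:

>>> import struct
>>> struct.unpack('>f', struct.pack('>f', 999999999.0))[0]
1000000000.0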

那些过往 2025-01-24 21:35:50

Floating point numbers are represented, at the hardware level, as fractions of binary numbers (base 2). For example, the decimal fraction:

0.125

has the value 1/10 + 2/100 + 5/1000 and, in the same way, the binary fraction:

0.001

has the value 0/2 + 0/4 + 1/8. These two fractions have the same value, the only difference is that the first is a decimal fraction, the second is a binary fraction.

Unfortunately, most decimal fractions have no exact representation as binary fractions. Therefore, in general, the floating point numbers you write are only approximated by the binary fractions actually stored in the machine.

The problem is easier to approach in base 10. Take for example, the fraction 1/3. You can approximate it to a decimal fraction:

0.3

or better,

0.33

or better,

0.333

etc. No matter how many decimal places you write, the result is never exactly 1/3, but it is an estimate that always comes closer.

Likewise, no matter how many base-2 digits you use, the decimal value 0.1 cannot be represented exactly as a binary fraction. In base 2, 1/10 is the following periodic number:

0.0001100110011001100110011001100110011001100110011 ...

Stop at any finite amount of bits, and you'll get an approximation.

For Python, on a typical machine, 53 bits are used for the precision of a float, so the value stored when you enter the decimal 0.1 is the binary fraction

0.00011001100110011001100110011001100110011001100110011010

which is close, but not exactly equal, to 1/10.

It's easy to forget that the stored value is an approximation of the original decimal fraction, due to the way floats are displayed in the interpreter. Python only displays a decimal approximation of the value stored in binary. If Python were to output the true decimal value of the binary approximation stored for 0.1, it would output:

>>> 0.1
0.1000000000000000055511151231257827021181583404541015625

This is a lot more decimal places than most people would expect, so Python displays a rounded value to improve readability:

>>> 0.1
0.1

It is important to understand that in reality this is an illusion: the stored value is not exactly 1/10; it is simply that the stored value is rounded on display. This becomes evident as soon as you perform arithmetic operations with these values:

>>> 0.1 + 0.2
0.30000000000000004

This behavior is inherent to the very nature of the machine's floating-point representation: it is not a bug in Python, nor is it a bug in your code. You can observe the same type of behavior in all other languages that use hardware support for calculating floating point numbers (although some languages do not make the difference visible by default, or not in all display modes).

Another surprise is inherent in this one. For example, if you try to round the value 2.675 to two decimal places, you will get

>>> round(2.675, 2)
2.67

The documentation for the round() primitive indicates that it rounds to the nearest value, with ties rounded away from zero. Since the decimal fraction 2.675 is exactly halfway between 2.67 and 2.68, you would expect to get (a binary approximation of) 2.68. This is not the case, however, because when the decimal fraction 2.675 is converted to a float, it is stored as an approximation whose exact value is:

2.67499999999999982236431605997495353221893310546875

Since the approximation is slightly closer to 2.67 than 2.68, the rounding is down.

If you are in a situation where it matters which way decimal halfway values are rounded, you should use the decimal module. Incidentally, the decimal module also provides a convenient way to "see" the exact value stored for any float.

>>> from decimal import Decimal
>>> Decimal(2.675)
Decimal('2.67499999999999982236431605997495353221893310546875')

Another consequence of the fact that 0.1 is not stored as exactly 1/10 is that the sum of ten values of 0.1 does not give 1.0 either:

>>> sum = 0.0
>>> for i in range(10):
...     sum += 0.1
...
>>> sum
0.9999999999999999

The arithmetic of binary floating point numbers holds many such surprises. The problem with "0.1" is explained in detail below, in the section "Representation errors". See The Perils of Floating Point for a more complete list of such surprises.

It is true that there is no simple answer; however, do not be overly suspicious of floating-point numbers! Errors in Python floating-point operations are due to the underlying hardware, and on most machines amount to no more than 1 part in 2 ** 53 per operation. That is more than adequate for most tasks, but you should keep in mind that these are not decimal operations, and every operation on floating point numbers may suffer a new rounding error.

Although pathological cases exist, for most common use cases you will get the expected result by simply rounding, for display, to the number of decimal places you want. For fine control over how floats are displayed, see String Formatting Syntax for the formatting specifications of the str.format() method.

This part of the answer explains in detail the example of "0.1" and shows how you can perform an exact analysis of this type of case on your own. We assume that you are familiar with the binary representation of floating point numbers. The term representation error means that most decimal fractions cannot be represented exactly in binary. This is the main reason why Python (or Perl, C, C++, Java, Fortran, and many others) usually doesn't display the exact result in decimal:

>>> 0.1 + 0.2
0.30000000000000004

Why? 1/10 and 2/10 are not representable exactly in binary fractions. However, all machines today (July 2010) follow the IEEE-754 standard for the arithmetic of floating point numbers, and most platforms use "IEEE-754 double precision" to represent Python floats. Double precision IEEE-754 uses 53 bits of precision, so on reading, the computer tries to convert 0.1 to the nearest fraction of the form J / 2 ** N with J an integer of exactly 53 bits. Rewriting:

1/10 ~= J / (2 ** N)

as:

J ~= 2 ** N / 10

and remembering that J is exactly 53 bits (so >= 2 ** 52 but < 2 ** 53), the best possible value for N is 56:

>>> 2 ** 52
4503599627370496
>>> 2 ** 53
9007199254740992
>>> 2 ** 56 // 10
7205759403792793

So 56 is the only possible value for N which leaves exactly 53 bits for J. The best possible value for J is therefore this quotient, rounded:

>>> q, r = divmod(2 ** 56, 10)
>>> r
6

Since the remainder is greater than half of 10, the best approximation is obtained by rounding up:

>>> q + 1
7205759403792794

Therefore the best possible approximation for 1/10 in "IEEE-754 double precision" is this value over 2 ** 56, that is:

7205759403792794/72057594037927936

Note that since the rounding was done upward, the result is actually slightly greater than 1/10; if we hadn't rounded up, the quotient would have been slightly less than 1/10. But in no case is it exactly 1/10!

So the computer never "sees" 1/10: what it sees is the exact fraction given above, the best approximation using IEEE-754 double precision floating point numbers:

>>> 0.1 * 2 ** 56
7205759403792794.0

If we multiply this fraction by 10 ** 30, we can read off the 30 most significant digits of its decimal expansion.

>>> 7205759403792794 * 10 ** 30 // 2 ** 56
100000000000000005551115123125L

meaning that the exact value stored in the computer is approximately equal to the decimal value 0.100000000000000005551115123125. In versions prior to Python 2.7 and Python 3.1, Python rounded these values to 17 significant decimal places, displaying "0.10000000000000001". In current versions of Python, the displayed value is the shortest value that gives exactly the same representation when converted back to binary, simply displaying "0.1".

你列表最软的妹 2025-01-24 21:35:50

The trap with floating point numbers is that they look like decimal but they work in binary.

The only prime factor of 2 is 2, while 10 has prime factors of 2 and 5. The result of this is that every number that can be written exactly as a binary fraction can also be written exactly as a decimal fraction but only a subset of numbers that can be written as decimal fractions can be written as binary fractions.

A floating point number is essentially a binary fraction with a limited number of significant digits. If you go past those significant digits then the results will be rounded.

When you type a literal in your code or call the function to parse a floating point number from a string, it expects a decimal number, and it stores a binary approximation of that decimal number in the variable.

When you print a floating point number or call the function to convert one to a string it prints a decimal approximation of the floating point number. It is possible to convert a binary number to decimal exactly, but no language I'm aware of does that by default when converting to a string*. Some languages use a fixed number of significant digits, others use the shortest string that will "round trip" back to the same floating point value.

* Python does convert exactly when converting a floating point number to a "decimal.Decimal". This is the easiest way I know of to obtain the exact decimal equivalent of a floating point number.
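For instance, the following prints the exact decimal expansion of the double nearest to 0.1, and float.as_integer_ratio() exposes the same value as an exact fraction of two integers:

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> (0.1).as_integer_ratio()   # the same value as a dyadic fraction
(3602879701896397, 36028797018963968)
>>> 36028797018963968 == 2 ** 55
True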

忆沫 2025-01-24 21:35:50

Since Python 3.5, you have been able to use the math.isclose() function for testing approximate equality:

>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)
True
>>> 0.1 + 0.2 == 0.3
False
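math.isclose() also accepts rel_tol and abs_tol keyword arguments. One thing worth knowing: the default relative tolerance is useless when comparing against zero, so pass an absolute tolerance there:

>>> math.isclose(1e-10, 0.0)                # relative tolerance fails near zero
False
>>> math.isclose(1e-10, 0.0, abs_tol=1e-9)
True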
扎心 2025-01-24 21:35:50

Another way to look at this: 64 bits are used to represent numbers. As a consequence, no more than 2**64 = 18,446,744,073,709,551,616 different numbers can be represented exactly.

However, math says there are infinitely many decimals between 0 and 1 alone. IEEE 754 defines an encoding that uses these 64 bits efficiently for a much larger number range, plus NaN and +/- Infinity, so there are gaps between the exactly represented numbers, filled with numbers that are only approximated.

Unfortunately 0.3 sits in a gap.
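With Python 3.9 or later you can inspect that gap directly via math.nextafter(): the representable double just above the one chosen for 0.3 is exactly the value that 0.1 + 0.2 lands on:

>>> import math
>>> math.nextafter(0.3, 1.0)   # next representable double above 0.3
0.30000000000000004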

轻拂→两袖风尘 2025-01-24 21:35:50

Imagine working in base ten with, say, 8 digits of accuracy. You check whether

1/3 + 2 / 3 == 1

and learn that this returns false. Why? Well, as real numbers we have

1/3 = 0.333.... and 2/3 = 0.666....

Truncating at eight decimal places, we get

0.33333333 + 0.66666666 = 0.99999999

which is, of course, different from 1.00000000 by exactly 0.00000001.


The situation for binary numbers with a fixed number of bits is exactly analogous. As real numbers, we have

1/10 = 0.0001100110011001100... (base 2)

and

1/5 = 0.0011001100110011001... (base 2)

If we truncated these to, say, seven bits, then we'd get

0.0001100 + 0.0011001 = 0.0100101

while on the other hand,

3/10 = 0.01001100110011... (base 2)

which, truncated to seven bits, is 0.0100110, and these differ by exactly 0.0000001.


The exact situation is slightly more subtle because these numbers are typically stored in scientific notation. So, for instance, instead of storing 1/10 as 0.0001100 we may store it as something like 1.10011 * 2^-4, depending on how many bits we've allocated for the exponent and the mantissa. This affects how many digits of precision you get for your calculations.

The upshot is that because of these rounding errors you essentially never want to use == on floating-point numbers. Instead, you can check if the absolute value of their difference is smaller than some fixed small number.
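For example, in Python:

>>> 0.1 + 0.2 == 0.3
False
>>> abs((0.1 + 0.2) - 0.3) < 1e-9
True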

别靠近我心 2025-01-24 21:35:50

It's actually pretty simple. When you have a base-10 system (like ours), it can only express exactly those fractions whose denominators use only the prime factors of the base. The prime factors of 10 are 2 and 5. So 1/2, 1/4, 1/5, 1/8, and 1/10 can all be expressed cleanly because the denominators all use prime factors of 10. In contrast, 1/3, 1/6, and 1/7 are all repeating decimals because their denominators use a prime factor of 3 or 7. In binary (base 2), the only prime factor is 2, so you can only cleanly express fractions whose denominators contain nothing but 2 as a prime factor. In binary, 1/2, 1/4, and 1/8 would all be expressed cleanly, while 1/5 or 1/10 would be repeating fractions. So 0.1 and 0.2 (1/10 and 1/5), while clean decimals in a base-10 system, are repeating fractions in the base-2 system the computer operates in. When you do math on these repeating fractions, you end up with leftovers which carry over when you convert the computer's base-2 (binary) number into a more human-readable base-10 number.

From https://0.30000000000000004.com/
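You can see those leftovers directly: float.hex() in Python prints the stored bits, and the infinitely repeating binary expansion of 1/10 shows up as a run of repeating hex digits:

>>> (0.1).hex()   # the final 'a' is the last digit, rounded up
'0x1.999999999999ap-4'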
