Why is 0.1 + 0.2 == 0.3 in D?

Posted 2024-11-27 06:09:55


assert(0.1 + 0.2 != 0.3); // shall be true

is my favorite check that a language uses native floating point arithmetic.

C++

#include <cstdio>

int main()
{
   printf("%d\n", (0.1 + 0.2 != 0.3));
   return 0;
}

Output:

1

http://ideone.com/ErBMd

Python

print(0.1 + 0.2 != 0.3)

Output:

True

http://ideone.com/TuKsd

Other examples

Why is this not true for D? As I understand it, D uses native floating-point numbers. Is this a bug? Do they use some specific number representation? Something else? Pretty confusing.

D

import std.stdio;

void main()
{
   writeln(0.1 + 0.2 != 0.3);
}

Output:

false

http://ideone.com/mX6zF


UPDATE

Thanks to LukeH. This is an effect of Floating Point Constant Folding described there.

Code:

import std.stdio;

void main()
{
   writeln(0.1 + 0.2 != 0.3); // constant folding is done in real precision

   auto a = 0.1;
   auto b = 0.2;
   writeln(a + b != 0.3);     // standard calculation in double precision
}

Output:

false
true

http://ideone.com/z6ZLk
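
To make the difference visible as numbers rather than just booleans, the intermediate values can be printed with a few extra digits. A minimal sketch, assuming the same compiler behaviour as above (literal expressions folded at real precision, variables computed in double):

import std.stdio;

void main()
{
   auto a = 0.1;
   auto b = 0.2;

   // Run-time double arithmetic overshoots 0.3 by one ulp.
   writefln("%.20f", a + b);      // 0.30000000000000004441

   // The folded literal expression lands on the same double as the
   // literal 0.3 itself (on the setup shown in the question).
   writefln("%.20f", 0.1 + 0.2);  // 0.29999999999999998890
   writefln("%.20f", 0.3);        // 0.29999999999999998890
}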


Comments (3)

只为一人 2024-12-04 06:09:56

It's probably being optimized to (0.3 != 0.3). Which is obviously false. Check optimization settings, make sure they're switched off, and try again.
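
One way to see that this is a compile-time effect rather than an optimiser switch is to force both evaluation contexts explicitly. A small sketch, assuming DMD folds the enum initialiser the same way it folds the literal expression in the question (newer compilers may fold at the declared double precision instead):

import std.stdio;

void main()
{
   // An enum initialiser is always evaluated by the compiler, which is
   // where the folding in question happens.
   enum folded = 0.1 + 0.2 != 0.3;

   // Variables force the addition to happen at run time in double precision.
   double a = 0.1, b = 0.2;
   bool runtime = a + b != 0.3;

   writeln(folded);   // false on the setup shown in the question
   writeln(runtime);  // true
}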

过去的过去 2024-12-04 06:09:56

According to my interpretation of the D language specification, floating point arithmetic on x86 would use 80 bits of precision internally, instead of only 64 bits.

One would have to check, however, whether that is enough to explain the result you observe.
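
That check can be done directly: D's L suffix makes a literal real (80-bit x87 extended on x86), so the whole comparison can be repeated at that precision. A minimal sketch, assuming an x86 target where real actually is 80 bits wide:

import std.stdio;

void main()
{
   // All three literals are real (80-bit), so the sum and the comparison
   // are carried out entirely in extended precision.
   writeln(0.1L + 0.2L != 0.3L);  // false: at 80 bits the rounded sum of
                                  // 0.1 and 0.2 equals the rounding of 0.3

   // Restricted to double, the familiar mismatch reappears.
   double a = 0.1, b = 0.2;
   writeln(a + b != 0.3);         // true
}

So the extra precision is indeed enough: at 80 bits the representation errors of 0.1 and 0.2 are so small that their rounded sum coincides with the 80-bit rounding of 0.3, which is what the constant folder ends up comparing.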

感情废物 2024-12-04 06:09:55

(Flynn's answer is the correct answer. This one addresses the problem more generally.)


You seem to be assuming, OP, that the floating-point inaccuracy in your code is deterministic and predictably wrong (in a way, your approach is the polar opposite of that of people who don't understand floating point yet).

Although (as Ben points out) floating-point inaccuracy is deterministic, from the point of view of your code, if you are not being very deliberate about what's happening to your values at every step, this will not be the case. Any number of factors could lead to 0.1 + 0.2 == 0.3 succeeding, compile-time optimisation being one, tweaked values for those literals being another.

Rely here neither on success nor on failure; do not rely on floating-point equality either way.
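
If the real question is "are these values close enough?", say so explicitly instead of using == or !=. A small sketch, assuming a reasonably recent Phobos where std.math.isClose is available (older releases spelled this approxEqual):

import std.math : isClose;
import std.stdio;

void main()
{
   auto a = 0.1;
   auto b = 0.2;

   // Exact equality depends on constant folding, literal values, and the
   // precision of intermediate results; it can go either way.
   writeln(a + b == 0.3);         // false here, but do not rely on it

   // A tolerance-based comparison expresses the actual intent.
   writeln(isClose(a + b, 0.3));  // true
}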
