Why is 0.1 + 0.2 == 0.3 in D?
assert(0.1 + 0.2 != 0.3); // shall be true
is my favorite check that a language uses native floating point arithmetic.
C++
#include <cstdio>

int main()
{
    // Prints 1: in IEEE double precision the sum of 0.1 and 0.2
    // is not exactly equal to the double nearest 0.3.
    printf("%d\n", (0.1 + 0.2 != 0.3));
    return 0;
}
Output:
1
Python
print(0.1 + 0.2 != 0.3)
Output:
True
Other examples
Why is this not true for D? As I understand it, D uses native floating-point numbers. Is this a bug? Does it use some specific number representation? Something else? Pretty confusing.
D
import std.stdio;

void main()
{
    writeln(0.1 + 0.2 != 0.3);
}
Output:
false
UPDATE
Thanks to LukeH. This is an effect of Floating Point Constant Folding, described there.
Code:
import std.stdio;

void main()
{
    writeln(0.1 + 0.2 != 0.3); // constant folding is done in real precision
    auto a = 0.1;
    auto b = 0.2;
    writeln(a + b != 0.3);     // standard calculation in double precision
}
Output:
false
true
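To make the two evaluation paths explicit, here is a minimal sketch (the variable names are my own, and the first result assumes the compiler folds compile-time constants at real precision, as described above): an enum forces the sum to be computed at compile time, while runtime double variables get a plain double-precision addition.

import std.stdio;

void main()
{
    // Forced compile-time evaluation: the sum is constant-folded,
    // which (per the update above) happens at real precision.
    enum folded = 0.1 + 0.2;

    // Plain run-time evaluation in double precision.
    double a = 0.1;
    double b = 0.2;
    double summed = a + b;

    writeln(folded != 0.3); // expected false, as in the folded case above
    writeln(summed != 0.3); // expected true: the double-precision sum differs
}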
Comments (3)
It's probably being optimized to (0.3 != 0.3), which is obviously false. Check the optimization settings, make sure they're switched off, and try again.
According to my interpretation of the D language specification, floating-point arithmetic on x86 uses 80 bits of precision internally, instead of only 64 bits.
One would have to check, however, whether that is enough to explain the result you observe.
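A quick way to confirm the extra precision from within D itself (a minimal sketch; the reported values depend on the target, 80-bit x87 extended precision being the classic x86 case):

import std.stdio;

void main()
{
    // Built-in properties report each floating type's precision.
    writeln("double: ", double.mant_dig, " significand bits, ",
            double.dig, " decimal digits");
    writeln("real:   ", real.mant_dig, " significand bits, ",
            real.dig, " decimal digits"); // 64 bits / 18 digits on x86
}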
(Flynn's answer is the correct answer. This one addresses the problem more generally.)
You seem to be assuming, OP, that the floating-point inaccuracy in your code is deterministic and predictably wrong (in a way, your approach is the polar opposite of that of people who don't understand floating point yet).
Although (as Ben points out) floating-point inaccuracy is deterministic, from the point of view of your code, if you are not being very deliberate about what's happening to your values at every step, this will not be the case. Any number of factors could lead to
0.1 + 0.2 == 0.3
succeeding, compile-time optimisation being one, tweaked values for those literals being another. Rely here neither on success nor on failure; do not depend on floating-point equality either way.
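For completeness, a hedged sketch of the usual alternative in D, comparing with a tolerance instead of exact equality (nearlyEqual and its tolerance are my own illustration, not a library function; the right tolerance is problem-dependent):

import std.math : abs;
import std.stdio;

// Illustrative helper: treat two doubles as equal when they differ by
// less than an absolute tolerance. Real code may also need a relative
// tolerance when the magnitudes involved are large.
bool nearlyEqual(double x, double y, double tol = 1e-9)
{
    return abs(x - y) < tol;
}

void main()
{
    double a = 0.1;
    double b = 0.2;
    writeln(a + b == 0.3);            // false here: exact equality is fragile
    writeln(nearlyEqual(a + b, 0.3)); // true: tolerance-based comparison
}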