Can anyone explain this floating-point weirdness to me?
I was trying to loop through all possible values of a float like this:
float i = 0.0F;
float epsilon = float.Epsilon;
while (i != float.MaxValue) {
    i += epsilon;
}
but after reaching the value 2.3509887E-38F it stops increasing.
float init = 2.3509887E-38F;
float f = (init + float.Epsilon);
Console.WriteLine(f == init);
I'm just curious, can anyone explain exactly why?
So, I can add epsilon to a float 16777216 times before the rounding error, and that number looks awfully familiar (2^24).
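A quick sketch (my addition, plain C#) to check that arithmetic: float.Epsilon is 2^-149, so scaling it by 2^24 should land exactly on the value where the loop stalls (2^-125), and adding one more epsilon there should be rounded away:

using System;

// Sketch: verifies that the stall value is float.Epsilon * 2^24, i.e. 2^-125.
float stall = 2.3509887E-38F;                        // parses to exactly 2^-125
float epsilon = float.Epsilon;                       // 2^-149, the smallest positive float
Console.WriteLine(epsilon * 16777216.0F == stall);   // True: 2^-149 * 2^24 == 2^-125
Console.WriteLine(stall + epsilon == stall);         // True: the increment is rounded away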
3 Answers
Floating point numbers are imprecise; they can only hold so many significant digits, and anything deemed 'too insignificant' relative to the value currently stored is simply dropped.

The key is the 'floating' part of the name: the variable lets the 'point' float to wherever it is needed to store the value, meaning a floating point variable can store a very large value or a very precise one, since it can 'move' the point wherever it needs to. But it usually can't store a value that is both large and precise.

'Large' is an oversimplification, though; any number with significant digits high up has little room left for precision at the low end. Since you are trying to add something so very small, you lose the ability to represent that precision very quickly.

If you take a very large value, you will find that even adding or subtracting whole numbers results in no change, as in the sketch below.
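For illustration (my example, with values chosen arbitrarily): at large magnitudes the spacing between adjacent floats exceeds 1, so adding a small whole number is rounded away entirely.

using System;

// Sketch: near 1e8 the gap between adjacent floats (one ulp) is 8.
float big = 1e8F;                    // 100,000,000 is exactly representable
Console.WriteLine(big + 1F == big);  // True: +1 is less than half an ulp, so it is rounded away
Console.WriteLine(big + 8F == big);  // False: +8 is exactly one ulp, so the sum is representable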
EDIT: See Stephen Canon's answer below for a more precise explanation, too. ;)
There's a lot of very woolly thinking here. Floating point numbers are not "imprecise". There is no "may". It's a deterministic system, like anything else on a computer.

Don't try to analyze what's going on by looking at decimal representations. The source of this behavior is completely obvious if you look at these numbers in binary or hexadecimal. Let's use binary:

2.3509887E-38 = 1.0000 0000 0000 0000 0000 0000 x 2^-125
float.Epsilon = 0.0000 0000 0000 0000 0000 0001 x 2^-125   (that is, 2^-149)

If we add these two numbers together, the infinitely precise (unrounded) sum is:

1.0000 0000 0000 0000 0000 0001 x 2^-125

Note that the significand of this sum is 25 bits wide (I've grouped the binary digits into sets of four to make them easier to count). This means that it cannot be represented in single precision, so the result of this sum is not this value, but instead this value rounded to the closest representable float. The two closest representable numbers are:

1.0000 0000 0000 0000 0000 000 x 2^-125
1.0000 0000 0000 0000 0000 001 x 2^-125

Our number is exactly halfway in between them. Since you haven't set the rounding mode in your program, we are in the default rounding mode, which is called "round to nearest, ties to even". Because the two options are equally close, the tie is broken by choosing the one whose lowest-order bit is zero. Thus, 2^-125 + 2^-149 is rounded to 2^-125, which is why "it stops increasing".
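A small sketch of my own (assumes .NET Core 2.0 or later, where BitConverter.SingleToInt32Bits is available) that shows the same thing directly from the raw IEEE 754 bit patterns:

using System;

// Sketch: dump the raw bit patterns and show the sum rounding back down.
float init = 2.3509887E-38F;   // exactly 2^-125
float eps = float.Epsilon;     // exactly 2^-149

Console.WriteLine(Convert.ToString(BitConverter.SingleToInt32Bits(init), 2).PadLeft(32, '0'));
// 00000001000000000000000000000000   (biased exponent 2, fraction 0, i.e. 2^-125)
Console.WriteLine(Convert.ToString(BitConverter.SingleToInt32Bits(eps), 2).PadLeft(32, '0'));
// 00000000000000000000000000000001   (smallest subnormal)

// The exact sum needs 25 significand bits; ties-to-even rounds it back to init.
Console.WriteLine(init + eps == init);   // True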
Because epsilon (1.401298E-45) is too small compared to 2.3509887E-38F: when you add the two together there are not enough bits in a float to represent the sum exactly, so the entire epsilon is lost.

Floating-point math on computers doesn't work the way we're taught math at school, because numbers are represented with a finite number of bits. That restricts your math to a certain range of values (a minimum and a maximum) and to a limited precision (the number of digits in the mantissa). If you really do want to visit every possible float value, step through the bit patterns instead of adding epsilon, as in the sketch below.
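One possible way to do that, as a sketch of my own (assumes .NET Core 3.0 or later, where MathF.BitIncrement is available):

using System;

// Sketch: step to the next representable float instead of adding float.Epsilon,
// which stops working as soon as the value grows past 2^-125.
float i = 0.0F;
long count = 0;
while (i != float.MaxValue)
{
    i = MathF.BitIncrement(i);   // next representable float above i
    count++;
}
Console.WriteLine(count);        // about 2.1 billion positive finite floats, so this loop does finish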