Release mode uses double precision even for float variables
My algorithm is calculating the epsilon for single precision floating point arithmetic. It is supposed to be something around 1.1921e-007. Here is the code:
static void Main(string[] args) {
    // start with some small magic number
    float a = 0.000000000000000013877787807814457f;
    for (; ; ) {
        // add the small a to 1
        float temp = 1f + a;
        // break, if a + 1 really is > '1'
        if (temp - 1f != 0f) break;
        // otherwise a is too small -> increase it
        a *= 2f;
        Console.Out.WriteLine("current increment: " + a);
    }
    Console.Out.WriteLine("Found epsilon: " + a);
    Console.ReadKey();
}
In debug mode, it gives the following reasonable output (abbreviated):
current increment: 2,775558E-17
current increment: 5,551115E-17
...
current increment: 2,980232E-08
current increment: 5,960464E-08
current increment: 1,192093E-07
Found epsilon: 1,192093E-07
However, when switching to release mode (no matter whether optimization is on or off!), the code gives the following result:
current increment: 2,775558E-17
current increment: 5,551115E-17
current increment: 1,110223E-16
current increment: 2,220446E-16
Found epsilon: 2,220446E-16
which corresponds to the epsilon for double precision (2^-52 ≈ 2.220446E-16, versus 2^-23 ≈ 1.192093E-07 for single). So I assume some optimization causes the computations to be done on double values. Of course the result is wrong in this case!
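A quick sanity check of the two machine epsilons (just for reference, not part of the program above):

// 2^-23 is the single precision epsilon, 2^-52 the double precision one
Console.WriteLine((float)Math.Pow(2, -23)); // ≈ 1.192093E-07
Console.WriteLine(Math.Pow(2, -52));        // ≈ 2.22044604925031E-16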
Also: this happens only when targeting x86 in Release in the project options. Again: optimization on/off does not matter. I am on 64-bit Windows 7, VS 2010 Ultimate, targeting .NET 4.0.
What might cause that behaviour? Some WOW64 issue? How can I get around it in a reliable way? How can I prevent the CLR from generating code that uses double precision instead of single precision calculations?
Note: switching to "Any CPU" or even "x64" as the platform target is not an option - even though the problem does not occur there. We have some native libraries in different versions for 32/64 bit, so the target must be specific.
Comments (1)
As discussed in the comments, this is expected. It can be side-stepped by removing the JIT's ability to keep the value in a register (which will be wider than the actual value) - by forcing it down to a field (which has a clearly-defined size):
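A minimal sketch of that approach (the WorkingContext wrapper is illustrative, not necessarily the answer's original code) - the intermediate sum is written to a class field, forcing it out of the wide register and rounding it to a true 32-bit float:

class WorkingContext {
    // a real 32-bit storage location: storing the sum here forces it
    // out of the (wider) x87 register and rounds it to single precision
    public float Value;
}

static void Main(string[] args) {
    WorkingContext temp = new WorkingContext();
    // start with some small magic number
    float a = 0.000000000000000013877787807814457f;
    for (; ; ) {
        // the field store narrows 1 + a to 32 bits before the comparison
        temp.Value = 1f + a;
        // break, if 1 + a really is > 1 at single precision
        if (temp.Value - 1f != 0f) break;
        // otherwise a is too small -> increase it (doubling is exact)
        a *= 2f;
        Console.Out.WriteLine("current increment: " + a);
    }
    Console.Out.WriteLine("Found epsilon: " + a);
    Console.ReadKey();
}

With the comparison value forced through the field, the x86 release build should again stop at 1,192093E-07.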
Interestingly, I tried this with a struct first, but the JIT was able to see past my cheating (presumably because it is all on the stack).
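A sketch of what that failed struct attempt might have looked like:

// does NOT work: the struct lives on the stack, so the JIT can keep
// its field enregistered at the wider precision anyway
struct WorkingValue {
    public float Value;
}

Only the heap-allocated class field reliably pins the value to its declared 32-bit size.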