Inaccurate division of doubles (Visual C++ 2008)
I have some code to convert a time value returned from QueryPerformanceCounter to a double value in milliseconds, as this is more convenient to count with.
The function looks like this:
#include <windows.h>

// Converts the current QueryPerformanceCounter value to milliseconds.
double timeGetExactTime() {
    LARGE_INTEGER timerPerformanceCounter, timerPerformanceFrequency;
    QueryPerformanceCounter(&timerPerformanceCounter);
    if (QueryPerformanceFrequency(&timerPerformanceFrequency)) {
        return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
    }
    return 0.0;
}
The problem I'm having recently (I don't think I had this problem before, and no changes have been made to the code) is that the result is not very accurate. The result does not contain any decimals, but it is even less accurate than 1 millisecond.
When I enter the expression in the debugger, the result is as accurate as I would expect.
I understand that a double cannot hold the full precision of a 64-bit integer, but at this point the PerformanceCounter only requires 46 bits (and a double should be able to store 52 bits without loss).
Furthermore, it seems odd that the debugger would use a different format to do the division.
Here are some results I got. The program was compiled in Debug mode, with the Floating Point Model in the C++ options set to the default, Precise (/fp:precise).
timerPerformanceCounter.QuadPart: 30270310439445
timerPerformanceFrequency.QuadPart: 14318180
double perfCounter = (double)timerPerformanceCounter.QuadPart;
30270310439445.000
double perfFrequency = (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
14318.179687500000
double result = perfCounter / perfFrequency;
2114117248.0000000
return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
2114117248.0000000
Result with same expression in debugger:
2114117188.0396111
Result of perfTimerCount / perfTimerFreq in debugger:
2114117234.1810646
Result of 30270310439445 / 14318180 in calculator:
2114117188.0396111796331656677036
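For reference, a minimal stand-alone repro using the exact values quoted above (this is only a sketch; whether it shows the rounded result depends on the compiler and floating-point settings in use):

#include <cstdio>

int main() {
    // Exact values quoted above.
    long long counter   = 30270310439445LL;  // timerPerformanceCounter.QuadPart
    long long frequency = 14318180LL;        // timerPerformanceFrequency.QuadPart
    double result = (double)counter / ((double)frequency / 1000.0);
    printf("%.7f\n", result);  // calculator gives ~2114117188.0396111; the program observed 2114117248.0
    return 0;
}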
Does anyone know why the accuracy is different in the debugger's Watch compared to the result in my program?
Update: I tried subtracting 30270310439445 from timerPerformanceCounter.QuadPart before doing the conversion and division, and it does appear to be accurate in all cases now.
Maybe the reason I'm only seeing this behavior now is that my computer's uptime is 16 days, so the value is larger than I'm used to?
So it does appear to be a division accuracy issue with large numbers, but that still doesn't explain why the division was still correct in the Watch window.
Does it use a higher-precision type than double for its results?
Comments (2)
Adion,
If you don't mind the performance hit, cast your QuadPart numbers to decimal instead of double before performing the division. Then cast the resulting number back to double.
You are correct about the size of the numbers. It throws off the accuracy of the floating point calculations.
For more about this than you probably ever wanted to know, see:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
http://docs.sun.com/source/806-3568/ncg_goldberg.html
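Native C++ has no built-in decimal type, so here is one possible sketch of the same idea (the function name is made up, and <windows.h> is assumed as in the question's code): split the division so that only a small remainder ever goes through floating point.

double timeGetExactTimeIntegerSplit() {
    LARGE_INTEGER counter, frequency;
    QueryPerformanceCounter(&counter);
    if (!QueryPerformanceFrequency(&frequency))
        return 0.0;
    // Whole seconds and leftover ticks, both exact in 64-bit integers.
    LONGLONG seconds   = counter.QuadPart / frequency.QuadPart;
    LONGLONG remainder = counter.QuadPart % frequency.QuadPart;
    // Only the small remainder participates in the floating-point division,
    // so the large counter value never has to be divided as a double.
    return (double)seconds * 1000.0
         + (double)remainder * 1000.0 / (double)frequency.QuadPart;
}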
Thanks, using decimal would probably be a solution too.
For now I've taken a slightly different approach, which also works well, at least as long as my program doesn't run longer than a week or so without restarting.
I just remember the performance counter value from when my program started, and subtract it from the current counter before converting to double and doing the division.
I'm not sure which solution would be fastest, I guess I'd have to benchmark that first.
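A minimal sketch of that approach (names such as timeInit and g_startCounter are only illustrative, and <windows.h> is assumed as before):

static LARGE_INTEGER g_startCounter;

void timeInit() {
    // Call once at program start to record the baseline counter value.
    QueryPerformanceCounter(&g_startCounter);
}

double timeGetExactTimeSinceStart() {
    LARGE_INTEGER counter, frequency;
    QueryPerformanceCounter(&counter);
    if (!QueryPerformanceFrequency(&frequency))
        return 0.0;
    // Subtracting the baseline keeps the dividend small, so the double
    // division stays comfortably within the 52-bit mantissa.
    LONGLONG elapsed = counter.QuadPart - g_startCounter.QuadPart;
    return (double)elapsed / ((double)frequency.QuadPart / 1000.0);
}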