Estimating the error in calculations using decimals
We're currently using System.Decimal to represent numbers in a .NET application we're developing. I know that decimals are designed to minimize errors due to rounding, but I also know that certain numbers, 1/3 for example, cannot be represented exactly as a decimal, so some calculations will have small rounding errors. I believe the magnitude of this error will be very small and insignificant; however, a colleague disagrees. I would therefore like to be able to estimate the order of magnitude of the error due to rounding in our app. Say, for example, we are calculating a running total of "deals", we will process about 10,000 "deals" per day, and there are about 5-10 decimal operations (add, subtract, divide, multiply, etc.) to compute the new running total for each deal received. What would be the order of magnitude of the rounding error? An answer with a procedure for calculating this would also be nice, so I can learn how to do this for myself in the future.
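To make the 1/3 example concrete, here is a minimal C# sketch (not part of the original question; the class and variable names are just illustrative) showing the size of a single rounding error in System.Decimal:

```csharp
using System;

class SingleOpError
{
    static void Main()
    {
        // System.Decimal carries 28-29 significant decimal digits, so 1/3 is
        // rounded to 28 threes and the round trip comes back short by 1e-28.
        decimal third = 1m / 3m;           // 0.3333333333333333333333333333
        decimal roundTrip = third * 3m;    // 0.9999999999999999999999999999
        Console.WriteLine(1m - roundTrip); // 0.0000000000000000000000000001
    }
}
```

That single-operation error of roughly 1e-28 (relative) is the building block for the day-level estimate asked about above.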
1 Answer
What Every Computer Scientist Should Know About Floating-Point Arithmetic goes into detail on estimating the error in the result of a sequence of floating-point operations, given the precision of the floating-point type. I haven't tried this on any practical program, though, so I'd be interested to know if it's feasible.
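As a rough application of that approach to the numbers given in the question, here is a hedged back-of-envelope sketch in C#. The per-operation bound of about 1e-28 follows from System.Decimal's 28-29 significant digits; the deal and operation counts are taken from the question, and the 1e9 running-total figure at the end is purely an illustrative assumption:

```csharp
using System;

class RunningTotalErrorBound
{
    static void Main()
    {
        // Per-operation relative rounding error for System.Decimal is bounded
        // by roughly 1e-28 (28-29 significant decimal digits).
        const double relErrorPerOp = 1e-28;

        // Figures from the question: ~10,000 deals/day and 5-10 decimal
        // operations per deal; use the upper end to stay conservative.
        const double dealsPerDay = 10_000;
        const double opsPerDeal = 10;

        // Worst case assumes every rounding error has the same sign, so the
        // per-operation bounds simply add: n * eps.
        double ops = dealsPerDay * opsPerDeal;
        double worstCaseRelError = ops * relErrorPerOp; // ~1e-23 relative

        Console.WriteLine($"Operations per day:        {ops:N0}");
        Console.WriteLine($"Worst-case relative error: {worstCaseRelError:E1}");
        // On a running total around 1e9 currency units, 1e-23 relative error is
        // roughly 1e-14 absolute, far below any unit you would ever report.
    }
}
```

In practice the individual rounding errors partly cancel, so the worst-case bound overstates the typical error; a statistical estimate would grow like sqrt(n) rather than n.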