Unwanted rounding when subtracting numpy arrays in Python
I'm running into an issue with Python automatically rounding very small numbers (smaller than 1e-8) when subtracting an array from a single float. Take this example:
import numpy as np
float(1) - np.array([1e-10, 1e-5])
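# this typically displays as array([1.     , 0.99999]), as if the 1e-10 had been rounded away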
Any thoughts on how to force Python not to round? In some cases this forces me to divide by zero, which is becoming a problem. The same issue arises when subtracting from a numpy array.
2 Answers
Mostly, it's just the repr of numpy arrays that's fooling you. Consider your example above: the 1e-10 difference isn't actually being rounded away to zero; it's just the pretty-printing of numpy arrays (which rounds to 8 significant digits by default) that shows the first element as 1.
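For instance, a minimal sketch of such a check (reusing the arrays from the question; the formatting calls are just one way to see the full value):

import numpy as np

x = float(1) - np.array([1e-10, 1e-5])
print(repr(x))                  # shows roughly array([1.     , 0.99999])
print(1 - x)                    # roughly [1e-10 1e-05]: the small terms were never lost
print('{:.20f}'.format(x[0]))   # clearly not exactly 1.0 once enough digits are printed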
This can be controlled by numpy.set_printoptions. Of course, numpy is fundamentally using limited-precision floats. The whole point of numpy is to be a memory-efficient container for arrays of similar data, so there's no equivalent of the decimal class in numpy. However, 64-bit floats have a decent range of precision, and you won't hit too many problems with 1e-10 and 1e-5. If you need it, there's also a numpy.float128 dtype, but operations will be much slower than using native floats.
I guess all that depends on how Python and the underlying C libraries handle very small floating-point numbers, which at a certain point tend to lose precision.
If you need that level of precision, imho you should rely on something different, such as fractional (rational) numbers.
I don't know whether there already is something to handle that, but if you could manage to represent those numbers in a different way (such as 1/10000000000 and 1/100000) and then compute the floating-point result only at the end of all the calculations, you should avoid all these problems. (Of course, you need some class that automagically handles fractional arithmetic, so you don't have to reimplement formulas, etc.)
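(For what it's worth, Python's standard-library fractions module already does this kind of exact rational arithmetic; a minimal sketch of the idea:)

from fractions import Fraction

a = Fraction(1, 10**10)   # exactly 1/10000000000
b = Fraction(1, 10**5)    # exactly 1/100000
print(1 - a, 1 - b)       # exact results: 9999999999/10000000000 99999/100000
print(float(1 - a))       # convert to a float only at the very end of the calculation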