Python dictionary floats

Posted on 2024-12-12 02:14:37


I came across a strange behavior in Python (2.6.1) dictionaries:

The code I have is:

new_item = {'val': 1.4}
print new_item['val']
print new_item

And the result is:

1.4
{'val': 1.3999999999999999}

Why is this? It happens with some numbers, but not others. For example:

  • 0.1 becomes 0.1000...001
  • 0.4 becomes 0.4000...002
  • 0.7 becomes 0.6999...996
  • 1.9 becomes 1.8999...999
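The same values show the difference outside a dictionary when repr() is used directly; a minimal reproduction sketch (Python 2.6, where str() rounds floats to 12 significant digits and repr() shows 17):

for x in (0.1, 0.4, 0.7, 1.9, 1.4):
    print str(x), repr(x)   # e.g. "1.4 1.3999999999999999"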


Comments (2)

我一向站在原地 2024-12-19 02:14:37


This is not Python-specific; the issue appears in every language that uses binary floating point (which is pretty much every mainstream language).

From the Floating-Point Guide:

Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.

When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.

Some values can be exactly represented as binary fractions, and output formatting routines will often display the shortest number that is closer to the actual value than to any other floating-point number, which masks some of the rounding errors.
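To make this concrete for the question above: printing a dict shows the repr() of each value, while a bare print uses str(). A short sketch (assuming Python 2.6, where str() rounds floats to 12 significant digits and repr() prints 17):

x = 1.4
print str(x)        # 1.4  (what "print new_item['val']" shows)
print repr(x)       # 1.3999999999999999  (what printing the dict shows)
print '%.20f' % x   # 1.39999999999999991118  (a closer look at the stored double)

The dictionary does not change the value; it only displays it with more digits.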

墨落成白 2024-12-19 02:14:37


This problem is related to floating point representations in binary, as others have pointed out.

But I thought you might want something that would help you solve your implied problem in Python.

It's unrelated to dictionaries, so if I were you, I would remove that tag.

If you can use a fixed-precision decimal number for your purposes, I would recommend you check out the Python decimal module. From the page (emphasis mine):

  • Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.” – excerpt from the decimal arithmetic specification.

  • Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.

  • The exactness carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants.
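A minimal sketch of that suggestion with the standard-library decimal module (values built from strings so they are exact; Python 2 print syntax to match the question):

from decimal import Decimal

new_item = {'val': Decimal('1.4')}
print new_item['val']   # 1.4
print new_item          # {'val': Decimal('1.4')}

# Decimal arithmetic behaves the way school arithmetic does:
print Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3')   # 0.0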
