Isn't NSDecimalNumber supposed to be able to do base-10 arithmetic correctly?
NSDecimalNumber *minVal = [NSDecimalNumber decimalNumberWithString:@"0.0"];
NSDecimalNumber *maxVal = [NSDecimalNumber decimalNumberWithString:@"111.1"];
NSDecimalNumber *valRange = [maxVal decimalNumberBySubtracting:minVal];
CGFloat floatRange = [valRange floatValue];
NSLog(@"%f", floatRange); //prints 111.099998
Comments (3)
Yes, NSDecimalNumber operates in base-10, but CGFloat doesn't.
Yes, NSDecimalNumber is base-10. Converting to a floating-point type can lose accuracy. In this case, since the example only uses the value for NSLog, just NSLog the NSDecimalNumber itself:
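A minimal sketch of that approach, reusing the variables from the question (not the answer's original snippet):

NSDecimalNumber *minVal = [NSDecimalNumber decimalNumberWithString:@"0.0"];
NSDecimalNumber *maxVal = [NSDecimalNumber decimalNumberWithString:@"111.1"];
NSDecimalNumber *valRange = [maxVal decimalNumberBySubtracting:minVal];
// Log the NSDecimalNumber directly instead of converting to CGFloat first
NSLog(@"%@", valRange); // prints 111.1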
OK, just doing
CGFloat aNumber = 111.1;
shows as 111.099998 in the debugger, even before any operation has been performed on it. Therefore the precision is lost as soon as the value is assigned to the less precise data type, regardless of any arithmetic that happens later.
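To see where the precision goes, here is a rough sketch (an assumed illustration, not part of the answers above) comparing the exact decimal value with its binary float and double conversions:

NSDecimalNumber *range = [NSDecimalNumber decimalNumberWithString:@"111.1"];
NSLog(@"%@", range.stringValue);     // 111.1 (exact decimal representation)
NSLog(@"%.6f", range.floatValue);    // 111.099998 (nearest 32-bit binary float)
NSLog(@"%.15f", range.doubleValue);  // 111.099999999999994 (nearest 64-bit double)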