The usual arithmetic conversions: a better set of rules?
Consider the following code:
void f(byte x) {print("byte");}
void f(short x) {print("short");}
void f(int x) {print("int");}
void main() {
    byte b1, b2;
    short s1, s2;
    f(b1 + b2); // byte + byte = int
    f(s1 + s2); // short + short = int
}
In C++, C#, D, and Java, both function calls resolve to the "int" overload... I already realize this is "in the specs", but why are languages designed this way? I'm looking for a deeper reason.
To me, it makes sense for the result to be the smallest type able to represent all possible values of both operands, for example:
byte + byte --> byte
sbyte + sbyte --> sbyte
byte + sbyte --> short
short + short --> short
ushort + ushort --> ushort
short + ushort --> int
// etc...
This would eliminate inconvenient code such as short s3 = (short)(s1 + s2), as well as IMO being far more intuitive and easier to understand.
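For concreteness, here is a minimal C# sketch (my own illustration, not part of the original question; the class and variable names are made up) of what the current rules force you to write:

class PromotionDemo
{
    static void Main()
    {
        byte b1 = 1, b2 = 2;
        short s1 = 1, s2 = 2;

        // byte b3 = b1 + b2;          // compile-time error: cannot implicitly convert type 'int' to 'byte'
        // short s3 = s1 + s2;         // compile-time error: cannot implicitly convert type 'int' to 'short'

        byte b3 = (byte)(b1 + b2);     // operands are promoted to int, so a cast back is required
        short s3 = (short)(s1 + s2);

        System.Console.WriteLine($"{b3} {s3}");
    }
}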
Is this a left-over legacy from the days of C, or are there better reasons for the current behavior?
Comments (2)
Quoted from this MSDN blog post:
Also, it's worth noting that adding in these casts only means extra typing, and nothing more. Once the JIT (or possibly the static compiler itself) reduces the arithmetic operation to a basic processor instruction, there's nothing clever going on - it's just whether the number gets treated as an int or byte.

This is a good question, however... not at all an obvious one. Hope that makes the reasons clear for you now.
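To make the quoted point concrete, here is a small C# sketch (mine, not from the MSDN post; the names are made up) showing that the cast only narrows the int result that is computed anyway; the addition itself is the same int add with or without it:

class CastIsJustTruncation
{
    static void Main()
    {
        short s1 = 30000, s2 = 30000;

        int wide = s1 + s2;                 // the addition is performed as int: 60000
        short narrowed = (short)(s1 + s2);  // same int addition, then only the low 16 bits are kept: -5536

        System.Console.WriteLine($"{wide} {narrowed}");
    }
}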
A better set of rules, IMHO, if one provided that the shifting operators could only be used with constant shift values (using shifting functions for variable shift amounts), would be that the result of any arithmetic expression should always evaluate as though it were processed with the largest possible signed or unsigned type, provided either could be statically guaranteed to give correct results (slightly tricky rules would apply in cases where the largest signed type might not be sufficient). If shift operands are only allowed to be constants, one could determine pretty easily at compile time what the largest meaningful value of any operand could be, so I don't see any good reason for compilers not to look at how an operator's result is used when deciding on the implementation of that operator.
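As a concrete illustration of the kind of problem such a rule would address (my own C# sketch, not part of the answer; the names and numbers are made up): under the current rules the multiplication below is carried out in int even though the result is assigned to a long, so it silently wraps, and the programmer has to request the wider arithmetic by hand. A compiler that looked at how the result is used, as suggested above, could perform it in long automatically.

class WideningExample
{
    static void Main()
    {
        int millisPerDay = 24 * 60 * 60 * 1000;          // 86,400,000 milliseconds per day

        long wrapped = millisPerDay * 365 * 100;         // computed in int: overflows and wraps to a wrong value
        long correct = (long)millisPerDay * 365 * 100;   // computed in long: 3,153,600,000,000

        System.Console.WriteLine($"{wrapped} {correct}");
    }
}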