Multiplying char and int in C, part 2

Posted 2024-08-29 13:01:23


If I do the following:

int c0 = CHAR_MAX; //8 bit
int c1 = CHAR_MAX; //8-bit
int i = c0*c1; //store in 32-bit variable
printf("%d\n", i); //prints 16129

We can see that there is no problem with two 8-bit numbers being multiplied together and producing a 32-bit output.

However, if I do

int i0 = INT_MAX; //32-bit
int i1 = INT_MAX; //32 bit variable
long long int ll = i0*i1; //store in 64-bit variable
printf("%lld\n", ll); //prints 1..overflow!!

In this case, two 32-bit variables were multiplied together, overflowed, and then were assigned to the 64-bit variable.

So why did this overflow happen when multiplying the ints, but not the chars? Is it dependent on the default word-size of my machine? (32-bits)

Comments (3)

心奴独伤 2024-09-05 13:01:23


You should change your second code sample like this:

int i0 = INT_MAX; //32-bit
int i1 = INT_MAX; //32 bit variable
long long ll = ((long long)i0)*i1; //compute and store in 64-bit variable
printf("%lld\n", ll);

that is, cast (at least one of) the ints to 64 bits before multiplying them. Otherwise the overflow happens because the result is first computed in a temporary of type int before being assigned to the long long variable. An expression is evaluated at the precision of its highest-precision operand (the usual arithmetic conversions), and here both operands are plain int.

In the first example, an int is large enough to hold the result of multiplying chars, so there is no overflow.

As a side note, naming your variable ll is not recommended as it is very difficult to differentiate between the digit '1' and the lowercase letter 'l'.

做个ˇ局外人 2024-09-05 13:01:23


There's a logic fault in your explanation of what is going on.

On at least Linux systems, CHAR_MAX certainly isn't an 8-bit number. It's a (more or less) plain preprocessor define, like so:

#  define SCHAR_MAX     127

/* Maximum value an `unsigned char' can hold.  (Minimum is 0.)  */
#  define UCHAR_MAX     255

/* Minimum and maximum values a `char' can hold.  */
#  ifdef __CHAR_UNSIGNED__
#   define CHAR_MIN     0
#   define CHAR_MAX     UCHAR_MAX
#  else
#   define CHAR_MIN     SCHAR_MIN
#   define CHAR_MAX     SCHAR_MAX
#  endif

So, for a system with signed char, the last two lines are in effect, which means that when you write CHAR_MAX in your code, the compiler sees a plain 127, which has type int.

This means that the multiplication CHAR_MAX * CHAR_MAX happens at int precision.

待天淡蓝洁白时 2024-09-05 13:01:23


How typecasting works...


Unless an explicit typecast is specified, any expression is evaluated at the precision of the highest-precision variable/constant involved.

As Peter pointed out, using an explicit typecast in the expression forces higher precision.

NOTE: I didn't get the "long long int" part. Maybe it's been a long time since I saw one... ;-)

  • Does long long int really declare a 64-bit int?

Which compiler are you using?
