Given the following snippet:
#include <stdio.h>
typedef signed long long int64;
typedef signed int int32;
typedef signed char int8;
int main()
{
printf("%i
", sizeof(int8));
printf("%i
", sizeof(int32));
printf("%i
", sizeof(int64));
int8 a = 100;
int8 b = 100;
int32 c = a * b;
printf("%i
", c);
int32 d = 1000000000;
int32 e = 1000000000;
int64 f = d * e;
printf("%I64d
", f);
}
The output with MinGW GCC 3.4.5 at -O0 is:
1
4
8
10000
-1486618624
For the first multiplication, the operands are converted to int32 internally (according to the assembler output). For the second multiplication, no such conversion to int64 takes place. I'm not sure whether the results differ because the program was running on IA-32, or because this is defined somewhere in the C standard. Either way, I'd like to know whether this exact behavior is specified somewhere (ISO/IEC 9899?), because I want to better understand why and when I have to cast manually (I'm having problems porting a program from a different architecture).
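To make the intent concrete, here is a minimal sketch of the kind of manual cast I mean (my own illustration, not taken from the program I'm porting; %lld assumes a C99-conforming printf, which is why the snippet above uses MinGW's %I64d instead):

#include <stdio.h>

typedef signed long long int64;
typedef signed int int32;

int main(void)
{
    int32 d = 1000000000;
    int32 e = 1000000000;
    /* Casting one operand to int64 makes the multiplication
       happen in 64 bits instead of overflowing in 32 bits. */
    int64 f = (int64)d * e;
    printf("%lld\n", f);   /* expected: 1000000000000000000 */
    return 0;
}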
See Question&Answers more detail:
os 与恶龙缠斗过久,自身亦成为恶龙;凝视深渊过久,深渊将回以凝视…