This might be very basic, but I'm curious what the reason is.
When performing operations such as multiplication and division on operands of different types (int, float, etc.), what decides which type the operation is carried out in?
For example, if I do the following:
float a = 5 / 10;
I will get "0" as the result, since 5 and 10 are both int constants, so the division is done as integer division, and only afterwards is the result converted to a float. Right?
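To spell that out, here is a sketch of what I assume the compiler effectively does (the tmp variable is only for illustration):

int tmp = 5 / 10;     /* both operands are int, so integer division: 0 */
float a = (float)tmp; /* the conversion to float happens only after the division */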
But if we instead do:
float a = (float)5 / 10;
We get 0.5 instead.
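Here is a minimal complete program (assuming a standard C compiler) that shows both results side by side:

#include <stdio.h>

int main(void)
{
    float a = 5 / 10;        /* integer division first, then conversion: 0.0 */
    float b = (float)5 / 10; /* 5 is cast to float, so 10 is converted too: 0.5 */
    printf("%f %f\n", a, b); /* prints 0.000000 0.500000 */
    return 0;
}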
How does the decision-making work in C when float is preferred over int in a case like this?