My best guess is that this is due to the liberty the runtime has to perform floating-point operations at a higher precision than the types involved and then truncate the result to the type's precision when assigning:
The CLI specification in section 12.1.3 dictates an exact precision for floating point numbers, float and double, when used in storage locations. However, it allows for the precision to be exceeded when floating point numbers are used in other locations like the execution stack, arguments, return values, etc. What precision is used is left to the runtime and underlying hardware. This extra precision can lead to subtle differences in floating point evaluations between different machines or runtimes.
Source here.
In your first example, t % (1f / stepAmount) can be performed entirely at a higher precision than float and then truncated only when the result is assigned to remainder, while in the second example, 1f / stepAmount is truncated and assigned to fractions prior to the modulus operation.
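To make the difference concrete, here is a minimal C# sketch of the two evaluation orders. The values chosen for t and stepAmount are assumptions for illustration, since the original snippet isn't reproduced here:

```csharp
// Minimal sketch (assumed values for t and stepAmount).
float stepAmount = 100f;   // assumption: not taken from the original code
float t = 0.07f;           // assumption: not taken from the original code

// Variant 1: the whole expression may run at extended intermediate
// precision; only the assignment truncates the result to float.
float remainder1 = t % (1f / stepAmount);

// Variant 2: the division is truncated to float first, so the modulus
// operates on an already-rounded operand.
float fractions = 1f / stepAmount;
float remainder2 = t % fractions;

// On runtimes/hardware that use extended precision for intermediates,
// remainder1 and remainder2 can differ.
System.Console.WriteLine($"{remainder1} vs {remainder2}");
```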
As to why making stepAmount a const makes both modulus operations consistent: 1f / stepAmount immediately becomes a constant expression that is evaluated and truncated to float precision at compile time, which is no different from writing 0.01f and essentially makes both examples equivalent.
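A sketch of the const case under the same assumed values; the compile-time folding of 1f / stepAmount is what makes the two forms agree:

```csharp
// With const, 1f / stepAmount is a constant expression the compiler
// folds to a float literal at compile time.
const float stepAmount = 100f; // assumption: not taken from the original code
float t = 0.07f;               // assumption: not taken from the original code

float remainder1 = t % (1f / stepAmount); // folded to t % 0.01f by the compiler
float fractions = 1f / stepAmount;        // fractions is that same float constant
float remainder2 = t % fractions;

// Both modulus operations now receive identical float operands.
System.Console.WriteLine(remainder1 == remainder2); // expected: True
```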