If we use algorithms with double and float arithmetic, how can we guarantee that the results are the same when running them in Python and C, on x86 and x64 Linux and Windows computers, and on ARM microcontrollers?
Generally speaking, you cannot, except possibly by carefully implementing your own FP operations. If you are using the various languages' standard operators and libraries and the underlying floating-point hardware, then you cannot be assured of exact reproducibility of results across different implementations.
In the first place, there is an issue with the internal representation of floating-point numbers. C does not specify the representation to be used, and even if all else were equal, that means you cannot rely on the same C program running on different implementations (e.g. x86_64 and ARM) to compute identical results.
In practice, most everyone uses IEEE 754 floating-point formats these days, and CPython uses the underlying C implementation's double type to back its floats. Even then, however, IEEE allows for a certain small amount of variation between conforming implementations. Even directives and compilation options that request strict conformance to IEEE specifications cannot fully work around that.
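If you want to fail fast when an implementation's double is not the IEEE 754 binary64 format you expect, a compile-time check is a cheap sanity measure. The following is only a sketch, assuming a C11 compiler; it checks necessary conditions (radix, significand width, exponent range) rather than proving full conformance.

```c
#include <assert.h>
#include <float.h>

/* Compile-time sanity checks that double looks like IEEE 754 binary64.
   These are necessary conditions, not a full conformance guarantee;
   the __STDC_IEC_559__ macro, where a compiler defines it, is a
   stronger statement of Annex F conformance. */
static_assert(FLT_RADIX == 2,      "double is not a binary format");
static_assert(DBL_MANT_DIG == 53,  "double does not have a 53-bit significand");
static_assert(DBL_MAX_EXP == 1024, "double does not have binary64's exponent range");

int main(void) { return 0; }
```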
Additionally, you specify that you want to handle both double and float, in both C and Python, but Python doesn't have a native analog of float. Its native floating-point format (probably) corresponds to a C double. Operations performed on different floating-point data types can produce different results, even when the operands are numerically equivalent, and the difference can persist across type conversions, such as converting a double result to float.
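As a concrete illustration (the inputs are arbitrary, but the effect is typical): performing the same accumulation in float and in double, then converting the double result to float, need not reproduce the float-only computation, because rounding happened at different points along the way.

```c
#include <stdio.h>

int main(void)
{
    /* Accumulate 0.1 one hundred times, once in float and once in double. */
    float  fsum = 0.0f;
    double dsum = 0.0;
    for (int i = 0; i < 100; i++) {
        fsum += 0.1f;   /* each step rounds to float precision  */
        dsum += 0.1;    /* each step rounds to double precision */
    }

    /* Converting the double result to float afterwards does not
       recover the value computed entirely in float. */
    printf("float accumulation:           %.9g\n", fsum);
    printf("double accumulation -> float: %.9g\n", (float)dsum);
    return 0;
}
```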
There are additional details to consider as well, at the (machine) code-generation level, such as whether or when intermediate results are copied out of FPU registers into main memory (which may involve rounding) and the order in which operations are performed.
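Order matters because FP addition is not associative, so a reordering that an optimizer treats as harmless can change the result. A minimal sketch:

```c
#include <stdio.h>

int main(void)
{
    double big   = 1.0e16;
    double small = 1.0;

    /* Mathematically equal, but rounded differently depending on
       the order in which the additions are performed. */
    double left_to_right = (big + small) + small;
    double grouped_small = big + (small + small);

    printf("(big + small) + small = %.17g\n", left_to_right);
    printf("big + (small + small) = %.17g\n", grouped_small);
    return 0;
}
```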
We're using an algorithm that uses:
double + double
double + float
double exp(double)
float * float
If you want to minimize the differences in computed values, then start by choosing one floating-point data type, and using it everywhere. For consistency between Python and C implementations, that should be double.
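For the operations you list, that means promoting everything to double and staying there. The function below is a hypothetical stand-in for one step of your algorithm (the name, parameters, and structure are invented for illustration); the point is only that every operand, intermediate, and result is a double.

```c
#include <math.h>

/* Hypothetical single step of the algorithm, done entirely in double.
   Former float operands are converted once on entry and never
   converted back until the very end (if at all). */
double algorithm_step(double a, double b, float c_in, float d_in)
{
    double c = (double)c_in;   /* promote once */
    double d = (double)d_in;

    double s1 = a + b;         /* was: double + double    */
    double s2 = a + c;         /* was: double + float     */
    double e  = exp(s1);       /* was: double exp(double) */
    double p  = c * d;         /* was: float * float      */

    return s2 + e + p;
}
```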
You should also consider disabling any and all optimizations that might change the order in which FP operations are evaluated. That might mean all optimizations whatsoever. If you have options available in your C compiler to enforce strict IEEE conformance, then turn those on.
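Within the source itself, the standard FP_CONTRACT pragma forbids contracting expressions such as a*b + c into a single fused multiply-add, which would round only once and change the result. Not every compiler honors the pragma (GCC has historically relied on its command-line option instead), and the relevant strictness flags vary by toolchain (for example -ffp-contract=off on GCC/Clang, /fp:strict on MSVC), so treat these as starting points and check your compiler's documentation.

```c
/* Standard C pragma: do not contract multiply-add sequences into a
   fused multiply-add. Support varies; some compilers only obey the
   equivalent command-line option. */
#pragma STDC FP_CONTRACT OFF

double dot3(const double x[3], const double y[3])
{
    /* One rounding per multiplication and per addition. */
    return x[0] * y[0] + x[1] * y[1] + x[2] * y[2];
}
```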
You should furthermore test the equivalency of the exp() functions on all relevant platforms. You may need to provide your own implementation.
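One way to run such a test is to print the exact bit pattern of exp() for a fixed set of inputs on each platform and diff the output; two results are bit-identical exactly when the hexadecimal values match. A minimal sketch, assuming IEEE 754 binary64 doubles (link with -lm on typical Unix toolchains):

```c
#include <inttypes.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

/* Print value and exact bit pattern so exp() results from different
   platforms can be compared with a plain text diff. */
static void print_bits(double x)
{
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);   /* well-defined reinterpretation */
    printf("%.17g  0x%016" PRIx64 "\n", x, bits);
}

int main(void)
{
    /* A handful of representative inputs; extend this with the values
       your algorithm actually feeds to exp(). */
    const double inputs[] = { -20.0, -1.0, 0.0, 0.5, 1.0, 10.0, 700.0 };
    for (size_t i = 0; i < sizeof inputs / sizeof inputs[0]; i++) {
        print_bits(exp(inputs[i]));
    }
    return 0;
}
```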
Whatever you do, you should recognize that if your various implementations produce different results despite all being correct in some algorithmic sense, then that's a result in itself. It tells you something about the true precision of the computation, as implemented.
You must never forget that most computer FP operations produce approximate results, so even if you did manage to get all the implementations to produce identical results, that doesn't mean that those results are necessarily more right in an absolute sense than other nearby FP values. If numeric consistency is a requirement, then you ought to quantify that in terms of a specific precision of the results, implement your algorithm in a way that will deliver that precision, and ignore differences at precision higher than the one chosen.
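In practice that usually means accepting results that agree within an explicit tolerance rather than comparing with ==. The helper below is one common formulation, a combined absolute and relative test; the tolerance arguments are placeholders to be derived from your precision requirement, not recommendations.

```c
#include <math.h>
#include <stdbool.h>

/* Treat two results as equivalent if they agree within an absolute
   tolerance (useful near zero) or a relative tolerance (elsewhere).
   Choose rel_tol and abs_tol from the precision your application
   actually requires. */
static bool results_agree(double a, double b, double rel_tol, double abs_tol)
{
    double diff  = fabs(a - b);
    double scale = fmax(fabs(a), fabs(b));
    return diff <= fmax(abs_tol, rel_tol * scale);
}
```

Python's math.isclose() applies essentially the same criterion, so the acceptance test can be shared between the C and Python implementations.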