No, this is not another "Why is (1/3.0)*3 != 1" question.
I've been reading a lot about floating point lately; specifically, how the same calculation might give different results on different architectures or with different optimization settings.
This is a problem for video games that store replays or are peer-to-peer networked (as opposed to server-client): such games rely on all clients generating exactly the same results every time they run the program - a small discrepancy in one floating-point calculation can lead to a drastically different game state on different machines (or even on the same machine!)
This happens even amongst processors that "follow" IEEE-754, primarily because some processors (namely x86) use double-extended precision. That is, they use 80-bit registers for all the calculations, then round the result down to 64 or 32 bits, which can produce a different answer than a machine that performs every calculation at 64 or 32 bits.
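To make this concrete, here is the sort of experiment I've been playing with. It is purely illustrative: RoundTo64Bits is just my own helper name, and whether the two printed values actually differ depends on the JIT, the architecture, and the operands (on the x64 JIT, which uses SSE2 with strict 64-bit doubles, they should be identical).

```csharp
using System;
using System.Runtime.CompilerServices;

class ExtendedPrecisionDemo
{
    // Passing a double through a non-inlined call forces the value out of any
    // 80-bit x87 register and into a 64-bit slot, rounding it to double precision.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static double RoundTo64Bits(double x) => x;

    static void Main()
    {
        double a = 0.1, b = 0.3;

        // The intermediate product may be kept at 80-bit precision here...
        double kept = (a * b) / b;

        // ...while here it is rounded to 64 bits before the division.
        double narrowed = RoundTo64Bits(a * b) / b;

        // On the x64 JIT these should match; on the old 32-bit x87 JIT
        // they can differ in the last bit.
        Console.WriteLine(kept == narrowed);
        Console.WriteLine("{0:R} vs {1:R}", kept, narrowed);
    }
}
```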
I've seen several solutions to this problem online, but all for C++, not C#:
- Disable double extended-precision mode (so that all double calculations use IEEE-754 64 bits) using _controlfp_s (Windows), _FPU_SETCW (Linux?), or fpsetprec (BSD). (A P/Invoke sketch of this follows the list.)
- Always run the same compiler with the same optimization settings, and require all users to have the same CPU architecture (no cross-platform play). Because my "compiler" is actually the JIT, which may optimize differently every time the program is run, I don't think this is possible.
- Use fixed-point arithmetic, and avoid float and double altogether. decimal would work for this purpose, but would be much slower, and none of the System.Math library functions support it. (A rough fixed-point sketch also follows the list.)
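For reference, I assume the first option would look roughly like this from C# via P/Invoke. The constants come from the MSVC float.h; which CRT DLL actually exports _controlfp_s in a given process is an assumption on my part, and I have no idea whether the CLR respects the x87 control word or resets it whenever it likes.

```csharp
using System;
using System.Runtime.InteropServices;

static class FpuControl
{
    // Signature from the MSVC CRT: errno_t _controlfp_s(unsigned*, unsigned, unsigned).
    // Assumption: the process has msvcrt.dll loaded and it exports this function.
    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern int _controlfp_s(out uint currentControl, uint newControl, uint mask);

    // Constants from float.h.
    private const uint _MCW_PC = 0x00030000; // precision-control mask
    private const uint _PC_53  = 0x00010000; // round intermediates to a 53-bit (double) mantissa

    public static void ForceDoublePrecision()
    {
        // Only affects x87 code paths; SSE2 code and the 64-bit JIT ignore it,
        // and the runtime is free to change the control word back at any time.
        uint current;
        int err = _controlfp_s(out current, _PC_53, _MCW_PC);
        if (err != 0)
            throw new InvalidOperationException("_controlfp_s failed with error " + err);
    }
}
```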
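And the third option, as I understand it, would boil down to something like this hypothetical 16.16 fixed-point struct: everything is plain integer arithmetic, so it should be bit-identical on every machine, at the cost of range and precision, and of having to reimplement things like Sqrt and Sin myself.

```csharp
using System;

// Minimal 16.16 fixed-point sketch: value = raw / 2^16, all math done in integers.
public readonly struct Fix16 : IEquatable<Fix16>
{
    private const int Shift = 16;
    private readonly int raw;

    private Fix16(int raw) => this.raw = raw;

    public static Fix16 FromInt(int value) => new Fix16(value << Shift);
    public static Fix16 FromRaw(int raw) => new Fix16(raw);

    public static Fix16 operator +(Fix16 a, Fix16 b) => new Fix16(a.raw + b.raw);
    public static Fix16 operator -(Fix16 a, Fix16 b) => new Fix16(a.raw - b.raw);

    // Widen to long so the intermediate product cannot overflow,
    // then shift back down to the 16.16 scale.
    public static Fix16 operator *(Fix16 a, Fix16 b) => new Fix16((int)(((long)a.raw * b.raw) >> Shift));
    public static Fix16 operator /(Fix16 a, Fix16 b) => new Fix16((int)(((long)a.raw << Shift) / b.raw));

    public bool Equals(Fix16 other) => raw == other.raw;
    public override bool Equals(object obj) => obj is Fix16 f && Equals(f);
    public override int GetHashCode() => raw;

    // Floating point here is for display only; it never feeds back into game state.
    public override string ToString() => (raw / 65536.0).ToString();
}
```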
So, is this even a problem in C#? What if I only intend to support Windows (not Mono)?
If it is, is there any way to force my program to run at normal double-precision?
If not, are there any libraries that would help keep floating-point calculations consistent?