This is a follow-up to “Testing for floating-point value equality: Is there a standard name for the ‘precision’ constant?”
There is a very similar question, “Double.Epsilon for equality, greater than, less than, less than or equal to, greater than or equal to”.
It is well known that an equality test for two floating-point values x and y should look more like this (rather than a straightforward =):
abs(x - y) < epsilon, where epsilon is some very small value.
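In C# terms, that amounts to something like the following minimal sketch (the helper name ApproximatelyEquals and the epsilon parameter are mine, purely for illustration):

    using System;

    static class FloatCompare
    {
        // Compare two doubles by absolute difference against a caller-chosen epsilon.
        public static bool ApproximatelyEquals(double x, double y, double epsilon)
        {
            return Math.Abs(x - y) < epsilon;
        }
    }

For example, ApproximatelyEquals(0.1 + 0.2, 0.3, 1e-9) returns true, even though 0.1 + 0.2 == 0.3 is false in double arithmetic.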
How should one choose a value for epsilon?
It would obviously be preferable to choose as small a value for epsilon as possible, in order to get the highest possible precision for the equality check.
As an example, the .NET framework offers a constant System.Double.Epsilon (= 4.94066 × 10⁻³²⁴), which represents the smallest positive System.Double value that is greater than zero.
However, it turns out that this particular value can't be reliably used as epsilon, since:

    0 + System.Double.Epsilon ≠ 0
    1 + System.Double.Epsilon = 1   (!)

which is, if I understand correctly, because that constant is less than machine epsilon.
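Both results are easy to reproduce; a minimal sketch (the comments note what IEEE 754 double arithmetic actually produces):

    using System;

    class EpsilonDemo
    {
        static void Main()
        {
            // The smallest positive double is still distinguishable from zero ...
            Console.WriteLine(0.0 + Double.Epsilon == 0.0);  // False

            // ... but it is far smaller than the gap between 1.0 and its
            // neighbours, so adding it to 1.0 rounds straight back to 1.0.
            Console.WriteLine(1.0 + Double.Epsilon == 1.0);  // True
        }
    }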
→ Is this correct?
→ Does this also mean that I can reliably use epsilon := machine epsilon for equality tests?
(Removed these two questions, as they are already adequately answered by the second SO question linked to above.)
The linked Wikipedia article says that for 64-bit floating-point numbers (i.e. the double type in many languages), machine epsilon is equal to 2⁻⁵³, or approx. 0.000000000000000111 (a number with 15 zeroes after the decimal point).
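That value can be checked numerically; a small sketch, assuming the 2⁻⁵³ definition of machine epsilon (the unit roundoff), and noting that 2⁻⁵² is the spacing between 1.0 and the next representable double:

    using System;

    class MachineEpsilonDemo
    {
        static void Main()
        {
            double u = Math.Pow(2, -53);    // unit roundoff, approx. 1.11e-16
            double ulp1 = Math.Pow(2, -52); // gap between 1.0 and the next double, approx. 2.22e-16

            Console.WriteLine(1.0 + u == 1.0);    // True: the halfway case rounds back to 1.0 (round-to-even)
            Console.WriteLine(1.0 + ulp1 == 1.0); // False: exactly one ulp above 1.0
        }
    }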
→ Does it follow from this that all 64-bit floating-point values are guaranteed to be accurate to 14 (if not 15) significant decimal digits?
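As a data point, the usual statement is that a double carries 15 to 17 significant decimal digits: 15 digits survive a round trip from decimal through double and back, while 17 digits are needed to recover the exact bit pattern. A quick sketch using standard .NET numeric format strings (output noted in the comments):

    using System;

    class DigitsDemo
    {
        static void Main()
        {
            double third = 1.0 / 3.0;
            Console.WriteLine(third.ToString("G15")); // 0.333333333333333   (15 significant digits)
            Console.WriteLine(third.ToString("G17")); // 0.33333333333333331 (17 digits recover the exact value)
        }
    }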