This is not a question about how to compare two BigDecimal objects - I know that you can use compareTo instead of equals to do that, since equals is documented as:
Unlike compareTo, this method considers two BigDecimal objects equal only if they are equal in value and scale (thus 2.0 is not equal to 2.00 when compared by this method).
The question is: why has equals been specified in this seemingly counter-intuitive manner? That is, why is it important to be able to distinguish between 2.0 and 2.00?
It seems likely that there must be a reason for this, since the Comparable documentation, which specifies the compareTo method, states:
It is strongly recommended (though not required) that natural orderings be consistent with equals
I imagine there must be a good reason for ignoring this recommendation.