I'm having a problem in a financial application when I calculate 70000.0*5.65500*18.0/36000.0
and compare the result with another number.
The exact result is 197.925.
When I use Decimal, the result depends on the order of operations:
from decimal import Decimal
from fractions import Fraction
Decimal('70000.0')*Decimal('5.65500')*Decimal('18.0')/Decimal('36000.0')
The result is Decimal('197.925000')
Decimal('70000.0')*Decimal('5.65500')/Decimal('36000.0')*Decimal('18.0')
The result is Decimal('197.9249999999999999999999999')
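As far as I can tell, the order matters because dividing first produces a non-terminating quotient that Decimal must round to the context precision (28 significant digits by default) before the final multiplication. A small sketch of that intermediate step:
from decimal import Decimal
# 70000.0 * 5.65500 = 395850 exactly, but 395850 / 36000 = 10.9958333...
# repeats forever, so Decimal rounds it to 28 significant digits here.
intermediate = Decimal('70000.0') * Decimal('5.65500') / Decimal('36000.0')
print(intermediate)                    # the repeating 3s are cut off
print(intermediate * Decimal('18.0'))  # the tiny rounding error then shows up as ...999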
When I use Decimal plus a Fraction converted through float, the result is still inaccurate:
Decimal('70000.0')*Decimal('5.65500')*Decimal(float(Fraction(18, 36000)))
The result is Decimal('197.9250000000000041201417278')
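If the goal is an exact result, one option (just a sketch, not what the original code does) is to keep the whole computation in Fraction, which accepts Decimal directly, and only convert back to Decimal at the end:
from decimal import Decimal
from fractions import Fraction
# All factors are exact rationals, so no rounding happens anywhere.
exact = Fraction(Decimal('70000.0')) * Fraction(Decimal('5.65500')) * Fraction(18, 36000)
print(exact)                                                  # 7917/40, i.e. exactly 197.925
print(Decimal(exact.numerator) / Decimal(exact.denominator))  # Decimal('197.925')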
When I use native floats, the order of operations does not affect the result, but the result is still inaccurate:
Decimal(70000.0*5.65500*18.0/36000.0)
The result is Decimal('197.92500000000001136868377216160297393798828125')
Decimal(70000.0/36000.0*5.65500*18.0)
The result is Decimal('197.92500000000001136868377216160297393798828125')
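The long tail of digits comes from the float itself, not from Decimal: constructing Decimal from a float exposes the exact binary value stored in the double. A quick illustration:
from decimal import Decimal
print(Decimal(5.655))    # the nearest binary64 value to 5.655, slightly off
print(Decimal('5.655'))  # the intended decimal value, exactly 5.655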
And when I treat Decimal(1.0/36000.0) or Decimal(5.655/36000.0) as a single multiplier, the order barely affects the result, but the result is still inaccurate:
Decimal('70000.0')*Decimal('5.65500')*Decimal('18.0')*Decimal(1.0/36000.0)
The result is Decimal('197.9250000000000094849096025')
Decimal('70000.0')*Decimal('5.65500')*Decimal(1.0/36000.0)*Decimal('18.0')
The result is Decimal('197.9250000000000094849096026')
Decimal('70000.0')*Decimal(5.655/36000.0)*Decimal('18.0')
The result is Decimal('197.9250000000000182364540136')
Decimal('70000.0')*Decimal('18.0')*Decimal(5.655/36000.0)
The result is Decimal('197.9250000000000182364540136')
If there is no way to achieve absolute accuracy, a tolerance may be a way out: compare two numbers within a tolerance.
The native float result is off by about 1E-14:
Decimal(70000.0/36000.0*5.65500*18.0) - Decimal('197.925000')
The result is Decimal('1.136868377216160297393798828E-14')
The Decimal result with the default settings is off by about 1E-25:
Decimal('70000.0')*Decimal('5.65500')/Decimal('36000.0')*Decimal('18.0') - Decimal('197.925000')
The result is Decimal('-1E-25')
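A minimal sketch of the tolerance comparison described above (the helper name decimals_equal and the tolerance 1E-20 are just for illustration, not a recommendation):
from decimal import Decimal
def decimals_equal(a, b, tol=Decimal('1E-20')):
    # Treat a and b as equal when they differ by less than tol.
    return abs(a - b) < tol
lhs = Decimal('70000.0') * Decimal('5.65500') / Decimal('36000.0') * Decimal('18.0')
print(decimals_equal(lhs, Decimal('197.925000')))  # True: the difference here is only -1E-25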
The precision of Decimal can be set by the user:
import decimal
from decimal import Decimal, Context
decimal.setcontext(Context(prec=60))
Decimal('70000.0')*Decimal('5.65500')/Decimal('36000.0')*Decimal('18.0')
The result is Decimal('197.924999999999999999999999999999999999999999999999999999999')
Decimal('70000.0')*Decimal('5.65500')*Decimal('18.0')/Decimal('36000.0')
The result is Decimal('197.925000')
Decimal(70000.0/36000.0*5.65500*18.0) - Decimal('197.925000')
The result is Decimal('1.136868377216160297393798828125E-14')
Decimal('70000.0')*Decimal('5.65500')/Decimal('36000.0')*Decimal('18.0') - Decimal('197.925000')
The result is Decimal('-1E-57')
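As an aside (a sketch, assuming the higher precision should not leak into the rest of the program), decimal.localcontext can scope the precision change to one block instead of replacing the global context:
import decimal
from decimal import Decimal
with decimal.localcontext() as ctx:
    ctx.prec = 60  # only applies inside this with-block
    result = Decimal('70000.0') * Decimal('5.65500') / Decimal('36000.0') * Decimal('18.0')
print(result)  # still ends in ...999, since the intermediate quotient never terminates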
In financial applications, in order to be absolutely safe, is there a recommended tolerance? Is the default Decimal precision with a tolerance of 1E-20 enough?
question from:
https://stackoverflow.com/questions/65923108/confused-by-loss-of-float-precision-in-financial-application-with-python-decimal