floating point - Confused by loss of float precision in financial application with python Decimal package

I'm having a problem in a financial application when I calculate 70000.0*5.65500*18.0/36000.0 and compare the result with another number.

The exact result is 197.925.

When I use Decimal, the result depends on the order of operations:

from decimal import Decimal
from fractions import Fraction
Decimal('70000.0')*Decimal('5.65500')*Decimal('18.0')/Decimal('36000.0')

The result is Decimal('197.925000')

Decimal('70000.0')*Decimal('5.65500')/Decimal('36000.0')*Decimal('18.0')

The result is Decimal('197.9249999999999999999999999')

When I use Decimal + Fraction, the result is still inaccurate:

Decimal('70000.0')*Decimal('5.65500')*Decimal(float(Fraction(18, 36000)))

The result is Decimal('197.9250000000000041201417278')

When I use native floats, the order of operations does not affect the result, but the result is still inaccurate:

Decimal(70000.0*5.65500*18.0/36000.0)

The result is Decimal('197.92500000000001136868377216160297393798828125')

Decimal(70000.0/36000.0*5.65500*18.0)

The result is Decimal('197.92500000000001136868377216160297393798828125')

And when I treat Decimal(1.0/36000.0) or Decimal(5.655/36000.0) as a multiplier, the order barely affects the result, but the result is still inaccurate:

Decimal('70000.0')*Decimal('5.65500')*Decimal('18.0')*Decimal(1.0/36000.0)

The result is Decimal('197.9250000000000094849096025')

Decimal('70000.0')*Decimal('5.65500')*Decimal(1.0/36000.0)*Decimal('18.0')

The result is Decimal('197.9250000000000094849096026')

Decimal('70000.0')*Decimal(5.655/36000.0)*Decimal('18.0')

The result is Decimal('197.9250000000000182364540136')

Decimal('70000.0')*Decimal('18.0')*Decimal(5.655/36000.0)

The result is Decimal('197.9250000000000182364540136')

If there is no way to achieve absolute accuracy, a fault tolerance may be a way out: compare the two numbers within a tolerance.

The native float result is off by about 1E-14:

Decimal(70000.0/36000.0*5.65500*18.0) - Decimal('197.925000')

The result is Decimal('1.136868377216160297393798828E-14')

Decimal with the default settings is off by about 1E-25:

Decimal('70000.0')*Decimal('5.65500')/Decimal('36000.0')*Decimal('18.0') - Decimal('197.925000')

The result is Decimal('-1E-25')
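As a sketch, the comparison I have in mind looks something like this (the is_close helper and the tolerance value are only illustrative, not a recommendation):

from decimal import Decimal

def is_close(a, b, tol=Decimal('1E-20')):
    # Treat a and b as equal if they differ by no more than tol.
    return abs(a - b) <= tol

lhs = Decimal('70000.0') * Decimal('5.65500') / Decimal('36000.0') * Decimal('18.0')
print(is_close(lhs, Decimal('197.925')))  # True: the difference above is -1E-25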

The precision of Decimal can be set by the user:

import decimal
from decimal import Decimal, Context
decimal.setcontext(Context(prec=60))
Decimal('70000.0')*Decimal('5.65500')/Decimal('36000.0')*Decimal('18.0')

The result is Decimal('197.924999999999999999999999999999999999999999999999999999999')


Decimal('70000.0')*Decimal('5.65500')*Decimal('18.0')/Decimal('36000.0')

The result is Decimal('197.925000')


Decimal(70000.0/36000.0*5.65500*18.0) - Decimal('197.925000')

The result is Decimal('1.136868377216160297393798828125E-14')


Decimal('70000.0')*Decimal('5.65500')/Decimal('36000.0')*Decimal('18.0') - Decimal('197.925000')

The result is Decimal('-1E-57')

In financial applications, in order to be absolutely safe, is there a recommended fault tolerance? Is the default Decimal precision with a fault tolerance of 1E-20 enough?

question from:https://stackoverflow.com/questions/65923108/confused-by-loss-of-float-precision-in-financial-application-with-python-decimal


1 Reply


Just like floating-point arithmetic does generally, Python's Decimal is not exact and is subject to round-off error. Compared to float, the key difference with Decimal is that it uses a base-10 representation, so that values like "0.1" are exactly representable.
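A quick way to see that difference:

from decimal import Decimal

print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal('0.1'))  # 0.1 -- the string constructor is exact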

In your first example,

Decimal('70000.0')*Decimal('5.65500')/Decimal('36000.0')*Decimal('18.0')

The result is Decimal('197.9249999999999999999999999')

In this computation, Decimal is using its default precision of 28 places. Note that the result is printed with 28 significant digits, and is correct up to an off-by-one difference in the lowest digit. In other words, the computation has a round-off error of 1 "unit in the last place" (ULP). It is typical for an expression involving a handful of floating-point operations to come out with a few ULPs of round-off error. So this is working as intended.
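Both of those facts are easy to check; getcontext and as_tuple are standard parts of the decimal module:

import decimal
from decimal import Decimal

print(decimal.getcontext().prec)  # 28, the default working precision

r = Decimal('70000.0') * Decimal('5.65500') / Decimal('36000.0') * Decimal('18.0')
print(len(r.as_tuple().digits))   # 28 significant digits in the printed result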

I see you have already tried increasing Decimal's precision. This reduces the magnitude of the round-off error, but yes, there is still round-off error.
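For example, decimal.localcontext raises the precision for just one block of code without touching the global context; the error moves further out but is still there:

import decimal
from decimal import Decimal

with decimal.localcontext() as ctx:
    ctx.prec = 60  # temporarily work with 60 significant digits
    r = Decimal('70000.0') * Decimal('5.65500') / Decimal('36000.0') * Decimal('18.0')

print(r)  # 197.924999999999999999999999999999999999999999999999999999999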

If you don't like to see the trailing "9"s on the end of the printed result, use, for instance, round(result, 12) to round away the bottom few digits.
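For example (round works directly on a Decimal and returns a Decimal; quantize is the Decimal-native way to fix the number of decimal places):

from decimal import Decimal

result = Decimal('70000.0') * Decimal('5.65500') / Decimal('36000.0') * Decimal('18.0')
print(result)                                # 197.9249999999999999999999999
print(round(result, 12))                     # 197.925000000000
print(result.quantize(Decimal('1.000000')))  # 197.925000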

Or, if exact computation is a must, which I could understand for a financial application, then don't use Decimal or other floating-point representations. Reformulate the computation as arithmetic with Fraction:

Fraction(70000) * Fraction(5655, 1000) * Fraction(18) / Fraction(36000)

This exactly produces the correct answer, 7917/40 = 197.925.
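If you need the final value as a Decimal (for display, or to store a fixed number of places), convert the exact Fraction only at the very end, for instance:

from fractions import Fraction
from decimal import Decimal

exact = Fraction(70000) * Fraction(5655, 1000) * Fraction(18) / Fraction(36000)
print(exact)                                                  # 7917/40
print(Decimal(exact.numerator) / Decimal(exact.denominator))  # 197.925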

See also the Python documentation Floating Point Arithmetic: Issues and Limitations.

