Although the other answer on this question links to a correct explanation of the concept of significant digits in general, NSNumberFormatter's {uses|minimum|maximum}SignificantDigits properties have nothing to do with the precision of calculations.
The significant digits of a number are the digits from the first nonzero digit to the last nonzero digit, inclusive; trailing zeroes also count when they are part of the fractional part (so 1.230 has four significant digits, whereas the trailing zero in 1230 is normally not counted). Restricting output to a specific number of significant digits is useful when a relative (percentage) error is known or desired.
First of all, minimumSignificantDigits and maximumSignificantDigits have no effect unless usesSignificantDigits is set to YES. When it is, their effect is probably most easily explained using examples, starting from a formatter configured as in the sketch below.
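For concreteness, all of the cases below assume a formatter configured roughly like this (a minimal sketch; the en_US_POSIX locale is assumed here only so the decimal separator is always "."):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
        formatter.numberStyle = NSNumberFormatterDecimalStyle;
        // Assumed only to pin the decimal separator to "." regardless of the system locale.
        formatter.locale = [NSLocale localeWithLocaleIdentifier:@"en_US_POSIX"];
        // Without this, minimumSignificantDigits / maximumSignificantDigits are ignored.
        formatter.usesSignificantDigits = YES;

        // ...the case-specific settings and -stringFromNumber: calls from the
        // sketches further down go here.
    }
    return 0;
}
```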
Let's take the numbers a = 123.4567, b = 1.23, and c = 0.00123:
Assuming minimumSignificantDigits = 0, 1, or 2 (these cases are sketched in code after this list):
If maximumSignificantDigits = 3, then a will be formatted as "123", b as "1.23", and c as "0.00123".
If maximumSignificantDigits = 4, then a will be formatted as "123.5", b as "1.23", and c as "0.00123".
If maximumSignificantDigits = 2, then a will be formatted as "120", b as "1.2", and c as "0.0012".
Assuming minimumSignificantDigits = 4 (this case is sketched in code below):
If maximumSignificantDigits = 4, then a will be formatted as "123.5", b as "1.230", and c as "0.001230".
Note: The 4 → 5 conversions happen because of the default round-to-nearest rounding mode: the digit following the 4 in a is a 5, so the 4 is rounded up.