Formatting doubles for output in C#

Running a quick experiment related to Is double Multiplication Broken in .NET? and reading a couple of articles on C# string formatting, I thought that this:

{
    double i = 10 * 0.69;
    Console.WriteLine(i);
    Console.WriteLine(String.Format("  {0:F20}", i));
    Console.WriteLine(String.Format("+ {0:F20}", 6.9 - i));
    Console.WriteLine(String.Format("= {0:F20}", 6.9));
}

Would be the C# equivalent of this C code:

{
    double i = 10 * 0.69;

    printf ( "%f
", i );
    printf ( "  %.20f
", i );
    printf ( "+ %.20f
", 6.9 - i );
    printf ( "= %.20f
", 6.9 );
}

However, the C# code produces the output:

6.9
  6.90000000000000000000
+ 0.00000000000000088818
= 6.90000000000000000000

even though i shows up as 6.89999999999999946709 (rather than 6.9) in the debugger.

compared with the C output, which shows the precision requested by the format:

6.900000                          
  6.89999999999999946709          
+ 0.00000000000000088818          
= 6.90000000000000035527          

What's going on?

( Microsoft .NET Framework Version 3.51 SP1 / Visual Studio C# 2008 Express Edition )


I have a background in numerical computing and experience implementing interval arithmetic - a technique for estimating errors due to the limits of precision in complicated numerical systems - on various platforms. Don't try to explain the storage precision - in this case it's a difference of one ULP of a 64-bit double.

To get the bounty, I want to know how (or whether) .NET can format a double to the requested precision, as is visible in the C code.

Question&Answers:os

与恶龙缠斗过久,自身亦成为恶龙;凝视深渊过久,深渊将回以凝视…
Welcome To Ask or Share your Answers For Others

1 Reply


The problem is that .NET will always round a double to 15 significant decimal digits before applying your formatting, regardless of the precision requested by your format and regardless of the exact decimal value of the binary number.
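For illustration (this sketch isn't from the original answer), you can see the 15-digit rounding by asking for more significant digits with the "G17" or round-trip "R" specifiers, which bypass the 15-digit default; the exact "R" output can differ between runtimes, so the comments show the expected .NET Framework result:

double i = 10 * 0.69;

// "F20" asks for 20 decimal places, but the value has already been rounded
// to 15 significant digits, so the extra places come back as zeros.
Console.WriteLine(i.ToString("F20"));   // 6.90000000000000000000

// "G17" requests 17 significant digits - enough to round-trip any double -
// so the difference from 6.9 becomes visible.
Console.WriteLine(i.ToString("G17"));   // expected: 6.8999999999999995

// "R" (round-trip) chooses a string that parses back to the same bits.
Console.WriteLine(i.ToString("R"));     // expected: 6.8999999999999995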

I'd guess that the Visual Studio debugger has its own format/display routines that directly access the internal binary number, hence the discrepancies between your C# code, your C code and the debugger.

There's nothing built in that will give you the exact decimal value of a double, or let you format a double to an arbitrary number of decimal places, but you could do this yourself by picking apart the internal binary representation and rebuilding it as a string representation of the decimal value.
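As a rough illustration of that approach, here's a sketch (not from the original answer) that pulls the sign, exponent and mantissa out of the 64-bit pattern and rebuilds the exact decimal value; it assumes .NET 4 or later for System.Numerics.BigInteger and ignores infinities and NaN:

using System;
using System.Numerics;

static class ExactDouble
{
    // Rebuilds the exact decimal value of a finite double by taking the
    // 64-bit pattern apart: sign bit, 11-bit exponent, 52-bit mantissa.
    public static string ToExactString(double d)
    {
        if (d == 0.0) return "0";

        long bits = BitConverter.DoubleToInt64Bits(d);
        bool negative = bits < 0;
        int exponent = (int)((bits >> 52) & 0x7FF);
        long mantissa = bits & 0xFFFFFFFFFFFFFL;

        if (exponent == 0)
            exponent = 1;               // subnormal: no implicit leading 1
        else
            mantissa |= 1L << 52;       // normal: restore the implicit 1

        exponent -= 1075;               // value = mantissa * 2^exponent

        BigInteger m = mantissa;
        string digits;
        int fractionDigits = 0;

        if (exponent >= 0)
        {
            digits = (m << exponent).ToString();
        }
        else
        {
            // mantissa / 2^k == (mantissa * 5^k) / 10^k, so the decimal
            // expansion is exact and has k digits after the point.
            fractionDigits = -exponent;
            digits = (m * BigInteger.Pow(5, fractionDigits)).ToString();
        }

        if (fractionDigits > 0)
        {
            digits = digits.PadLeft(fractionDigits + 1, '0');
            digits = digits.Insert(digits.Length - fractionDigits, ".");
            digits = digits.TrimEnd('0').TrimEnd('.');   // cosmetic only
        }

        return (negative ? "-" : "") + digits;
    }
}

// Console.WriteLine(ExactDouble.ToExactString(10 * 0.69));
// -> 6.89999999999999946709294817992486059665679931640625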

Alternatively, you could use Jon Skeet's DoubleConverter class (linked to from his "Binary floating point and .NET" article). This has a ToExactString method which returns the exact decimal value of a double. You could easily modify this to enable rounding of the output to a specific precision.

double i = 10 * 0.69;
Console.WriteLine(DoubleConverter.ToExactString(i));
Console.WriteLine(DoubleConverter.ToExactString(6.9 - i));
Console.WriteLine(DoubleConverter.ToExactString(6.9));

// 6.89999999999999946709294817992486059665679931640625
// 0.00000000000000088817841970012523233890533447265625
// 6.9000000000000003552713678800500929355621337890625
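If the goal is the fixed 20-place output of the C code rather than the full expansion, one possible shortcut (again a sketch, not from the original answer) is to push the exact string through System.Decimal, which keeps 28-29 significant digits - more than enough here - and then use decimal formatting; this assumes the value fits in decimal's range:

double i = 10 * 0.69;

// Parse the exact expansion into a decimal (only 28-29 significant digits
// survive), then let decimal formatting produce the requested 20 places.
decimal exact = decimal.Parse(DoubleConverter.ToExactString(i),
                              System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine(exact.ToString("F20"));
// expected: 6.89999999999999946709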
