Running a quick experiment related to "Is double Multiplication Broken in .NET?" and reading a couple of articles on C# string formatting, I thought that this:
{
    double i = 10 * 0.69;
    Console.WriteLine(i);
    Console.WriteLine(String.Format(" {0:F20}", i));
    Console.WriteLine(String.Format("+ {0:F20}", 6.9 - i));
    Console.WriteLine(String.Format("= {0:F20}", 6.9));
}
would be the C# equivalent of this C code:
{
    double i = 10 * 0.69;
    printf ( "%f\n", i );
    printf ( " %.20f\n", i );
    printf ( "+ %.20f\n", 6.9 - i );
    printf ( "= %.20f\n", 6.9 );
}
However, the C# code produces this output:
6.9
6.90000000000000000000
+ 0.00000000000000088818
= 6.90000000000000000000
even though the debugger shows i as 6.89999999999999946709 (rather than 6.9).
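(As an aside, and only as a sketch of what I'd expect rather than verified output: the round-trip "R" specifier, or "G17", does at least reveal that the stored value is not 6.9, though it still doesn't give the 20 decimal places requested above.)
Console.WriteLine(i.ToString("R"));    // expected: 6.8999999999999995
Console.WriteLine(i.ToString("G17"));  // expected: 6.8999999999999995
Console.WriteLine(i.ToString("F20"));  // still prints 6.90000000000000000000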
The C output, by contrast, shows the precision requested by the format specifier:
6.900000
6.89999999999999946709
+ 0.00000000000000088818
= 6.90000000000000035527
What's going on?
(Microsoft .NET Framework Version 3.51 SP1 / Visual Studio C# 2008 Express Edition)
I have a background in numerical computing and experience implementing interval arithmetic - a technique for estimating errors due to the limits of precision in complicated numerical systems - on various platforms. Don't try to explain the storage precision - in this case it's a difference of one ULP of a 64-bit double.
To get the bounty, I want to know how (or whether) .NET can format a double to the requested precision, as the C code does.
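For context on the one-ULP claim, this is the kind of check I mean (a sketch; the hex values and the difference of 1 are what I expect, not quoted program output):
long productBits = BitConverter.DoubleToInt64Bits(10 * 0.69);  // the value i holds
long literalBits = BitConverter.DoubleToInt64Bits(6.9);        // the 6.9 literal
Console.WriteLine(productBits.ToString("X16"));  // expected: 401B999999999999
Console.WriteLine(literalBits.ToString("X16"));  // expected: 401B99999999999A
Console.WriteLine(literalBits - productBits);    // expected: 1, i.e. one ULP apart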