0 votes
340 views
in Technique by (71.8m points)

c# - Floating point inconsistency between expression and assigned object

This surprised me - the same arithmetic gives different results depending on how it's executed:

> 0.1f+0.2f==0.3f
False

> var z = 0.3f;
> 0.1f+0.2f==z
True

> 0.1f+0.2f==(dynamic)0.3f
True

(Tested in LINQPad)

What's going on?


Edit: I understand why floating point arithmetic is imprecise, but not why it would be inconsistent.

The venerable C reliably confirms that 0.1 + 0.2 == 0.3 holds for single-precision floats, but not for double-precision values.
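For reference, here is a minimal C# sketch of the same check (variable names are mine; the exact results can vary with compiler, JIT and build configuration, as the answer below explains):

float fx = 0.1f, fy = 0.2f, fz = 0.3f;
Console.WriteLine(fx + fy == fz);   // typically True: the single-precision sum rounds to the same float as 0.3f

double dx = 0.1, dy = 0.2, dz = 0.3;
Console.WriteLine(dx + dy == dz);   // False: 0.1 + 0.2 evaluates to 0.30000000000000004 as a double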


1 Reply

0 votes
by (71.8m points)

I strongly suspect you may find that you get different results running this code with and without the debugger, and in release configuration vs in debug configuration.

In the first version, you're comparing two expressions. The C# language allows those expressions to be evaluated in higher precision arithmetic than the source types.

In the second version, you're assigning the addition result to a local variable. In some scenarios, that will force the result to be truncated down to 32 bits - leading to a different result. In other scenarios, the CLR or C# compiler will realize that it can optimize away the local variable.

From section 4.1.6 of the C# 4 spec:

Floating point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating point type with greater range and precision than the double type, and implicitly perform all floating point operations with the higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating point operations with less precision. Rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating point operations. Other than delivering more precise results, this rarely has any measurable effects.
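As a rough illustration of that paragraph (a sketch only - whether the intermediate result stays in higher precision depends on the JIT, the architecture and debug/release settings; the variable names are just for illustration):

// Comparing the raw expression: the compiler/JIT is allowed to keep 0.1f + 0.2f in higher precision
bool direct = 0.1f + 0.2f == 0.3f;

// Storing 0.3f in a local first matches the question's second case
float z = 0.3f;
bool viaLocal = 0.1f + 0.2f == z;

// An explicit cast is the usual way to force the result down to real float precision before comparing
bool truncated = (float)(0.1f + 0.2f) == 0.3f;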

EDIT: I haven't tried compiling this, but in the comments, Chris says the first form isn't being evaluated at execution time at all. The above can still apply (I've tweaked my wording slightly) - it's just shifted the evaluation time of a constant from execution time to compile-time. So long as it behaves the same way as a valid evaluation, that seems okay to me - so the compiler's own constant expression evaluation can use higher-precision arithmetic too.
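To make the compile-time angle concrete, here is a hypothetical sketch (the constant name is mine): because every operand is a literal, the whole comparison is a constant expression, so the C# compiler folds it itself rather than leaving it to the JIT.

// The whole comparison is a constant expression, so the compiler
// evaluates it at compile time and bakes the resulting bool into the IL.
const bool folded = 0.1f + 0.2f == 0.3f;

// At run time only the pre-computed constant is loaded; which precision
// the compiler used for the fold is an implementation detail.
Console.WriteLine(folded);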



...