I remember this behavior was a somewhat last-minute change. In the early betas of .NET 2.0, Nullable<T> was a "normal" value type: boxing a null-valued int? turned it into a boxed int? with a boolean flag. I think the reason they chose the current approach instead is consistency. Say:
```csharp
int? test = null;
object obj = test;

if (test != null)
    Console.WriteLine("test is not null");

if (obj != null)
    Console.WriteLine("obj is not null");
```
With the former approach (boxing null -> a boxed Nullable<T>), you wouldn't get "test is not null", but you would get "obj is not null", which is weird.
Additionally, if they had boxed a nullable value to a boxed Nullable<T>:

```csharp
int? val = 42;
object obj = val;

if (obj != null) {
    // Our object is not null, so intuitively it's an `int` value:
    int x = (int)obj; // ...but this would have failed.
}
```
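Under the behavior that actually shipped, boxing a nullable that has a value produces a boxed int (there is no "boxed Nullable<int>" at runtime), so both casts succeed; boxing a null-valued nullable yields a plain null reference. A minimal sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        int? val = 42;
        object obj = val;                 // boxes the underlying int, not Nullable<int>

        Console.WriteLine(obj.GetType()); // System.Int32
        Console.WriteLine((int)obj);      // 42: unboxing to int works
        Console.WriteLine((int?)obj);     // 42: unboxing to int? works too

        int? none = null;
        object obj2 = none;               // boxing a null-valued nullable yields a null reference
        Console.WriteLine(obj2 == null);  // True
    }
}
```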
Besides that, I believe the current behavior makes perfect sense for scenarios like nullable database values (think SQL-CLR...).
Clarification:

The whole point of providing nullable types is to make it easy to deal with variables that have no meaningful value. They didn't want to provide two distinct, unrelated types: an int? should behave more or less like a plain int. That's why C# provides lifted operators.
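To illustrate what "lifted" means here: the compiler lifts the built-in int operators to int?, so the same expressions compile for both, with null propagating through arithmetic. A small sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        int plain = 5;
        int? some = 5;
        int? none = null;

        // Lifted operators let int? participate in the same expressions as int:
        Console.WriteLine(some + 1);            // 6
        Console.WriteLine(some == plain);       // True: compares the underlying values

        // If either operand is null, a lifted arithmetic operator yields null:
        Console.WriteLine((none + 1).HasValue); // False

        // Lifted ordering comparisons return false when an operand is null:
        Console.WriteLine(none > 0);            // False
        Console.WriteLine(none < 0);            // False
    }
}
```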
> So, when unboxing a value type into a nullable version, the CLR must allocate a Nullable<T> object, initialize the hasValue field to true, and set the value field to the same value that is in the boxed value type. This impacts your application performance (memory allocation during unboxing).
This is not true. The CLR has to allocate stack space to hold the variable whether or not it's nullable; there is no performance issue in allocating space for an extra boolean field.
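To back that up: Nullable<T> is an ordinary struct (a bool flag plus the value), so a local int? costs a few extra bytes of stack, not a heap allocation; heap allocation only happens when you box. A sketch, assuming a runtime where System.Runtime.CompilerServices.Unsafe.SizeOf is available (built into modern .NET):

```csharp
using System;
using System.Runtime.CompilerServices;

class Program
{
    static void Main()
    {
        // Nullable<T> is just a struct: a bool flag plus the value.
        Console.WriteLine(Unsafe.SizeOf<int>());   // 4
        Console.WriteLine(Unsafe.SizeOf<int?>());  // typically 8 (bool + int, padded for alignment)

        // No heap allocation happens until you assign to object:
        int? x = 42;              // lives on the stack
        object boxed = x;         // only now is a boxed int allocated on the heap
        Console.WriteLine(boxed.GetType()); // System.Int32
    }
}
```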