There are a few aspects of mscorlib and the like which wouldn't compile as-written without some interesting hacks; in particular, there are some cyclic dependencies. This is another such case, but I think it's reasonable to consider MaxValue and MinValue as being const as far as the C# compiler is concerned. In particular, it's valid to use them within other const calculations:
const decimal Sum = decimal.MaxValue + decimal.MinValue;
The fields have the DecimalConstantAttribute applied to them, which is effectively a hack to get around an impedance mismatch between C# and the CLR: you can't have a constant field of type decimal in the CLR in the same way that you can have a constant field of type int or string, with an IL declaration using static literal ....

(This is also why you can't use decimal values in attribute constructors - there, the "const-ness" requirement is true IL-level const-ness.)

Instead, any const decimal declaration in C# code is compiled to a static initonly field with DecimalConstantAttribute applied to it, specifying the appropriate data. The C# compiler uses that information to treat such a field as a constant expression elsewhere.
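For illustration, here's roughly what that compilation looks like if you write it out by hand (the class and field names are made up for this example); you can also see the attribute on decimal.MaxValue itself via reflection:

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class DecimalConstDemo
{
    // Roughly what the compiler emits for: public const decimal Tau = 6.28m;
    // 6.28m is the 96-bit integer 628 with scale 2 and sign 0 (positive),
    // passed to the attribute as (scale, sign, hi, mid, low).
    [DecimalConstant(2, 0, 0u, 0u, 628u)]
    public static readonly decimal Tau = 6.28m;

    static void Main()
    {
        // decimal.MaxValue carries the same attribute; its Value property
        // reconstructs the decimal from the stored parts.
        FieldInfo field = typeof(decimal).GetField("MaxValue");
        var attribute = field.GetCustomAttribute<DecimalConstantAttribute>();
        Console.WriteLine(attribute.Value); // 79228162514264337593543950335
    }
}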
Basically, decimal in the CLR isn't a "known primitive" type in the way that int, float etc. are. There are no decimal-specific IL instructions.
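You can see this by inspecting the IL for some simple arithmetic (the method names below are just for illustration): int addition becomes a single add instruction, while decimal addition becomes a call to decimal's user-defined addition operator.

class DecimalIlDemo
{
    // Compiles to primitive IL: ldarg.0, ldarg.1, add, ret
    static int AddInts(int a, int b) => a + b;

    // No decimal opcode exists, so this compiles to a method call:
    //   ldarg.0, ldarg.1,
    //   call valuetype System.Decimal System.Decimal::op_Addition(
    //       valuetype System.Decimal, valuetype System.Decimal),
    //   ret
    static decimal AddDecimals(decimal a, decimal b) => a + b;
}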
Now, in terms of the specific C# code you're referring to, I suspect there are two possibilities:
- No, this isn't the exact source code used.
- The C# compiler used to compile mscorlib and other core aspects of the framework may have special flags applied to allow such code, converting it directly to a static initonly field with DecimalConstantAttribute applied.
To a large extent you can ignore this - it won't affect you. It's a shame that MSDN documents the fields as being static readonly rather than const though, as that gives the mistaken impression that one can't use them in const expressions :(
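In practice they work anywhere a constant expression is required - for example (the names here are just illustrative):

class MaxValueAsConstant
{
    // Fine: the initializer is a constant expression to the C# compiler.
    const decimal Limit = decimal.MaxValue;

    // Also fine: optional parameter defaults must be constants, and the
    // compiler encodes this one with DecimalConstantAttribute on the parameter.
    static decimal Clamp(decimal value, decimal max = decimal.MaxValue)
        => value > max ? max : value;
}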