My question is somewhat related to this one: Explicitly implemented interface and generic constraint.
Here, however, I am asking how the compiler enables a generic constraint to eliminate the need to box a value type that explicitly implements an interface.
It boils down to two parts:
1. What is going on behind the scenes in the CLR that requires a value type to be boxed when accessing an explicitly implemented interface member, and
2. What happens with a generic constraint that removes this requirement?
Some example code:
using System;
using System.Diagnostics;

internal struct TestStruct : IEquatable<TestStruct>
{
    bool IEquatable<TestStruct>.Equals(TestStruct other)
    {
        return true;
    }
}
internal class TesterClass
{
    // Methods
    public static bool AreEqual<T>(T arg1, T arg2) where T : IEquatable<T>
    {
        return arg1.Equals(arg2);
    }

    public static void Run()
    {
        TestStruct t1 = new TestStruct();
        TestStruct t2 = new TestStruct();

        Debug.Assert(((IEquatable<TestStruct>)t1).Equals(t2));
        Debug.Assert(AreEqual<TestStruct>(t1, t2));
    }
}
And the resultant IL:
.class private sequential ansi sealed beforefieldinit TestStruct
    extends [mscorlib]System.ValueType
    implements [mscorlib]System.IEquatable`1<valuetype TestStruct>
{
    .method private hidebysig newslot virtual final instance bool System.IEquatable<TestStruct>.Equals(valuetype TestStruct other) cil managed
    {
        .override [mscorlib]System.IEquatable`1<valuetype TestStruct>::Equals
        .maxstack 1
        .locals init (
            [0] bool CS$1$0000)
        L_0000: nop
        L_0001: ldc.i4.1
        L_0002: stloc.0
        L_0003: br.s L_0005
        L_0005: ldloc.0
        L_0006: ret
    }
}
.class private auto ansi beforefieldinit TesterClass
    extends [mscorlib]System.Object
{
    .method public hidebysig specialname rtspecialname instance void .ctor() cil managed
    {
        .maxstack 8
        L_0000: ldarg.0
        L_0001: call instance void [mscorlib]System.Object::.ctor()
        L_0006: ret
    }

    .method public hidebysig static bool AreEqual<([mscorlib]System.IEquatable`1<!!T>) T>(!!T arg1, !!T arg2) cil managed
    {
        .maxstack 2
        .locals init (
            [0] bool CS$1$0000)
        L_0000: nop
        L_0001: ldarga.s arg1
        L_0003: ldarg.1
        L_0004: constrained !!T
        L_000a: callvirt instance bool [mscorlib]System.IEquatable`1<!!T>::Equals(!0)
        L_000f: stloc.0
        L_0010: br.s L_0012
        L_0012: ldloc.0
        L_0013: ret
    }

    .method public hidebysig static void Run() cil managed
    {
        .maxstack 2
        .locals init (
            [0] valuetype TestStruct t1,
            [1] valuetype TestStruct t2,
            [2] bool areEqual)
        L_0000: nop
        L_0001: ldloca.s t1
        L_0003: initobj TestStruct
        L_0009: ldloca.s t2
        L_000b: initobj TestStruct
        L_0011: ldloc.0
        L_0012: box TestStruct
        L_0017: ldloc.1
        L_0018: callvirt instance bool [mscorlib]System.IEquatable`1<valuetype TestStruct>::Equals(!0)
        L_001d: stloc.2
        L_001e: ldloc.2
        L_001f: call void [System]System.Diagnostics.Debug::Assert(bool)
        L_0024: nop
        L_0025: ldloc.0
        L_0026: ldloc.1
        L_0027: call bool TesterClass::AreEqual<valuetype TestStruct>(!!0, !!0)
        L_002c: stloc.2
        L_002d: ldloc.2
        L_002e: call void [System]System.Diagnostics.Debug::Assert(bool)
        L_0033: nop
        L_0034: ret
    }
}
The key difference is that AreEqual uses constrained !!T where Run uses box TestStruct, yet the subsequent call is still callvirt in both cases.
So I don't understand why boxing is required to make a virtual call on a value type in the first place, and I especially do not understand how a generic constraint removes that requirement.
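As an aside, one way I can think of to observe the difference at runtime, rather than in the IL, is with a mutable struct: the interface-typed call should mutate a boxed copy, while the constrained call inside a generic method should run against the struct itself. Here is a minimal sketch of that idea; the ICounter, Counter, and Bump names are hypothetical ones I made up for the demo, not part of the code above.

using System.Diagnostics;

internal interface ICounter
{
    void Increment();
    int Count { get; }
}

internal struct Counter : ICounter
{
    private int count;

    // Explicitly implemented, like TestStruct.Equals above.
    void ICounter.Increment()
    {
        count++;
    }

    int ICounter.Count
    {
        get { return count; }
    }
}

internal static class BoxingDemo
{
    // The compiler emits constrained !!T + callvirt here; for a value type T
    // the method should be invoked directly on the struct behind the byref.
    private static void Bump<T>(ref T counter) where T : ICounter
    {
        counter.Increment();
    }

    public static void Run()
    {
        Counter c = new Counter();

        ((ICounter)c).Increment();              // boxes a copy; only the box is incremented
        Debug.Assert(((ICounter)c).Count == 0); // the original struct is untouched

        Bump(ref c);                            // constrained call; 'c' itself is incremented
        Debug.Assert(((ICounter)c).Count == 1);
    }
}

If this reasoning is right, Bump receives the struct by reference, so the constrained callvirt dispatches on the managed pointer and the increment lands on the caller's variable, while the cast to ICounter increments a throwaway box.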
Thanks in advance.