
c# - CLR implementation of virtual method calls to interface members

Out of curiosity: how does the CLR dispatch virtual method calls to interface members to the correct implementation?

I know about the VTable that the CLR maintains for each type with method slots for each method, and the fact that for each interface it has an additional list of method slots that point to the associated interface method implementations. But I don't understand the following: how does the CLR efficiently determine which interface method slot list to pick from the type's VTable?
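
To make the scenario concrete, here is a minimal, made-up example; the call I'm asking about is shape.Area(), where at the call site the compiler only knows the static type IShape:

    using System;

    interface IShape
    {
        double Area();
    }

    class Circle : IShape
    {
        public double Radius;
        public double Area() => Math.PI * Radius * Radius;
    }

    class Program
    {
        static double TotalArea(IShape shape)
        {
            // Interface dispatch: the runtime must locate the concrete type's
            // implementation of Area() (here Circle's) at run time.
            return shape.Area();
        }

        static void Main() => Console.WriteLine(TotalArea(new Circle { Radius = 2.0 }));
    }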

The article Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects, from the May 2005 issue of MSDN Magazine, describes a process-level mapping table, IVMap, indexed by interface ID. Does this mean that all types in the same process hold a pointer to the same IVMap?

It also states that:

If MyInterface1 is implemented by two classes, there will be two entries in the IVMap table. The entry will point back to the beginning of the sub-table embedded within the MyClass method table.

How does the CLR know which entry to pick? Does it do a linear search to find the entry that matches the current type? Or a binary search? Or does it use some kind of direct indexing, with a map that may contain many empty entries?

I've also read the chapter on interfaces in CLR via C#, 3rd edition, but it does not cover this. Therefore, the answers to this other question do not answer my question.



1 Reply


That article is more than 10 years old, and a lot has changed since then.

IVMaps have now been superseded by Virtual Stub Dispatch.

Virtual stub dispatching (VSD) is the technique of using stubs for virtual method invocations instead of the traditional virtual method table. In the past, interface dispatch required that interfaces had process-unique identifiers, and that every loaded interface was added to a global interface virtual table map.

Go read that article; it has more detail than you'll ever need to know. It comes from the Book of the Runtime, documentation originally written by the CLR devs for CLR devs, but now published for everyone. It basically describes the guts of the runtime.

There's no point in duplicating the article here, so I'll just state the main points and what they imply:

  • When the JIT sees a call to an interface member, it compiles it into a lookup stub. This is a piece of code that will invoke the generic resolver.
  • The generic resolver is a function which will find out which method to call. It's the most generic and therefore slowest way to invoke such a method. When called for the first time from a lookup stub, it will patch that stub (rewrite its code at runtime) into a dispatch stub. It also generates a resolve stub for later use. The lookup stub goes away at this point.
  • A dispatch stub is the fastest way to invoke an interface member, but there's a catch: it is optimistic about the call being monomorphic, which means it's optimized for the case where the interface call always resolves to the same concrete type. It compares the method table (i.e. the concrete type) of the object to the previously seen one (which is hardcoded into the stub) and calls the cached method (whose address is also hardcoded) if the comparison succeeds. If it fails, it falls back to the resolve stub (see the sketch after this list).
  • The resolve stub handles polymorphic calls (the general case). It uses a cache to find which method to call. If the method is not in the cache, it invokes the generic resolver (which also writes to this cache).
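
To make the division of labour concrete, here is a rough C# sketch of what the dispatch and resolve stubs logically do. It is illustrative only: the real stubs are tiny pieces of generated machine code, and every name below (GetMethodTable, GenericResolver, ResolveCache, and so on) is invented for the example.

    using System;
    using System.Collections.Generic;

    // Illustrative sketch only: the real stubs are generated machine code, and the
    // "method table" is the CLR's internal MethodTable, not anything visible from C#.
    static class InterfaceDispatchSketch
    {
        // Values hardcoded into a dispatch stub when it is generated.
        static IntPtr ExpectedMethodTable;   // the concrete type seen so far
        static IntPtr CachedTarget;          // that type's implementation of the slot

        // Cache consulted by resolve stubs: (method table, interface slot) -> target.
        static readonly Dictionary<(IntPtr, int), IntPtr> ResolveCache = new();

        static IntPtr GetMethodTable(object obj) => throw new NotImplementedException();

        // The generic resolver: the slowest path; it does the full lookup and also
        // writes its result into ResolveCache.
        static IntPtr GenericResolver(IntPtr methodTable, int interfaceSlot)
            => throw new NotImplementedException();

        // Dispatch stub: optimistic, assumes the call site is monomorphic.
        static IntPtr DispatchStub(object obj, int interfaceSlot)
        {
            if (GetMethodTable(obj) == ExpectedMethodTable)
                return CachedTarget;                    // fast path: same concrete type as last time
            return ResolveStub(obj, interfaceSlot);     // type changed: fall back
        }

        // Resolve stub: handles polymorphic call sites through the cache.
        static IntPtr ResolveStub(object obj, int interfaceSlot)
        {
            var methodTable = GetMethodTable(obj);
            if (ResolveCache.TryGetValue((methodTable, interfaceSlot), out var target))
                return target;                          // cache hit
            // Cache miss: fall back to the generic resolver.
            return GenericResolver(methodTable, interfaceSlot);
        }
    }

The shape to notice is that the common, monomorphic case is a single compare-and-branch, and everything slower sits behind the fallback paths.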

And here's an important consideration, straight from the article:

When a dispatch stub fails frequently enough, the call site is deemed to be polymorphic and the resolve stub will back patch the call site to point directly to the resolve stub to avoid the overhead of a consistently failing dispatch stub. At sync points (currently the end of a GC), polymorphic sites will be randomly promoted back to monomorphic call sites under the assumption that the polymorphic attribute of a call site is usually temporary. If this assumption is incorrect for any particular call site, it will quickly trigger a backpatch to demote it to polymorphic again.

The runtime is really optimistic about monomorphic call sites, which makes a lot of sense in real code, and it will try hard to avoid resolve stubs as much as possible.
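
In ordinary C# terms (the types below are made up purely for illustration), that distinction looks like this: a call site that only ever sees one concrete type keeps succeeding on its cheap dispatch stub, while a call site that keeps seeing different concrete types ends up going through the resolve stub:

    using System;
    using System.Collections.Generic;

    interface IAnimal { string Speak(); }
    class Dog : IAnimal { public string Speak() => "Woof"; }
    class Cat : IAnimal { public string Speak() => "Meow"; }

    class Program
    {
        static void Main()
        {
            // Monomorphic call site: animal.Speak() below only ever sees Dog,
            // so the dispatch stub's single type check keeps succeeding.
            var dogs = new List<IAnimal> { new Dog(), new Dog(), new Dog() };
            foreach (var animal in dogs)
                Console.WriteLine(animal.Speak());

            // Polymorphic call site: this second call site sees both Dog and Cat,
            // so its dispatch stub fails often and the site gets backpatched to go
            // through the resolve stub instead.
            var mixed = new List<IAnimal> { new Dog(), new Cat(), new Dog(), new Cat() };
            foreach (var animal in mixed)
                Console.WriteLine(animal.Speak());
        }
    }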

