There is a lot of complexity at the intersection of overload resolution and type inference. The current draft of the lambda specification has all the gory details. Sections F and G cover overload resolution and type inference, respectively. I don't pretend to understand it all. The summary sections in the introduction are fairly understandable, though, and I recommend that people read them, particularly the summaries of sections F and G, to get an idea of what's going on in this area.
To recap the issues briefly, consider a method call with some arguments in the presence of overloaded methods. Overload resolution has to choose the right method to call. The "shape" of the method (arity, or number of arguments) is most significant; obviously a method call with one argument can't resolve to a method that takes two parameters. But overloaded methods often have the same number of parameters of different types. In this case, the types start to matter.
Suppose there are two overloaded methods:
void foo(int i);
void foo(String s);
and some code has the following method call:
foo("hello");
Obviously this resolves to the second method, based on the type of the argument being passed. But what if we are doing overload resolution and the argument is a lambda, especially an implicitly typed one that relies on type inference to establish its parameter types? Recall that a lambda expression's type is inferred from the target type, that is, the type expected in that context. Unfortunately, if we have overloaded methods, we don't have a target type until we've resolved which overloaded method we're going to call. But since we don't yet have a type for the lambda expression, we can't use its type to help us during overload resolution.
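A quick illustration of target typing (a hedged sketch; the class name TargetTyping and the lambda bodies are mine, not from the original): the very same lambda text receives a different type depending solely on the target it is assigned to.

```java
import java.util.function.Function;
import java.util.function.Predicate;

public class TargetTyping {
    public static void main(String[] args) {
        // The same lambda text takes a different type from each target type:
        Predicate<String> p = s -> s.isEmpty();         // s : String, result boolean
        Function<String, Boolean> f = s -> s.isEmpty(); // s : String, result Boolean
        System.out.println(p.test(""));    // true
        System.out.println(f.apply("x"));  // false
    }
}
```

With no functional-interface target (e.g. `Object o = s -> s.isEmpty();`), the compiler has nothing to infer from and rejects the lambda, which is exactly the bind overload resolution finds itself in.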
Let's look at the example here. Consider an interface A and an abstract class B as defined in the example. We have a class C that contains two overloads of an apply method, and then some code calls apply and passes it a lambda:
public void apply(A a)
public B apply(B b)
c.apply(x -> System.out.println(x));
Both apply overloads have the same number of parameters. The argument is a lambda, which must match a functional interface. A and B are actual types, so it's manifest that A is a functional interface whereas B is not; therefore the result of overload resolution is apply(A). At this point we have a target type A for the lambda, and type inference for x proceeds.
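To make the first example concrete, here is a minimal, compilable sketch (the method names accept and combine and the printed text are my assumptions; only the names A, B, C and the two apply overloads come from the text):

```java
// A is a functional interface: exactly one abstract method.
interface A {
    void accept(Object x);
}

// B is an abstract class, so it is NOT a functional interface.
abstract class B {
    abstract B combine(B other);
}

class C {
    public void apply(A a) { a.accept("hello"); }
    public B apply(B b) { return b; }
}

public class ResolutionDemo {
    public static void main(String[] args) {
        C c = new C();
        // The lambda can only match the functional interface A,
        // so overload resolution selects apply(A):
        c.apply(x -> System.out.println(x));  // prints "hello"
    }
}
```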
Now the variation:
public void apply(A a)
public <T extends B> T apply(T t)
c.apply(x -> System.out.println(x));
Instead of an actual type, the parameter of the second apply overload is a generic type variable T. We haven't done type inference yet, so we don't take T into account, at least not until after overload resolution has completed. Thus both overloads are still applicable, neither is more specific than the other, and the compiler emits an error that the call is ambiguous.
You might argue that, since we know that T has a type bound of B, which is a class and not a functional interface, the lambda can't possibly apply to this overload, so it should be ruled out during overload resolution, removing the ambiguity. I'm not the one to have that argument with. :-) This might indeed be a bug in either the compiler or perhaps even in the specification.
I do know that this area went through a bunch of changes during the design of Java 8. Earlier variations did attempt to bring more type checking and inference information into the overload resolution phase, but they were harder to implement, specify, and understand. (Yes, even harder to understand than it is now.) Unfortunately problems kept arising. It was decided to simplify things by reducing the range of things that can be overloaded.
Type inference and overloading are ever in opposition; many languages with type inference from day 1 prohibit overloading (except maybe on arity.) So for constructs like implicit lambdas, which require inference, it seems reasonable to give up something in overloading power to increase the range of cases where implicit lambdas can be used.
-- Brian Goetz, Lambda Expert Group, 9 Aug 2013
(This was quite a controversial decision. Note that there were 116 messages in this thread, and there are several other threads that discuss this issue.)
One of the consequences of this decision was that certain APIs had to be changed to avoid overloading, for example, the Comparator API. Previously, the Comparator.comparing method had four overloads:
comparing(Function)
comparing(ToDoubleFunction)
comparing(ToIntFunction)
comparing(ToLongFunction)
The problem was that these overloads are differentiated only by the lambda's return type, and we never quite got type inference to work here with implicitly typed lambdas. To use them, one would always have to cast the lambda or supply an explicit type argument. These APIs were later changed to:
comparing(Function)
comparingDouble(ToDoubleFunction)
comparingInt(ToIntFunction)
comparingLong(ToLongFunction)
which is somewhat clumsy, but it's entirely unambiguous. A similar situation occurs with Stream.map, mapToDouble, mapToInt, and mapToLong, and in a few other places around the API.
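The renamed methods can be used with bare implicitly typed lambdas, no casts required. A small sketch (the list contents and the sum are my own example data):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class UnambiguousNames {
    public static void main(String[] args) {
        List<String> words = new ArrayList<>(Arrays.asList("pear", "fig", "banana"));

        // comparingInt expects a ToIntFunction, so the implicit lambda needs no cast:
        words.sort(Comparator.comparingInt(s -> s.length()));
        System.out.println(words);  // [fig, pear, banana]

        // The same pattern in streams: mapToInt instead of an overloaded map:
        int totalLength = words.stream().mapToInt(s -> s.length()).sum();
        System.out.println(totalLength);  // 13
    }
}
```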
The bottom line is that getting overload resolution right in the presence of type inference is very difficult in general, and that the language and compiler designers traded away power from overload resolution in order to make type inference work better. For this reason, the Java 8 APIs avoid overloaded methods where implicitly typed lambdas are expected to be used.