WCF operations can be defined using the synchronous pattern, APM (Begin/End) or, as of .NET 4.5, TAP (Task-based). From MSDN:
Clients can offer the developer any programming model they choose, so long as the underlying message exchange pattern is observed. So, too, can services implement operations in any manner, so long as the specified message pattern is observed.
You can actually have all three patterns in a single contract interface, and they will all map to the same message exchange.
On the wire, there is no difference in how you execute the operations. The WSDL (which WCF builds from each endpoint's ABC - address, binding and contract) doesn't contain this information; it is generated from the operation descriptions.
If you look at the OperationDescription class, which is used in a ContractDescription, you'll see that each operation has these properties: SyncMethod, BeginMethod, EndMethod and TaskMethod.
When creating a description, WCF combines all methods that share an operation name into a single operation. If there's a mismatch between same-named operations across patterns (e.g. different parameters), WCF will throw an exception detailing exactly what's wrong. WCF automatically assumes an optional "Async" suffix for Task-based methods, and the Begin/End prefixes for APM.
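For example, a single operation can be exposed in all three patterns on one contract; WCF merges them into a single OperationDescription by name. The contract and method names below are illustrative, not from any real service:

```csharp
using System;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IDataService // hypothetical contract
{
    // Synchronous pattern
    [OperationContract]
    string GetData(int value);

    // APM pattern: Begin/End pair, marked with AsyncPattern = true.
    // WCF strips the Begin/End prefixes, so this is still operation "GetData".
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetData(int value, AsyncCallback callback, object state);
    string EndGetData(IAsyncResult result);

    // TAP pattern (.NET 4.5+): the "Async" suffix is stripped the same way.
    [OperationContract]
    Task<string> GetDataAsync(int value);
}
```

If the parameter lists of GetData, BeginGetData and GetDataAsync disagreed, contract creation would fail with an exception describing the mismatch, as noted above.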
The client and server sides are completely unrelated in this sense. The utility that generates proxy classes from WSDL (svcutil) can build proxies for any execution pattern; it doesn't even have to be a WCF service.
On the server side, if more than one pattern is implemented, WCF will use just one, in the following order of precedence: Task, then Sync, then APM. This is documented somewhere on MSDN, I just can't find it right now. But you can look at the reference source here.
In conclusion, you can safely change your server implementation as long as you don't modify the message the operation represents.
Regarding the scaling (which should be a different question, IMO):
- WCF's throttling default values have been updated in .NET 4.5 to much more reasonable values and are now processor-dependent (see here).
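As a sketch, you rarely need to set these anymore, but the throttle can still be overridden programmatically via ServiceThrottlingBehavior. The multipliers below mirror the documented .NET 4.5 per-processor defaults; treat the exact values as something to verify against the docs for your framework version:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

// Assumes 'host' is an already-constructed ServiceHost.
// .NET 4.5 defaults are processor-dependent; setting them explicitly
// like this just makes the effective values visible.
var throttle = new ServiceThrottlingBehavior
{
    MaxConcurrentCalls = 16 * Environment.ProcessorCount,
    MaxConcurrentSessions = 100 * Environment.ProcessorCount,
    MaxConcurrentInstances = 116 * Environment.ProcessorCount // calls + sessions
};
host.Description.Behaviors.Add(throttle);
```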
- There's no change in regards to the thread-pool issue. The problem stems from the initial size of the completion-port thread pool, which is initially set to 4 times the number of logical processors. You can use ThreadPool.SetMinThreads to increase that number by some factor (see this post). This setting could also be beneficial on the client side.
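A minimal sketch of that adjustment (the factor of 4 here is illustrative, not a recommendation - benchmark before committing to a value):

```csharp
using System.Threading;

// Read the current minimums, then raise only the completion-port (IO)
// minimum so bursts of IO completions don't wait on the thread pool's
// slow thread-injection rate.
ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
ThreadPool.SetMinThreads(minWorker, minIocp * 4);
```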
If you use async on the server side (when calling other services, a database, etc.), the threading situation can improve dramatically, because you won't be wasting thread-pool threads that just sit waiting for IO to complete.
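A sketch of what that looks like - the contract, names and URL below are hypothetical; the point is that during the await no thread-pool thread is blocked, and the continuation resumes on an IO completion:

```csharp
using System.Net.Http;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IBackendService // hypothetical contract
{
    [OperationContract]
    Task<string> FetchAsync(int id);
}

public class BackendService : IBackendService
{
    private static readonly HttpClient http = new HttpClient();

    public async Task<string> FetchAsync(int id)
    {
        // The thread returns to the pool while the downstream call is
        // in flight, instead of blocking on the response.
        return await http.GetStringAsync("http://example.org/items/" + id); // hypothetical URL
    }
}
```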
The best thing in these situations is to do a LOT of benchmarking.