The absolute simplest configuration to deploy is the following:
- Use the same message types and contracts everywhere. This is a must in any situation, so it kind of goes without saying (see the sketch after this list).
- Do not use dialog security. Simply
GRANT SEND ON SERVICE::[<servicename>] TO [public]
everywhere. This eliminates the need for database certificates and remote service bindings.
- Do use certificates for endpoints, but do not exchange them (export, import, create login, etc.). There is a trick:
GRANT CONNECT ON ENDPOINT::[<brokerendpointname>] TO [public]
allows two endpoints to connect using SSL (certificates) even without exchanging certificates.
- Use transport routing (meaning enable the special TRANSPORT route) and name your services using the [tcp://hostname:port/servicename] convention.
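For instance, here is what 'the same message types and contracts everywhere' might look like in practice. The [//Example/...] names are made up purely for illustration; the point is that the exact same script runs in every participating database:
-- run verbatim in every database that participates in the conversation
create message type [//Example/Request] validation = well_formed_xml;
create message type [//Example/Reply] validation = well_formed_xml;
create contract [//Example/ExampleContract] (
    [//Example/Request] sent by initiator,
    [//Example/Reply] sent by target);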
Let me explain why I'm recommending this setup:
- Removing dialog security simplifies deployment by something like 10x. Dialog security allows a service to authenticate and authorize the sender of each message, but in relatively controlled environments (intranet) you can deploy based on trust: any message received by a service is trusted to come from an authorized sender (a practical note on this follows the list).
- Using certificates for endpoints is generally seen as complex because of the need to exchange the certificates to allow connectivity, but the trick of granting connect to public on the broker endpoint removes the requirement to exchange certificates. All machines using this trick can talk to each other without any prior setup, which is even better than using Windows authentication on the endpoints (that requires grant connect to [domain\machine$] logins, or requires deploying the SQL Server instances under specific domain accounts). Again, you lose the ability to say 'No' to a connection: you will accept connections from any SQL instance in your intranet.
- With TRANSPORT routing any SQL Server instance that joins the 'party' is ready to rock: because the service name contains the host name, all the other machines already know how to talk to this machine and do not require explicit routes to be added.
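One practical note that goes with dropping dialog security: conversations in this setup have to be started with ENCRYPTION = OFF, because the default of ON would demand exactly the dialog certificates and remote service bindings we just got rid of. A minimal sketch, with placeholder names in the same [<...>] spirit as the GRANT statements above:
declare @handle uniqueidentifier;
-- opt out of dialog security explicitly; with the default (ENCRYPTION = ON)
-- delivery would require certificates and remote service bindings
begin dialog conversation @handle
    from service [<initiatorservice>]
    to service 'tcp://<targethost>:<port>/<targetservice>'
    on contract [<contractname>]
    with encryption = off;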
This configuration is really as close as you get to 'plug-and-play'. New machines can join the communication with any existing SQL Server SSB services immediately without requiring any configuration changes on the other existing machines.
Here is an example of how to configure a machine for such a deployment. Say you want to start by deploying the central server on MACHINE1:
use master;
go
-- the master key protects the private key of the endpoint certificate
create master key encryption by password = '...';
create certificate [MACHINE1] with subject = 'MACHINE1';
create endpoint BROKER
    state = started
    as tcp (listener_port = 4022)
    for service_broker (authentication = certificate [MACHINE1]);
grant connect on endpoint::BROKER to [public];
go
use db1;
create message type ...
create contract ...
create queue ...
create service [tcp://MACHINE1:4022/CentralService]
    on queue ...
    ([...]);
grant send on service::[tcp://MACHINE1:4022/CentralService] to [public];
create route transport with address = 'TRANSPORT';
go
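Optionally, a quick sanity check with the standard catalog views confirms the endpoint is up and the last-resort route is in place:
-- the broker endpoint should report STARTED on port 4022
select sbe.name, te.port, sbe.state_desc
from sys.service_broker_endpoints as sbe
join sys.tcp_endpoints as te on te.endpoint_id = sbe.endpoint_id;
-- and db1 should contain the route pointing at 'TRANSPORT'
use db1;
select name, address from sys.routes;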
That's it. Now let's add a node, say on host MACHINE2:
use master;
go
create master key encryption by password = '...';
create certificate [MACHINE2] with subject = 'MACHINE2';
create endpoint BROKER
    state = started
    as tcp (listener_port = 4022)
    for service_broker (authentication = certificate [MACHINE2]);
grant connect on endpoint::BROKER to [public];
go
use db2;
create message type ...
create contract ...
create queue ...
create service [tcp://MACHINE2:4022/Satellite]
    on queue ...
    ([...]);
grant send on service::[tcp://MACHINE2:4022/Satellite] to [public];
create route transport with address = 'TRANSPORT';
go
That's it. Now two things happen:
- because both endpoints on MACHINE1 and MACHINE2 use certificate-based authentication and have granted connect to public, they can connect and exchange messages as is, without the need to exchange (export and import) their endpoint certificates
- because both databases have created the special TRANSPORT route and the service names follow the special [tcp://machine:port/service] syntax, the two services can immediately exchange messages as-is, without any explicit routing (a sketch follows below)
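To make this concrete, here is a sketch of the Satellite on MACHINE2 opening a conversation with the central service and sending a message. The message type and contract names are the hypothetical [//Example/...] ones from the earlier sketch, and the dialog opts out of dialog security as discussed above:
-- on MACHINE2, in db2
declare @h uniqueidentifier;
begin dialog conversation @h
    from service [tcp://MACHINE2:4022/Satellite]
    to service 'tcp://MACHINE1:4022/CentralService'
    on contract [//Example/ExampleContract]
    with encryption = off;
send on conversation @h
    message type [//Example/Request]
    (N'<request>hello from MACHINE2</request>');
-- the message arrives on MACHINE1 in the queue behind CentralService,
-- with no routes added and no certificates exchanged between the two machines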
The best thing is how you add a new node, say MACHINE3:
use master;
go
create master key encryption by password = '...';
create certificate [MACHINE3] with subject = 'MACHINE3';
create endpoint BROKER
    state = started
    as tcp (listener_port = 4022)
    for service_broker (authentication = certificate [MACHINE3]);
grant connect on endpoint::BROKER to [public];
go
use db2;
create message type ...
create contract ...
create queue ...
create service [tcp://MACHINE3:4022/Satellite]
    on queue ...
    ([...]);
grant send on service::[tcp://MACHINE3:4022/Satellite] to [public];
create route transport with address = 'TRANSPORT';
go
Now, without a single change to MACHINE1 or MACHINE2, the new node MACHINE3 can exchange messages with the central service, and actually with the Satellite on MACHINE2 too, if needed. The endpoints accept connections from anybody, so MACHINE3 is welcome, and the service names used are auto-routed by the special TRANSPORT routing mechanism. This is the beauty of this configuration, the plug-and-play: adding a new node requires zero configuration on the other nodes.
So what gives? The biggest problem is security. Any employee can download SQL Server Express on his desktop, set up an unauthorized Satellite node and start exchanging messages with the Central service. There really isn't anything to stop him; you have explicitly opened all the gates. A more subtle problem is when a service moves. When the service [tcp://MACHINE3:4022/Satellite] is moved (e.g. via database backup/restore) to MACHINE4, the name of the service is still a valid TRANSPORT route syntax name, but it is now incorrect. Depending on how important it is to preserve existing conversations, you can choose to nuke the service and create a new one named [tcp://MACHINE4:4022/Satellite] and party on (you cannot rename a service, you must drop it and create a new one). If maintaining existing conversations is critical then there are workarounds: adding an explicit route for it on the Central service database will take precedence over the last-resort TRANSPORT route and messages will be redirected correctly (see the sketch below). The important thing is that there are solutions :)
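For instance, a minimal sketch of that workaround on the Central service database (db1 on MACHINE1), assuming the Satellite database ended up on MACHINE4 while keeping its old service name; the route name is made up:
use db1;
-- an explicit route that matches the old service name wins over the
-- last-resort 'TRANSPORT' route, so messages are redirected to MACHINE4
create route [SatelliteMovedToMachine4]
    with service_name = 'tcp://MACHINE3:4022/Satellite',
         address = 'TCP://MACHINE4:4022';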