Paz Service Discovery

How Paz's service discovery and inter-service routing works

The Service Discovery Problem

In a multi-host cluster of machines running Docker containers, we need a way for services to communicate with each other and receive requests from the outside world, yet do so in such a way that services can move hosts or come and go at any time. This is called service discovery.

For a Docker PaaS such as Paz, this needs to be dynamic. If service A wants to talk to service B, we cannot simply hand service A the IP address and port of a particular instance of service B at startup, because that binds the two instances together: service A becomes unusable if that instance of service B goes away. What we want instead is logical routing, whereby service A can talk to any instance of service B and have the platform take care of the underlying changes as particular instances of service B come and go. In other words, service A should be able to use service B's name as a routable address.

Paz's service discovery solution is built around Etcd, Confd and HAProxy; in short, it is a dynamically configured load-balancing proxy.

Imagine we have a "web" service that wants to talk to an "api" service, both running in containers, with multiple instances of each. A logical view of this service discovery problem might look like this:


Service discovery - logical view

We have a dynamically configured proxy that takes care of routing "web"'s requests to "api" by service name.

How Paz's Service Discovery Works

Paz's service discovery magic comes about through a combination of HAProxy and Dnsmasq.


Paz service discovery stack

Each Docker container running on a Paz host is started with --dns=HOST_IP. Bound to port 53 on each host is an instance of Dnsmasq, configured to intercept DNS requests from the host's containers: for names of services within the cluster it returns the address of the local HAProxy instance, and it forwards all other DNS requests on to a "real" DNS server on the Internet.
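A Dnsmasq configuration along these lines might look like the following sketch (the .paz domain matches the hostnames used in the HAProxy excerpt below, but the bridge address and upstream resolver here are illustrative, not Paz's actual values):

    # dnsmasq.conf (illustrative sketch)
    # Answer queries for paz and any *.paz subdomain with the address
    # of the local HAProxy instance (here, the Docker bridge IP)
    address=/paz/172.17.42.1
    # Forward everything else to a real upstream resolver
    server=8.8.8.8
    # Only answer queries arriving from this host's containers
    listen-address=172.17.42.1

The address=/paz/... directive is what makes every cluster service name resolve to the local proxy, while ordinary Internet names pass through untouched.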

This HAProxy instance is dynamically configured with routing information for all other containers within the cluster. This dynamic configuration is done by Confd, a tool that writes configuration files (HAProxy config files in this case) from templates populated with data from Etcd.

The information Confd takes from Etcd is placed there by so-called sidekick announce units. For every Paz container there is a corresponding sidekick unit whose lifetime is bound to the Paz container's unit using the systemd BindsTo directive: when the service starts, the announce sidekick also starts, and when it stops, so too does the announce sidekick.


Paz Etcd service announcement

The announce sidekick's job is to use docker inspect to grab the port the container is bound to on the host and write the host IP and container port into Etcd. It writes the value with a TTL, sleeps for slightly less than that time, then writes it again, in a loop. This means that as long as the container is running, its IP and port will be present in Etcd as per the above diagram.
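Putting these pieces together, an announce sidekick might look something like the following systemd unit. This is an illustrative sketch, not Paz's actual unit: the service name, Etcd key path, TTL values, the use of docker port (in place of docker inspect, for brevity) and CoreOS's /etc/environment as the source of the host IP are all assumptions.

    # myservice-announce.service (illustrative sketch)
    [Unit]
    Description=Announce myservice in Etcd
    # Tie this unit's lifetime to the service it announces
    BindsTo=myservice.service
    After=myservice.service

    [Service]
    # Assumes the host IP is available as COREOS_PRIVATE_IPV4
    EnvironmentFile=/etc/environment
    # Re-announce every 45s with a 60s TTL, so the key expires
    # shortly after the container (and hence this unit) stops
    ExecStart=/bin/sh -c "while true; do \
      port=$$(docker port myservice 8080/tcp | cut -d: -f2); \
      etcdctl set /paz/services/myservice/myservice-1 ${COREOS_PRIVATE_IPV4}:$${port} --ttl 60; \
      sleep 45; done"
    # Best-effort cleanup on stop; the TTL covers crashes
    ExecStop=/usr/bin/etcdctl rm /paz/services/myservice/myservice-1

The TTL is the safety net: an explicit ExecStop removes the key on a clean shutdown, but if the host dies outright, the key simply expires on its own.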

Confd watches the /paz/services directory in Etcd and, on change, takes the values (for all services) and writes an HAProxy config file, then notifies HAProxy to reload its config.
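As a sketch of what this looks like on disk, Confd's side comprises a template resource and a template. The file paths, key layout and reload command below are illustrative assumptions, not taken from Paz:

    # /etc/confd/conf.d/haproxy.toml (illustrative sketch)
    [template]
    src        = "haproxy.cfg.tmpl"
    dest       = "/etc/haproxy/haproxy.cfg"
    # The Etcd prefix to watch for changes
    keys       = ["/paz/services"]
    # Run after the rendered config changes
    reload_cmd = "systemctl reload haproxy"

    # /etc/confd/templates/haproxy.cfg.tmpl (one backend, abridged)
    backend backend-myservice
        balance roundrobin{{range gets "/paz/services/myservice/*"}}
        server {{base .Key}} {{.Value}}{{end}}

Each key under the service's directory becomes one server line, with the key name as the server name and the announced HOST_IP:PORT as its address.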

In this way, HAProxy is always up-to-date with routing information for all services that are running and no services that are not (except for a short window after a service dies, before its TTL expires).

Below is an example excerpt of a Paz HAProxy config file generated by Confd (server addresses and ports are illustrative):

frontend http-in
    bind *:80
    acl subdom_myservice hdr(host) -i myservice.paz
    use_backend backend-myservice if subdom_myservice
    acl subdom_paz-web hdr(host) -i paz-web.paz
    use_backend backend-paz-web if subdom_paz-web

backend backend-myservice
    balance roundrobin
    server myservice-v1-1 10.0.1.10:49153
    server myservice-v1-2 10.0.1.11:49154

backend backend-paz-web
    server paz-web 10.0.1.12:49155