Modern Application Architecture on an Azure distributed Docker Swarm (Part 0)

[Image: Docker on Azure]

I have been working with my inspiring colleague Peter Vyvey on building, shipping and running a modern enterprise application on a distributed Docker Swarm cluster.

This small project was started in our spare time, so let us know how we can improve our posts.

Now back to this project. The solution consists of several technologies:

More details about the software architecture will follow soon…

To host this architecture, we’ve decided to leverage the benefits of small, independent services. To achieve this, we use Docker and its ecosystem for building, shipping and running the entire stack. This post focuses on the fundamental parts of the infrastructure needed for hosting a modern application, in this case on the Azure public cloud.

COMPONENTS AND TERMINOLOGY

1. Service Discovery Managers
Docker Swarm supports multiple service discovery managers, and there are excellent reviews of the advantages and disadvantages of each solution. Service Discovery Managers are very important building blocks for achieving highly available, multi-datacenter solutions. A solution like Consul can perform health checks on services, offers a distributed key/value store, and provides both a DNS and a powerful HTTP interface.
More about this very powerful component in the “Service Discovery Manager – Consul” section below.
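
As a quick taste of the key/value store, a value can be written and read back over the HTTP API. A minimal sketch, assuming a Consul agent is reachable on 10.2.0.10:8500 (the address used for our cluster later in this post); the key name is just an example:

# Write a key/value pair into Consul
curl -X PUT -d 'v1.2.3' http://10.2.0.10:8500/v1/kv/myapp/version

# Read it back (the value is returned Base64-encoded inside a JSON document)
curl http://10.2.0.10:8500/v1/kv/myapp/version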

2. Docker Swarm
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
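
In practice this means the regular Docker CLI works unchanged against the Swarm manager. A small sketch, assuming a manager is listening on 10.2.1.4:4000 as configured later in this post:

# Point the Docker client at the Swarm manager instead of a single daemon
export DOCKER_HOST=tcp://10.2.1.4:4000

# The usual commands now operate on the whole cluster
docker info          # lists all nodes with their resources and labels
docker ps            # shows containers across every node
docker run -d nginx  # Swarm schedules the container on a suitable node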

3. Swarm “Worker Nodes”
This is not an official Docker term, but a logical name for a host machine that is only responsible for running containers. A Swarm worker node needs to have the Docker daemon and the Swarm join container running on it.

4. Docker Networking
The latest Docker release (1.9) brings a lot of enhancements in this area.
Docker’s new overlay network driver supports multi-host networking natively, out of the box. This support is accomplished with the help of libnetwork, a built-in VXLAN-based overlay network driver, and Docker’s libkv library. Reading up on overlay networks for building software-defined networks is highly recommended.

5. Registrator
Registrator automatically registers and deregisters services for any Docker container by inspecting containers as they come online. Registrator supports the Service Discovery Managers Consul, etcd and SkyDNS 2. More information about this image can be found here.

DOCKER MACHINE & AZURE

Docker Machine lets you create Docker hosts on your computer, on multiple cloud providers and inside your own data center. However, the current version lacks integration with “Azure Resource Manager”. At the moment of writing, there is therefore no way to configure virtual networks, static IP addresses (for the Swarm & Consul clusters), network security groups or availability sets.
For that reason, I decided to perform a manual installation using Azure Resource Manager.
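
To give an idea of what such a manual setup looks like, here is a minimal sketch using the Azure CLI. The resource group, names, VM size and address are illustrative assumptions, not the exact commands of our deployment, and it assumes the virtual network and subnet already exist:

# Resource group and an availability set for the Consul nodes
az group create --name swarm-rg --location westeurope
az vm availability-set create --resource-group swarm-rg --name consul-avset

# One Consul node VM with a static private IP address, assuming a virtual
# network 'swarm-vnet' with a subnet 'consul-subnet' (10.2.0.0/24) already exists
az vm create --resource-group swarm-rg --name consulnode1 --image UbuntuLTS \
  --size Standard_A0 --availability-set consul-avset \
  --vnet-name swarm-vnet --subnet consul-subnet --private-ip-address 10.2.0.10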

SERVICE DISCOVERY MANAGER – CONSUL –

This solution deserves its own blog post. 🙂 I will zoom in on the various usages of the Consul Service Discovery solution:

  • Between micro-services: consumers can query the registry to find where a desired service is located, by using DNS or HTTP (as illustrated below).
  • Inside a micro-service
  • At Container level: Registrator registers all container instances, grouped per type, in the registry.
  • At Infrastructure level: the registry is used by Swarm (internally) to discover nodes and to enable multi-host (overlay) networking.
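
To illustrate the first usage: assuming a hypothetical service named web has been registered (for example by Registrator), a consumer can locate it through either interface. The addresses below match the Consul cluster set up later in this post; the DNS query is issued from a cluster host, where Consul's DNS interface is bound to the local Docker bridge address:

# HTTP interface: list all healthy instances of the 'web' service as JSON
curl http://10.2.0.10:8500/v1/health/service/web?passing

# DNS interface: resolve the service name to the nodes that provide it
dig @172.17.0.1 web.service.consul SRV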

 

PRODUCTION ENVIRONMENT DIAGRAM

Every production environment is going to have nuances that need to be handled carefully. This reference architecture diagram is an example deployment topology for running a production environment in one datacenter:

[Image: production environment reference architecture diagram]

From the diagram you can see that we have dedicated Swarm Managers and dedicated Service Discovery Managers. This allows for a consistent Swarm node setup that only hosts the Docker containers running the applications (= worker nodes).
DevOps will only interface with the Swarm Managers, and for security it makes sense to lock down access to the Container Nodes to only the ports necessary for hosting the applications and the management ports for the Swarm. This can be accomplished in Azure by using Network Security Groups.
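
A minimal sketch of such a lock-down with the Azure CLI; the resource group, NSG name, ports and the management subnet range are assumptions for illustration:

# Network security group for the worker nodes
az network nsg create --resource-group swarm-rg --name worker-nsg

# Allow the application port (8000 is used for the nginx example later in this post)
az network nsg rule create --resource-group swarm-rg --nsg-name worker-nsg \
  --name allow-app --priority 100 --access Allow --protocol Tcp \
  --destination-port-ranges 8000

# Allow the Docker management port, but only from the Swarm manager subnet
az network nsg rule create --resource-group swarm-rg --nsg-name worker-nsg \
  --name allow-docker-mgmt --priority 110 --access Allow --protocol Tcp \
  --source-address-prefixes 10.2.1.0/24 --destination-port-ranges 2375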

CONSUL CLUSTER

The Consul cluster consists of three Consul server instances/nodes. Each node is an Azure A0 virtual machine and is added to a dedicated Azure availability set.
The cluster becomes active once the expected number of nodes has joined; in our case the cluster is formed when there are 3 active nodes.
We use the Docker Consul image provided by Glider Labs to avoid software installations and to keep each node clean and easily swappable.


# consulnode1 with static IP-address 10.2.0.10
export IP=`hostname -i`
docker run --name consulnode1 -d -h consulnode1 -v /mnt:/data -p $IP:8300:8300 \
-p $IP:8301:8301 -p $IP:8301:8301/udp -p $IP:8302:8302 -p $IP:8302:8302/udp -p $IP:8400:8400 -p $IP:8500:8500 -p 172.17.0.1:53:53/udp \
gliderlabs/consul-server -server -advertise $IP -bootstrap-expect 3

# consulnode2 with static IP-address 10.2.0.11
export IP=`hostname -i`
docker run --name consulnode2 -d -h consulnode2 -v /mnt:/data -p $IP:8300:8300 \
-p $IP:8301:8301 -p $IP:8301:8301/udp -p $IP:8302:8302 -p $IP:8302:8302/udp -p $IP:8400:8400 -p $IP:8500:8500 -p 172.17.0.1:53:53/udp \
gliderlabs/consul-server -server -advertise $IP -join 10.2.0.10

# consulnode3 with static IP-address 10.2.0.12
export IP=`hostname -i`
docker run --name consulnode3 -d -h consulnode3 -v /mnt:/data -p $IP:8300:8300 \
-p $IP:8301:8301 -p $IP:8301:8301/udp -p $IP:8302:8302 -p $IP:8302:8302/udp -p $IP:8400:8400 -p $IP:8500:8500 -p 172.17.0.1:53:53/udp \
gliderlabs/consul-server -server -advertise $IP -join 10.2.0.10
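
Once the third node has joined, leader election completes and the cluster is formed. A quick way to verify this from one of the Consul nodes:

# List the cluster members and their state
docker exec consulnode1 consul members

# Ask the HTTP API which server is the current leader
curl http://10.2.0.10:8500/v1/status/leader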

SWARM CLUSTER

The Swarm cluster consists of three Swarm Manager instances/nodes. Each node is an Azure A0 virtual machine and is added to a dedicated Azure availability set.
We use the Swarm Docker image to avoid software installations and to keep each node clean and easily swappable. For node discovery, Docker Swarm can (and preferably does) use a Service Discovery Manager such as Consul, with its distributed key/value store.

# Create three Swarm Manager instances with each a static IP-address.
# Assign IP-addresses 10.2.1.4 , 10.2.1.5 , 10.2.1.6
export IP=`hostname -i`

# Execute on each Swarm Manager
docker run -d -p 4000:2375 --name swarm_manager swarm manage --replication --advertise $IP:4000 consul://10.2.0.10:8500/swarm
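
With --replication enabled, one manager is elected primary and the other two act as replicas. Because Swarm speaks the standard Docker API, the cluster state can be checked with a plain docker info against any manager:

# Shows the manager role (primary/replica), the discovered nodes and their resources
docker -H tcp://10.2.1.4:4000 info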

 

SWARM “WORKER NODES”

The Swarm “Worker nodes” are the instances/nodes that do the actual work: running the application containers. Each node is an Azure A2 virtual machine (more CPU/memory is needed) and is added to a dedicated Azure availability set.
We register a Swarm “Worker node” by running the Swarm Docker image in join mode and pointing it to our Consul cluster as the Service Discovery Manager.

Important to note here: the Docker daemon listens on the default Unix socket, but should also listen on TCP port 2375 (or port 2376 for encrypted communication).

# Docker Swarm 'Worker nodes'
# For each Worker, the following bash script is executed.
export IP=`hostname -i`

# Start the Docker daemon listening on the default Unix socket and on TCP port 2375.
# (As a plain command this blocks; in practice these options go into the daemon's startup configuration, or the command is run in the background.)
sudo docker daemon -H tcp://$IP:2375 -H unix:///var/run/docker.sock --cluster-store=consul://10.2.0.10:8500 --cluster-advertise=$IP:2375 --dns 8.8.8.8 --dns 8.8.4.4

# Let the Docker Swarm Worker node register itself on the Consul key/value store
docker -H $IP:2375 run -d --name swarm_join swarm join --addr=$IP:2375 consul://10.2.0.10:8500/swarm

# Let's run the registrator Docker image.  This will register/unregister all containers on the Consul key/value store.
docker run --name registrator -d -e DOCKER_HOST=tcp://$IP:2375 gliderlabs/registrator -ip $IP consul://10.2.0.10:8500
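
After these steps, the worker should appear in the Swarm node list and Registrator should start publishing its containers to Consul. Two quick checks, using the addresses from this post:

# The new worker node should now be listed by the Swarm manager
docker -H tcp://10.2.1.4:4000 info

# Services registered by Registrator show up in Consul's service catalog
curl http://10.2.0.10:8500/v1/catalog/services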
 

DOCKER NETWORKING

As mentioned above, Docker’s new overlay network driver (introduced in release 1.9) supports multi-host networking natively, out of the box. This allows development teams to define container-level networks and to abstract away where each service resides.

The overlay network driver requires a key/value store service.  Consul to the rescue again…

The following example creates an overlay network “overlayCottonCandy”. After creation, this network is available on every Swarm node.
We then run two containers (nginx on node 1, alpine on node 2) on the overlay network and let both containers talk to each other.


# Create overlay network
docker network create -d overlay overlayCottonCandy

# Run the containers against the Docker Swarm Cluster

# Run nginx on node 1
export IP=`hostname -i`
docker -H $IP:4000 run -d --name=nginx_test -p 8000:80 --env="constraint:node==*node1" --net=overlayCottonCandy nginx

# Run alpine on node 2 (give it a long-running command so the container stays up)
export IP=`hostname -i`
docker -H $IP:4000 run -d --name=alpine_test --env="constraint:node==*node2" --net=overlayCottonCandy alpine sleep 3600

# Let us try to connect from the alpine container to the nginx container
docker -H $IP:4000 exec -it alpine_test /bin/sh

# Inside the alpine container:
apk --update add curl
curl http://nginx_test

# Notice that the /etc/hosts file contains an entry for the nginx container and the curl statement gets the default NGINX page.
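
To see which containers are attached to the overlay network, and with which overlay IP addresses, the network can be inspected through the Swarm manager:

# The "Containers" section lists every container attached to the overlay network
docker -H $IP:4000 network inspect overlayCottonCandy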

What’s next?

  • More focus on the software architecture and purpose of this project
  • The following topics will be discussed

 
