
Managing Applications Across Multiple Kubernetes Environments with Istio: Part 1

In the following two-part post, we will explore the creation of a GKE cluster, replete with the latest version of Istio, often referred to as IoK (Istio on Kubernetes). We will then deploy, perform integration testing, and promote an application across multiple environments within the cluster.

Application Environment Management

Container orchestration engines, such as Kubernetes, have revolutionized the deployment and management of microservice-based architectures. Combined with a Service Mesh, such as Istio, Kubernetes provides a secure, instrumented, enterprise-grade platform for modern, distributed applications.

One of many challenges with any platform, even one built on Kubernetes, is managing multiple application environments. Whether applications run on bare-metal, virtual machines, or within containers, deploying to and managing multiple application environments increases operational complexity.

As Agile software development practices continue to increase within organizations, the need for multiple, ephemeral, on-demand environments also grows. Traditional environments that were once only composed of Development, Test, and Production, have expanded in enterprises to include a dozen or more environments, to support the many stages of the modern software development lifecycle. Current application environments often include Continuous Integration and Delivery (CI/CD), Sandbox, Development, Integration Testing (QA), User Acceptance Testing (UAT), Staging, Performance, Production, Disaster Recovery (DR), and Hotfix. Each environment requires its own compute, security, networking, configuration, and corresponding dependencies, such as databases and message queues.

Environments and Kubernetes

There are various infrastructure architectural patterns employed by Operations and DevOps teams to provide Kubernetes-based application environments to Development teams. One pattern consists of separate physical Kubernetes clusters. Separate clusters provide a high level of isolation. Isolation offers many advantages, including increased performance and security, the ability to tune each cluster’s compute resources to meet differing SLAs, and ensuring a reduced blast radius when things go terribly wrong. Conversely, separate clusters often result in increased infrastructure costs and operational overhead, and complex deployment strategies. This pattern is often seen in heavily regulated, compliance-driven organizations, where security, auditability, and separation of duties are paramount.

Kube Clusters Diagram

Namespaces

An alternative to separate physical Kubernetes clusters is virtual clusters. Virtual clusters are created using Kubernetes Namespaces. According to Kubernetes documentation, ‘Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces’.

In most enterprises, Operations and DevOps teams deliver a combination of both virtual and physical Kubernetes clusters. For example, lower environments, such as those used for Development, Test, and UAT, often reside on the same physical cluster, each in a separate virtual cluster (namespace). At the same time, environments such as Performance, Staging, Production, and DR, often require the level of isolation only achievable with physical Kubernetes clusters.

In the Cloud, physical clusters may be further isolated and secured using separate cloud accounts. For example, with AWS you might have a Non-Production AWS account and a Production AWS account, both managed by an AWS Organization.

Kube Clusters Diagram

In a multi-environment scenario, a single physical cluster would contain multiple namespaces, into which separate versions of an application or applications are independently deployed, accessed, and tested. Below we see a simple example of a single Kubernetes non-prod cluster on the left, containing multiple versions of different microservices, deployed across three namespaces. You would likely see this type of deployment pattern as applications are deployed, tested, and promoted across lower environments, before being released to Production.

Kube Clusters Diagram

Example Application

To demonstrate the promotion and testing of an application across multiple environments, we will use a simple election-themed microservice, developed for a previous post, Developing Cloud-Native Data-Centric Spring Boot Applications for Pivotal Cloud Foundry. The Spring Boot-based application allows API consumers to create, read, update, and delete candidates, elections, and votes, through an exposed set of resources, accessed via RESTful endpoints.

Source Code

All source code for this post can be found on GitHub. The project’s README file contains a list of the Election microservice’s endpoints. To get started quickly, use one of the two following options (gist).

# clone the official v3.0.0 release for this post
git clone --depth 1 --branch v3.0.0 \
https://github.com/garystafford/spring-postgresql-demo.git \
&& cd spring-postgresql-demo \
&& git checkout -b v3.0.0
# clone the latest version of code (newer than article)
git clone --depth 1 --branch master \
https://github.com/garystafford/spring-postgresql-demo.git \
&& cd spring-postgresql-demo

Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.

This project includes a kubernetes sub-directory, containing all the Kubernetes resource files and scripts necessary to recreate the example shown in the post. The scripts are designed to be easily adapted to a CI/CD DevOps workflow. You will need to modify the script’s variables to match your own environment’s configuration.


Database

The post’s Spring Boot application relies on a PostgreSQL database. In the previous post, ElephantSQL was used to host the PostgreSQL instance. This time, I have used Amazon RDS for PostgreSQL. Amazon RDS for PostgreSQL and ElephantSQL are equivalent choices. For simplicity, you might also consider a containerized version of PostgreSQL, managed as part of your Kubernetes environment.

Ideally, each environment should have a separate database instance. Separate database instances provide better isolation, fine-grained RBAC, easier test data lifecycle management, and improved performance. However, for this post, a single, shared, minimally-sized RDS instance will suffice.

The PostgreSQL database’s sensitive connection information, including database URL, username, and password, are stored as Kubernetes Secrets, one secret for each namespace, and accessed by the Kubernetes Deployment controllers.
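As a minimal sketch of how such a secret might be created (the secret and key names below are purely illustrative, not the exact names used by the project's scripts), kubectl can store the connection details per namespace; repeat for the test and uat namespaces with their own values.

# illustrative example: store database connection details as a secret in the dev namespace
kubectl create secret generic election-db-secret \
  --from-literal=db.url='jdbc:postgresql://<rds-endpoint>:5432/elections' \
  --from-literal=db.username='<username>' \
  --from-literal=db.password='<password>' \
  --namespace dev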


Istio

Although not required, Istio makes the task of managing multiple virtual and physical clusters significantly easier. Following Istio’s online installation instructions, download and install Istio 0.7.1.

To create a Google Kubernetes Engine (GKE) cluster with Istio, you could use gcloud CLI’s container clusters create command, followed by installing Istio manually using Istio’s supplied Kubernetes resource files. This was the method used in the previous post, Deploying and Configuring Istio on Google Kubernetes Engine (GKE).

Alternatively, you could use Istio’s Google Cloud Platform (GCP) Deployment Manager files, along with the gcloud CLI’s deployment-manager deployments create command to create a Kubernetes cluster, replete with Istio, in a single step. Although arguably simpler, the deployment-manager method does not provide the same level of fine-grained control over cluster configuration as the container clusters create method. For this post, the deployment-manager method will suffice.


The latest version of Google Kubernetes Engine, available at the time of this post, is 1.9.6-gke.0. However, installing this version of Kubernetes Engine using Istio’s supplied Deployment Manager Jinja template requires updating the version hardcoded in the istio-cluster.jinja file, which is still set to 1.9.2-gke.1. This has been updated in the next release of Istio.


Another required change concerns the versions of Istio offered as options in the istio-cluster.jinja.schema file. Specifically, the installIstioRelease configuration variable only offers 0.6.0; the template does not include 0.7.1 as an option. Modify the istio-cluster.jinja.schema file to include the choice of 0.7.1. Optionally, I also set 0.7.1 as the default. This change should also be included in the next release of Istio.


There are a limited number of GKE and Istio configuration defaults defined in the istio-cluster.yaml file, all of which can be overridden from the command line.


To optimize the cluster, and keep compute costs to a minimum, I have overridden several of the default configuration values using the properties flag with the gcloud CLI’s deployment-manager deployments create command. The README file provided by Istio explains how to use this feature. Configuration changes include the name of the cluster, the version of Istio (0.7.1), the number of nodes (2), the GCP zone (us-east1-b), and the node instance type (n1-standard-1). I also disabled automatic sidecar injection and chose not to install the Istio sample book application onto the cluster (gist).

# change to match your environment
ISTIO_HOME="/Applications/istio-0.7.1"
GCP_DEPLOYMENT_MANAGER="$ISTIO_HOME/install/gcp/deployment_manager"
GCP_PROJECT="springdemo-199819"
GKE_CLUSTER="election-nonprod-cluster"
GCP_ZONE="us-east1-b"
ISTIO_VER="0.7.1"
NODE_COUNT="1"
INSTANCE_TYPE="n1-standard-1"
# deploy gke istio cluster
gcloud deployment-manager deployments create springdemo-istio-demo-deployment \
--template=$GCP_DEPLOYMENT_MANAGER/istio-cluster.jinja \
--properties "gkeClusterName:$GKE_CLUSTER,installIstioRelease:$ISTIO_VER,"\
"zone:$GCP_ZONE,initialNodeCount:$NODE_COUNT,instanceType:$INSTANCE_TYPE,"\
"enableAutomaticSidecarInjection:false,enableMutualTLS:true,enableBookInfoSample:false"
# get creds for cluster
gcloud container clusters get-credentials $GKE_CLUSTER \
--zone $GCP_ZONE --project $GCP_PROJECT
# required dashboard access
kubectl apply -f ./roles/clusterrolebinding-dashboard.yaml
# use dashboard token to sign into dashboard:
kubectl -n kube-system describe secret kubernetes-dashboard-token

Cluster Provisioning

To provision the GKE cluster and deploy Istio, first modify the variables in the part1-create-gke-cluster.sh file (shown above), then execute the script. The script also retrieves your cluster’s credentials, to enable command line interaction with the cluster using the kubectl CLI.


Once complete, validate the version of Istio by examining Istio’s Docker image versions, using the following command (gist).

kubectl get pods --all-namespaces -o jsonpath="{..image}" | \
tr -s '[[:space:]]' '\n' | sort | uniq -c | \
egrep -oE "\b(docker.io/istio).*\b"

The result should be a list of Istio 0.7.1 Docker images.


The new cluster should be running GKE version 1.9.6-gke.0. This can be confirmed using the following command (gist).

gcloud container clusters describe election-nonprod-cluster | \
egrep currentMasterVersion

Or, from the GCP Cloud Console.


The new GKE cluster should be composed of (2) n1-standard-1 nodes, running in the us-east1-b zone.


As part of the deployment, all of the separate Istio components should be running within the istio-system namespace.


As part of the deployment, an external IP address and a load balancer were provisioned by GCP and associated with the Istio Ingress. GCP’s Deployment Manager should have also created the necessary firewall rules for cluster ingress and egress.
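To capture the Istio Ingress’ external IP address for later use, a kubectl query similar to the following should work (the istio-ingress service name reflects Istio 0.7.x; later releases renamed the ingress service).

# retrieve the external IP address assigned to the Istio Ingress load balancer
kubectl get svc istio-ingress -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'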


Building the Environments

Next, we will create three namespaces, dev, test, and uat, which represent three non-production environments. Each environment consists of a Kubernetes Namespace, Istio Ingress, and Secret. The three environments are deployed using the part2-create-environments.sh script.
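As a minimal sketch of the Namespace portion of that script (the actual part2-create-environments.sh script also creates the Ingress and Secret resources for each environment):

# sketch: create the three non-production namespaces
for ns in dev test uat; do
  kubectl create namespace ${ns}
done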


Deploying Election v1

For this demonstration, we will assume v1 of the Election service has been previously promoted, tested, and released to Production. Hence, we would expect v1 to be deployed to each of the lower environments. Additionally, a new v2 of the Election service has been developed and tested locally using Minikube. It is ready for deployment to the three environments and will undergo integration testing (detailed in Part Two of the post).

If you recall from our GKE/Istio configuration, we chose manual sidecar injection of the Istio proxy. Therefore, all Election service deployment scripts perform an istioctl kube-inject command. To connect to our external Amazon RDS database, the kube-inject command requires the includeIPRanges flag, which contains two cluster configuration values, the cluster’s IPv4 CIDR (clusterIpv4Cidr) and the services’ IPv4 CIDR (servicesIpv4Cidr).

Before deployment, we export the IP ranges as an environment variable, used by the deployment scripts, with the following command: export IP_RANGES=$(sh ./get-cluster-ip-ranges.sh). The get-cluster-ip-ranges.sh script is shown below (gist).

# run this command line:
# export IP_RANGES=$(sh ./get-cluster-ip-ranges.sh)
# capture the clusterIpv4Cidr and servicesIpv4Cidr values
# required for manual sidecar injection with kube-inject
# change to match your environment
GCP_PROJECT="springdemo-199819"
GKE_CLUSTER="election-nonprod-cluster"
GCP_ZONE="us-east1-b"
CLUSTER_IPV4_CIDR=$(gcloud container clusters describe ${GKE_CLUSTER} \
--zone ${GCP_ZONE} --project ${GCP_PROJECT} \
| egrep clusterIpv4Cidr | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{2}\b")
SERVICES_IPV4_CIDR=$(gcloud container clusters describe ${GKE_CLUSTER} \
--zone ${GCP_ZONE} --project ${GCP_PROJECT} \
| egrep servicesIpv4Cidr | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{2}\b")
export IP_RANGES="$CLUSTER_IPV4_CIDR,$SERVICES_IPV4_CIDR"
echo $IP_RANGES

Using this method with manual sidecar injection is discussed in the previous post, Deploying and Configuring Istio on Google Kubernetes Engine (GKE).

To deploy v1 of the Election service to all three namespaces, execute the part3-deploy-v1-all-envs.sh script.
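As a rough sketch of what each deployment script does (the file name and namespace below are illustrative, and the IP_RANGES variable is assumed to have been exported as described above), manual sidecar injection looks similar to the following.

# sketch: inject the Istio sidecar proxy and deploy the Election service to the dev namespace
istioctl kube-inject \
  --includeIPRanges="${IP_RANGES}" \
  -f ./election-deployment-v1.yaml | \
  kubectl apply -n dev -f -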


We should now have two instances of v1 of the Election service running in each of the dev, test, and uat namespaces, for a total of six election-v1 Kubernetes Pods.
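This can be verified with kubectl, assuming the Deployments apply the same app and version labels referenced by the route rules shown later in this post.

# list all v1 Election service pods across the three namespaces
kubectl get pods --all-namespaces -l app=election,version=v1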


HTTP Request Routing

Before deploying additional versions of the Election service in Part Two of this post, we should understand how external HTTP requests will be routed to different versions of the Election service, in multiple namespaces. In the post’s simple example, we have a matrix of three namespaces and two versions of the Election service. That means we need a method to route external traffic to up to six different deployments of the Election service. There are multiple ways to solve this problem, each with its own pros and cons. For this post, I found a combination of DNS and HTTP request rewriting to be the most effective approach.

DNS

First, to route external HTTP requests to the correct namespace, we will use subdomains. Using my current DNS management solution, Azure DNS, I created three new A records for my registered domain, voter-demo.com. There is one A record for each namespace: api.dev, api.test, and api.uat.
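With the Azure CLI, creating one of these A records might look like the following; the resource group name is a placeholder, and <ingress-ip> is the external IP address of the Istio Ingress load balancer.

# illustrative example: create the api.dev A record in the voter-demo.com zone
az network dns record-set a add-record \
  --resource-group dns-resource-group \
  --zone-name voter-demo.com \
  --record-set-name api.dev \
  --ipv4-address <ingress-ip>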


All three subdomains should resolve to the single external IP address assigned to the cluster’s load balancer.


As part of the environment creation, the script deployed an Istio Ingress into each environment. The Ingress accepts traffic based on a match to the request URL (gist).

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  labels:
    name: dev-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: api.dev.voter-demo.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: election
          servicePort: 8080

The istio-ingress service load balancer, running in the istio-system namespace, routes inbound external traffic, based on the Request URL, to the Istio Ingress in the appropriate namespace.


The Istio Ingress in the namespace then directs the traffic to one of the Kubernetes Pods, containing the Election service and the Istio sidecar proxy.


HTTP Rewrite

To direct the HTTP request to v1 or v2 of the Election service, an Istio Route Rule is used. As part of the environment creation, along with the Namespace and Ingress resources, we also deployed Istio Route Rules to each environment. The route rules examine the HTTP request URL for a /v1/ or /v2/ sub-collection resource. If a rule finds its sub-collection resource, it performs an HTTPRewrite, removing the sub-collection resource from the HTTP request. The Route Rule then directs the HTTP request to the appropriate version of the Election service, v1 or v2 (gist).

According to Istio, ‘if there are multiple registered instances with the specified tag(s), they will be routed to based on the load balancing policy (algorithm) configured for the service (round-robin by default).’ We are using the default load balancing algorithm to distribute requests across multiple copies of each Election service.

# kubectl apply -f ./routerules/routerule-election-v1.yaml -n dev
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: election-v1
spec:
  destination:
    name: election
  match:
    request:
      headers:
        uri:
          prefix: /v1/
  rewrite:
    uri: /
  route:
  - labels:
      app: election
      version: v1
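
The corresponding rule for v2, not shown in the gist above, follows the same pattern; a minimal sketch, applied with kubectl, might look like this.

# sketch: the analogous route rule for v2 of the Election service
kubectl apply -n dev -f - <<EOF
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: election-v2
spec:
  destination:
    name: election
  match:
    request:
      headers:
        uri:
          prefix: /v2/
  rewrite:
    uri: /
  route:
  - labels:
      app: election
      version: v2
EOF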

The final external HTTP request routing for the Election service in the Non-Production GKE cluster is shown on the left of the diagram below. Every Election service Pod also contains an Istio sidecar proxy instance.

Kube Clusters Diagram

Below are some examples of HTTP GET requests that would be successfully routed to our Election service, using the above-described routing strategy (gist).

# details of an election, id 5, requested from v1 elections in dev
curl http://api.dev.voter-demo.com/v1/elections/5
# list of candidates, last name Obama, requested from v2 of elections in test
curl http://api.test.voter-demo.com/v2/candidates/search/findByLastName?lastName=Obama
# process start time metric, requested from v2 of elections in uat
curl http://api.uat.voter-demo.com/v2/actuator/metrics/process.start.time
# vote summary, requested from v1 of elections in production
curl http://api.voter-demo.com/v1/vote-totals/summary/2012%20Presidential%20Election

Part Two

In Part One of this post, we created the Kubernetes cluster on the Google Cloud Platform, installed Istio, provisioned a PostgreSQL database, and configured DNS for routing. Under the assumption that v1 of the Election microservice had already been released to Production, we deployed v1 to each of the three namespaces.

In Part Two of this post, we will learn how to utilize the sophisticated API testing capabilities of Postman and Newman to ensure v2 is ready for UAT and release to Production. We will deploy and perform integration testing of the new v2 of the Election microservice, locally, on Kubernetes Minikube. Once we are confident v2 is functioning as intended, we will promote and test v2 across the dev, test, and uat namespaces.

All opinions expressed in this post are my own, and not necessarily the views of my current or past employers, or their clients.



Distributed Service Configuration with Consul, Spring Cloud, and Docker

Consul UI - Nodes

Introduction

In this post, we will explore the use of HashiCorp Consul for distributed configuration of containerized Spring Boot services, deployed to a Docker swarm cluster. In the first half of the post, we will provision a series of virtual machines, build a Docker swarm on top of those VMs, and install Consul and Registrator on each swarm host. In the second half of the post, we will configure and deploy multiple instances of a containerized Spring Boot service, backed by MongoDB, to the swarm cluster, using Docker Compose.

The final objective of this post is to have all the deployed services registered with Consul, via Registrator, and the service’s runtime configuration provided dynamically by Consul.

Objectives

  1. Provision a series of virtual machine hosts, using Docker Machine and Oracle VirtualBox
  2. Provide distributed and highly available cluster management and service orchestration, using Docker swarm mode
  3. Provide distributed and highly available service discovery, health checking, and a hierarchical key/value store, using HashiCorp Consul
  4. Provide service discovery and automatic registration of containerized services to Consul, using Registrator, Glider Labs’ service registry bridge for Docker
  5. Provide distributed configuration for containerized Spring Boot services using Consul and Pivotal Spring Cloud Consul Config
  6. Deploy multiple instances of a Spring Boot service, backed by MongoDB, to the swarm cluster, using Docker Compose version 3

Technologies

  • Docker
  • Docker Compose (v3)
  • Docker Hub
  • Docker Machine
  • Docker swarm mode
  • Docker Swarm Visualizer (Mano Marks)
  • Glider Labs Registrator
  • Gradle
  • HashiCorp Consul
  • Java
  • MongoDB
  • Oracle VirtualBox VM Manager
  • Spring Boot
  • Spring Cloud Consul Config
  • Travis CI

Code Sources

All code in this post exists in two GitHub repositories. I have labeled each code snippet in the post with the corresponding file in the repositories. The first repository, microservice-docker-demo-consul, contains all the code necessary for provisioning the VMs and building the Docker swarm and Consul clusters. The repo also contains code for deploying Swarm Visualizer and Registrator. Make sure you clone the swarm-mode branch.

git clone --depth 1 --branch swarm-mode \
  https://github.com/garystafford/microservice-docker-demo-consul.git
cd microservice-docker-demo-consul

The second repository, microservice-docker-demo-widget, contains all the code necessary for configuring Consul and deploying the Widget service stack. Make sure you clone the consul branch.

git clone --depth 1 --branch consul \
  https://github.com/garystafford/microservice-docker-demo-widget.git
cd microservice-docker-demo-widget

Docker Versions

With the Docker toolset evolving so quickly, features frequently change or become outmoded. At the time of this post, I am running the following versions on my Mac.

  • Docker Engine version 1.13.1
  • Boot2Docker version 1.13.1
  • Docker Compose version 1.11.2
  • Docker Machine version 0.10.0
  • HashiCorp Consul version 0.7.5

Provisioning VM Hosts

First, we will provision a series of six virtual machines (aka machines, VMs, or hosts), using Docker Machine and Oracle VirtualBox.


By switching Docker Machine’s driver, you can easily switch from VirtualBox to other providers, such as VMware, AWS, or GCE. I have explicitly set several VirtualBox driver options, using their default values, to make the configuration of the VirtualBox VMs being created more explicit.

vms=( "manager1" "manager2" "manager3"
      "worker1" "worker2" "worker3" )

for vm in ${vms[@]}
do
  docker-machine create \
    --driver virtualbox \
    --virtualbox-memory "1024" \
    --virtualbox-cpu-count "1" \
    --virtualbox-disk-size "20000" \
    ${vm}
done

Using the docker-machine ls command, we should observe the resulting series of VMs looks similar to the following. Note each of the six VMs has a machine name and an IP address in the range of 192.168.99.1/24.

$ docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
manager1   -        virtualbox   Running   tcp://192.168.99.109:2376           v1.13.1
manager2   -        virtualbox   Running   tcp://192.168.99.110:2376           v1.13.1
manager3   -        virtualbox   Running   tcp://192.168.99.111:2376           v1.13.1
worker1    -        virtualbox   Running   tcp://192.168.99.112:2376           v1.13.1
worker2    -        virtualbox   Running   tcp://192.168.99.113:2376           v1.13.1
worker3    -        virtualbox   Running   tcp://192.168.99.114:2376           v1.13.1

We may also view the VMs using the Oracle VM VirtualBox Manager application.

Oracle VirtualBox VM Manager

Docker Swarm Mode

Next, we will provide, amongst other capabilities, cluster management and orchestration, using Docker swarm mode. It is important to understand that the relatively new Docker swarm mode is not the same as the legacy Docker Swarm. Legacy Docker Swarm was superseded by Docker’s integrated swarm mode, with the release of Docker v1.12.0, in July 2016.

We will create a swarm (a cluster of Docker Engines or nodes), consisting of three Manager nodes and three Worker nodes, on the six VirtualBox VMs. Using this configuration, the swarm will be distributed and highly available, able to suffer the loss of one of the Manager nodes, without failing.


Manager Nodes

First, we will create the initial Docker swarm Manager node.

SWARM_MANAGER_IP=$(docker-machine ip manager1)
echo ${SWARM_MANAGER_IP}

docker-machine ssh manager1 \
  "docker swarm init \
  --advertise-addr ${SWARM_MANAGER_IP}"

This initial Manager node advertises its IP address to future swarm members.

Next, we will create two additional swarm Manager nodes, which will join the initial Manager node, by using the initial Manager node’s advertised IP address. The three Manager nodes will then elect a single Leader to conduct orchestration tasks. According to Docker, Manager nodes implement the Raft Consensus Algorithm to manage the global cluster state.

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

docker-machine env manager1
eval $(docker-machine env manager1)

MANAGER_SWARM_JOIN=$(docker-machine ssh ${vms[0]} "docker swarm join-token manager")
MANAGER_SWARM_JOIN=$(echo ${MANAGER_SWARM_JOIN} | grep -E "(docker).*(2377)" -o)
MANAGER_SWARM_JOIN=$(echo ${MANAGER_SWARM_JOIN//\\/''})
echo ${MANAGER_SWARM_JOIN}

for vm in ${vms[@]:1:2}
do
  docker-machine ssh ${vm} ${MANAGER_SWARM_JOIN}
done

A quick note on the string manipulation of the MANAGER_SWARM_JOIN variable, above. Running the docker swarm join-token manager command outputs something similar to the following.

To add a manager to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-1s53oajsj19lgniar3x1gtz3z9x0iwwumlew0h9ism5alt2iic-1qxo1nx24pyd0pg61hr6pp47t \
    192.168.99.109:2377

Using a bit of string manipulation, the resulting value of the MANAGER_SWARM_JOIN variable will be similar to the following command. We then ssh into each host and execute this command, one for the Manager nodes and another similar command, for the Worker nodes.

docker swarm join --token SWMTKN-1-1s53oajsj19lgniar3x1gtz3z9x0iwwumlew0h9ism5alt2iic-1qxo1nx24pyd0pg61hr6pp47t 192.168.99.109:2377

Worker Nodes

Next, we will create three swarm Worker nodes, using a similar method. The three Worker nodes will join the three swarm Manager nodes, as part of the swarm cluster. The main difference, according to Docker, is that a Worker node’s “sole purpose is to execute containers. Worker nodes don’t participate in the Raft distributed state, make scheduling decisions, or serve the swarm mode HTTP API.”

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

WORKER_SWARM_JOIN=$(docker-machine ssh manager1 "docker swarm join-token worker")
WORKER_SWARM_JOIN=$(echo ${WORKER_SWARM_JOIN} | grep -E "(docker).*(2377)" -o)
WORKER_SWARM_JOIN=$(echo ${WORKER_SWARM_JOIN//\\/''})
echo ${WORKER_SWARM_JOIN}

for vm in ${vms[@]:3:3}
do
  docker-machine ssh ${vm} ${WORKER_SWARM_JOIN}
done

Using the docker node ls command, we should observe the resulting Docker swarm cluster looks similar to the following:

$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
u75gflqvu7blmt5mcr2dcp4g5 *  manager1  Ready   Active        Leader
0b7pkek76dzeedtqvvjssdt2s    manager2  Ready   Active        Reachable
s4mmra8qdgwukjsfg007rvbge    manager3  Ready   Active        Reachable
n89upik5v2xrjjaeuy9d4jybl    worker1   Ready   Active
nsy55qzavzxv7xmanraijdw4i    worker2   Ready   Active
hhn1l3qhej0ajmj85gp8qhpai    worker3   Ready   Active

Note the three swarm Manager nodes, three swarm Worker nodes, and the Manager, which was elected Leader. The other two Manager nodes are marked as Reachable. The asterisk indicates manager1 is the node from which the command was issued (the currently active Docker Machine).

I have also deployed Mano Marks’ Docker Swarm Visualizer to each of the swarm cluster’s three Manager nodes. This tool is described as a visualizer for Docker Swarm Mode using the Docker Remote API, Node.JS, and D3. It provides a great visualization of the swarm cluster and its running components.

docker service create \
  --name swarm-visualizer \
  --publish 5001:8080/tcp \
  --constraint node.role==manager \
  --mode global \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  manomarks/visualizer:latest

The Swarm Visualizer should be available on any of the Manager nodes’ IP addresses, on port 5001. We constrained the Visualizer to the Manager nodes, by using the node.role==manager constraint.

Docker Swarm Visualizer

HashiCorp Consul

Next, we will install HashiCorp Consul onto our swarm cluster of VirtualBox hosts. Consul will provide us with service discovery, health checking, and a hierarchical key/value store. Consul will be installed, such that we end up with six Consul Agents. Agents can run as Servers or Clients. Similar to Docker swarm mode, we will install three Consul servers and three Consul clients, all in one Consul datacenter (our set of six local VMs). This clustered configuration will ensure Consul is distributed and highly available, able to suffer the loss of one of the Consul server instances, without failing.


Consul Servers

Again, similar to Docker swarm mode, we will install the initial Consul server. Both the Consul servers and clients run inside Docker containers, one per swarm host.

consul_server="consul-server1"
docker-machine env manager1
eval $(docker-machine env manager1)

docker run -d \
  --net=host \
  --hostname ${consul_server} \
  --name ${consul_server} \
  --env "SERVICE_IGNORE=true" \
  --env "CONSUL_CLIENT_INTERFACE=eth0" \
  --env "CONSUL_BIND_INTERFACE=eth1" \
  --volume consul_data:/consul/data \
  --publish 8500:8500 \
  consul:latest \
  consul agent -server -ui -bootstrap-expect=3 -client=0.0.0.0 -advertise=${SWARM_MANAGER_IP} -data-dir="/consul/data"

The first Consul server advertises itself on its host’s IP address. The -bootstrap-expect=3 option instructs Consul to wait until three Consul servers are available before bootstrapping the cluster. This option also allows an initial Leader to be elected automatically. The three Consul servers form a consensus quorum, using the Raft consensus algorithm.

All Consul server and Consul client Docker containers will have a data directory (/consul/data), mapped to a volume (consul_data) on their corresponding VM hosts.

Consul provides a basic browser-based user interface. By using the -ui option, each Consul server will have the UI available on port 8500.

Next, we will create two additional Consul Server instances, which will join the initial Consul server, by using the first Consul server’s advertised IP address.

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

consul_servers=( "consul-server2" "consul-server3" )

i=0
for vm in ${vms[@]:1:2}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})

  docker run -d \
    --net=host \
    --hostname ${consul_servers[i]} \
    --name ${consul_servers[i]} \
    --env "SERVICE_IGNORE=true" \
    --env "CONSUL_CLIENT_INTERFACE=eth0" \
    --env "CONSUL_BIND_INTERFACE=eth1" \
    --volume consul_data:/consul/data \
    --publish 8500:8500 \
    consul:latest \
    consul agent -server -ui -client=0.0.0.0 -advertise='{{ GetInterfaceIP "eth1" }}' -retry-join=${SWARM_MANAGER_IP} -data-dir="/consul/data"
  let "i++"
done
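
At this point, the three Consul servers should have formed a quorum and elected a Leader. One quick way to confirm this, assuming port 8500 is reachable on the first manager host, is Consul’s HTTP status API.

# confirm a Leader has been elected and list the Raft peers
curl ${SWARM_MANAGER_IP}:8500/v1/status/leader
curl ${SWARM_MANAGER_IP}:8500/v1/status/peers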

Consul Clients

Next, we will install Consul clients on the three swarm Worker nodes, using a method similar to the servers. According to Consul, “the client is relatively stateless. The only background activity a client performs is taking part in the LAN gossip pool.” The lack of the -server option indicates the agent will run as a client.

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

consul_clients=( "consul-client1" "consul-client2" "consul-client3" )

i=0
for vm in ${vms[@]:3:3}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})
  # remove any stray containers on the Worker host before installing the Consul client
  docker rm -f $(docker ps -a -q)

  docker run -d \
    --net=host \
    --hostname ${consul_clients[i]} \
    --name ${consul_clients[i]} \
    --env "SERVICE_IGNORE=true" \
    --env "CONSUL_CLIENT_INTERFACE=eth0" \
    --env "CONSUL_BIND_INTERFACE=eth1" \
    --volume consul_data:/consul/data \
    consul:latest \
    consul agent -client=0.0.0.0 -advertise='{{ GetInterfaceIP "eth1" }}' -retry-join=${SWARM_MANAGER_IP} -data-dir="/consul/data"
  let "i++"
done

From inside the consul-server1 Docker container, on the manager1 host, using the consul members command, we should observe a current list of cluster members along with their current state.

$ docker exec -it consul-server1 consul members
Node            Address              Status  Type    Build  Protocol  DC
consul-client1  192.168.99.112:8301  alive   client  0.7.5  2         dc1
consul-client2  192.168.99.113:8301  alive   client  0.7.5  2         dc1
consul-client3  192.168.99.114:8301  alive   client  0.7.5  2         dc1
consul-server1  192.168.99.109:8301  alive   server  0.7.5  2         dc1
consul-server2  192.168.99.110:8301  alive   server  0.7.5  2         dc1
consul-server3  192.168.99.111:8301  alive   server  0.7.5  2         dc1

Note the three Consul servers, the three Consul clients, and the single datacenter, dc1. Also, note the address of each agent matches the IP address of its swarm host.

To further validate Consul is installed correctly, access Consul’s web-based UI using the IP address of any of the three Consul servers’ swarm hosts, on port 8500. Shown below is the initial view of Consul, prior to services being registered. Note the Consul instances’ names. Also, notice the IP address of each Consul instance corresponds to the IP address of the swarm host on which it resides.

Consul UI - Nodes

Service Container Registration

Having provisioned the VMs, and built the Docker swarm cluster and Consul cluster, there is one final step to prepare our environment for the deployment of containerized services. We want to provide automatic service registration with Consul, using Glider Labs’ Registrator.

Registrator will be installed on each host, within a Docker container. According to Glider Labs’ website, “Registrator will automatically register and deregister services for any Docker container, as the containers come online.”

We will install Registrator on only five of our six hosts. We will not install it on manager1. Since this host is already serving as both the Docker swarm Leader and Consul Leader, we will not be installing any additional service containers on that host. This is a personal choice, not a requirement.

vms=( "manager1" "manager2" "manager3"
      "worker1" "worker2" "worker3" )

for vm in ${vms[@]:1}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})

  HOST_IP=$(docker-machine ip ${vm})
  # echo ${HOST_IP}

  docker run -d \
    --name=registrator \
    --net=host \
    --volume=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      -internal consul://${HOST_IP:-localhost}:8500
done

Multiple Service Discovery Options

You might be wondering why we are worried about service discovery and registration with Consul, using Registrator, considering Docker swarm mode already has service discovery. Since we deploy our services using swarm mode, we are already relying on swarm mode for service discovery between them. I chose to include Registrator in this example to demonstrate an alternative method of service discovery, which can be used with other tools, such as Consul Template, for dynamic load-balancer templating.

We actually have a third option for automatic service registration, Spring Cloud Consul Discovery. I chose not to use Spring Cloud Consul Discovery to register the Widget service in this post. Spring Cloud Consul Discovery would have automatically registered the Spring Boot service with Consul. The actual Widget service stack contains MongoDB, as well as other non-Spring Boot service components, which I removed for this post, such as the NGINX load balancer. Using Registrator, all the containerized services, not only the Spring Boot services, are automatically registered.

You will note in the Widget source code, I commented out the @EnableDiscoveryClient annotation on the WidgetApplication class. If you want to use Spring Cloud Consul Discovery, simply uncomment this annotation.

Distributed Configuration

We are almost ready to deploy some services. For this post’s demonstration, we will deploy the Widget service stack. The Widget service stack is composed of a simple Spring Boot service, backed by MongoDB; it is easily deployed as a containerized application. I often use the service stack for testing and training.

Widgets represent inanimate objects that users purchase with points. Widgets have particular physical characteristics, such as product id, name, color, size, and price. The inventory of widgets is stored in the widgets MongoDB database.

Hierarchical Key/Value Store

Before we can deploy the Widget service, we need to store the Widget service’s Spring Profiles in Consul’s hierarchical key/value store. The Widget service’s profiles are sets of configuration the service needs to run in different environments, such as Development, QA, UAT, and Production. Profiles include configuration, such as port assignments, database connection information, and logging and security settings. The service’s active profile will be read by the service at startup, from Consul, and applied at runtime.

As opposed to the traditional Java properties key/value format, the Widget service uses YAML to specify its hierarchical configuration data. Using Spring Cloud Consul Config, an alternative to Spring Cloud Config Server and Client, we will store the service’s Spring profiles in Consul as blobs of YAML.

There are multiple ways to store configuration in Consul. Storing the Widget service’s profiles as YAML was the quickest method of migrating the existing application.yml file’s multiple profiles to Consul. I am not insisting YAML is the most effective method of storing configuration in Consul’s k/v store; that depends on the application’s requirements.

Using the Consul KV HTTP API and the HTTP PUT method, we will place each profile, as a YAML blob, into the appropriate data key in Consul. For convenience, I have separated the Widget service’s three existing Spring profiles into individual YAML files. We will load each YAML file’s contents into Consul, using curl.

Below is the Widget service’s default Spring profile. The default profile is activated if no other profiles are explicitly active when the application context starts.

endpoints:
  enabled: true
  sensitive: false
info:
  java:
    source: "${java.version}"
    target: "${java.version}"
logging:
  level:
    root: DEBUG
management:
  security:
    enabled: false
  info:
    build:
      enabled: true
    git:
      mode: full
server:
  port: 8030
spring:
  data:
    mongodb:
      database: widgets
      host: localhost
      port: 27017

Below is the Widget service’s docker-local Spring profile. This is the profile we will use when we deploy the Widget service to our swarm cluster. The docker-local profile overrides two properties of the default profile — the name of our MongoDB host and the logging level. All other configuration will come from the default profile.

logging:
  level:
    root: INFO
spring:
  data:
    mongodb:
      host: mongodb

To load the profiles into Consul, from the root of the Widget local git repository, we will execute curl commands. We will use the Consul cluster Leader’s IP address for our HTTP PUT methods.

docker-machine env manager1
eval $(docker-machine env manager1)

CONSUL_SERVER=$(docker-machine ip $(docker node ls | grep Leader | awk '{print $3}'))

# default profile
KEY="config/widget-service/data"
VALUE="consul-configs/default.yaml"
curl -X PUT --data-binary @${VALUE} \
  -H "Content-type: text/x-yaml" \
  ${CONSUL_SERVER:-localhost}:8500/v1/kv/${KEY}

# docker-local profile
KEY="config/widget-service/docker-local/data"
VALUE="consul-configs/docker-local.yaml"
curl -X PUT --data-binary @${VALUE} \
  -H "Content-type: text/x-yaml" \
  ${CONSUL_SERVER:-localhost}:8500/v1/kv/${KEY}

# docker-production profile
KEY="config/widget-service/docker-production/data"
VALUE="consul-configs/docker-production.yaml"
curl -X PUT --data-binary @${VALUE} \
  -H "Content-type: text/x-yaml" \
  ${CONSUL_SERVER:-localhost}:8500/v1/kv/${KEY}

Returning to the Consul UI, we should now observe three Spring profiles, in the appropriate data keys, listed under the Key/Value tab. The default Spring profile YAML blob, the value, will be assigned to the config/widget-service/data key.

Consul UI - Docker Default Spring Profile

The docker-local profile will be assigned to the config/widget-service,docker-local/data key. The keys follow default spring.cloud.consul.config conventions.

Consul UI - Docker Local Spring Profile
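
The stored values can also be spot-checked from the command line, using the Consul KV API’s raw output option; here, retrieving the docker-local profile loaded above.

# retrieve the raw YAML blob stored under the docker-local profile's key
curl "${CONSUL_SERVER:-localhost}:8500/v1/kv/config/widget-service/docker-local/data?raw"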

Spring Cloud Consul Config

In order for our Spring Boot service to connect to Consul and load the requested active Spring Profile, we need to add a dependency on Spring Cloud Consul Config to the build.gradle file.

dependencies {
  compile group: 'org.springframework.cloud', name: 'spring-cloud-starter-consul-all'
  ...
}

Next, we need to configure the bootstrap.yml file, so the service can connect to Consul and properly read the profile. We must also properly set the CONSUL_SERVER environment variable. This value is the hostname or IP address of the Consul server instance, which the Widget service instances will contact to retrieve their configuration.

spring:
  application:
    name: widget-service
  cloud:
    consul:
      host: ${CONSUL_SERVER:localhost}
      port: 8500
      config:
        fail-fast: true
        format: yaml

Deployment Using Docker Compose

We are almost ready to deploy the Widget service instances and the MongoDB instance (also considered a ‘service’ by Docker), in Docker containers, to the Docker swarm cluster. The Docker Compose file is written using version 3 of the Docker Compose specification. It takes advantage of some of the specification’s new features.

version: '3.0'

services:
  widget:
    image: garystafford/microservice-docker-demo-widget:latest
    depends_on:
    - mongodb
    hostname: widget
    ports:
    - 8030:8030/tcp
    networks:
    - widget_overlay_net
    deploy:
      mode: global
      placement:
        constraints: [node.role == worker]
    environment:
    - "CONSUL_SERVER_URL=${CONSUL_SERVER}"
    - "SERVICE_NAME=widget-service"
    - "SERVICE_TAGS=service"
    command: "java -Dspring.profiles.active=${WIDGET_PROFILE} -Djava.security.egd=file:/dev/./urandom -jar widget/widget-service.jar"

  mongodb:
    image: mongo:latest
    command:
    - --smallfiles
    hostname: mongodb
    ports:
    - 27017:27017/tcp
    networks:
    - widget_overlay_net
    volumes:
    - widget_data_vol:/data/db
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == worker]
    environment:
    - "SERVICE_NAME=mongodb"
    - "SERVICE_TAGS=database"

networks:
  widget_overlay_net:
    external: true

volumes:
  widget_data_vol:
    external: true

External Container Volumes and Network

Note the external volume, widget_data_vol, which will be mounted into the MongoDB container at the /data/db directory. The volume must be created on each host in the swarm cluster that may run the MongoDB instance.

Also, note the external overlay network, widget_overlay_net, which will be used by all the service containers in the service stack to communicate with each other. These must be created before deploying our services.

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

for vm in ${vms[@]}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})
  docker volume create --name=widget_data_vol
  echo "Volume created: ${vm}..."
done

We should be able to view the new volume on one of the swarm Worker hosts, using the docker volume ls command.

$ docker volume ls
DRIVER              VOLUME NAME
local               widget_data_vol

The network is created once, for the whole swarm cluster.

docker-machine env manager1
eval $(docker-machine env manager1)

docker network create \
  --driver overlay \
  --subnet=10.0.0.0/16 \
  --ip-range=10.0.11.0/24 \
  --opt encrypted \
  --attachable=true \
  widget_overlay_net

We can view the new overlay network using the docker network ls command.

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
03bcf76d3cc4        bridge              bridge              local
533285df0ce8        docker_gwbridge     bridge              local
1f627d848737        host                host                local
6wdtuhwpuy4f        ingress             overlay             swarm
b8a1f277067f        none                null                local
4hy77vlnpkkt        widget_overlay_net  overlay             swarm

Deploying the Services

With the profiles loaded into Consul, and the overlay network and data volumes created on the hosts, we can deploy the Widget service instances and MongoDB service instance. We will assign the Consul cluster Leader’s IP address as the CONSUL_SERVER variable. We will assign the Spring profile, docker-local, to the WIDGET_PROFILE variable. Both these variables are used by the Widget service’s Docker Compose file.

docker-machine env manager1
eval $(docker-machine env manager1)

export CONSUL_SERVER=$(docker-machine ip $(docker node ls | grep Leader | awk '{print $3}'))
export WIDGET_PROFILE=docker-local

docker stack deploy --compose-file=docker-compose.yml widget_stack

The Docker images, used to instantiate the service’s Docker containers, must be pulled from Docker Hub, by each swarm host. Consequently, the initial deployment process can take up to several minutes, depending on your Internet connection.

After deployment, using the docker stack ls command, we should observe that the widget_stack stack is deployed to the swarm cluster with two services.

$ docker stack ls
NAME          SERVICES
widget_stack  2

After deployment, using the docker stack ps widget_stack command, we should observe that all the expected services in the widget_stack stack are running. Note this command also shows us where the services are running.

$ docker stack ps widget_stack
ID            NAME                                           IMAGE                                                NODE     DESIRED STATE  CURRENT STATE          ERROR  PORTS
qic4j1pl6n4l  widget_stack_widget.hhn1l3qhej0ajmj85gp8qhpai  garystafford/microservice-docker-demo-widget:latest  worker3  Running        Running 4 minutes ago
zrbnyoikncja  widget_stack_widget.nsy55qzavzxv7xmanraijdw4i  garystafford/microservice-docker-demo-widget:latest  worker2  Running        Running 4 minutes ago
81z3ejqietqf  widget_stack_widget.n89upik5v2xrjjaeuy9d4jybl  garystafford/microservice-docker-demo-widget:latest  worker1  Running        Running 4 minutes ago
nx8dxlib3wyk  widget_stack_mongodb.1                         mongo:latest                                         worker1  Running        Running 4 minutes ago

Using the docker service ls command, we should observe that all the expected service instances (replicas) are running, including the widget_stack’s services and the three instances of the swarm-visualizer service.

$ docker service ls
ID            NAME                  MODE        REPLICAS  IMAGE
almmuqqe9v55  swarm-visualizer      global      3/3       manomarks/visualizer:latest
i9my74tl536n  widget_stack_widget   global      0/3       garystafford/microservice-docker-demo-widget:latest
ju2t0yjv9ily  widget_stack_mongodb  replicated  1/1       mongo:latest

Since it may take a few moments for the widget_stack’s services to come up, you may need to re-run the command until you see all expected replicas running.

$ docker service ls
ID            NAME                  MODE        REPLICAS  IMAGE
almmuqqe9v55  swarm-visualizer      global      3/3       manomarks/visualizer:latest
i9my74tl536n  widget_stack_widget   global      3/3       garystafford/microservice-docker-demo-widget:latest
ju2t0yjv9ily  widget_stack_mongodb  replicated  1/1       mongo:latest

If you see 3 of 3 replicas of the Widget service running, this is a good sign everything is working! We can confirm this by checking back in with the Consul UI. We should see the three Widget service instances and the single instance of MongoDB, all registered with Consul by Registrator. For clarity, we purposely did not register the Swarm Visualizer instances with Consul, using Registrator’s "SERVICE_IGNORE=true" environment variable. Registrator also does not register itself with Consul.

Consul UI - Services
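
Without opening the UI, the registered services can also be listed with Consul’s catalog API, using the Leader’s IP address captured earlier.

# list all services currently registered with Consul
curl "${CONSUL_SERVER:-localhost}:8500/v1/catalog/services"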

We can also view the Widget stack’s services, by revisiting the Swarm Visualizer, on any of the Docker swarm cluster Manager’s IP address, on port 5001.

Docker Swarm Visualizer - Deployed Services

Spring Boot Actuator

The most reliable way to confirm that the Widget service instances did indeed load their configuration from Consul is to check the Spring Boot Actuator Environment endpoint of any of the Widget instances. As shown below, all configuration is loaded from the default profile, except for two configuration items, which override the default profile values and are loaded from the active docker-local Spring profile.

Spring Actuator Environment Endpoint
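
The same information can be pulled with curl, assuming the service’s published port of 8030 and Spring Boot 1.x’s /env actuator path (security is disabled in the default profile shown earlier).

# inspect the configuration loaded by a Widget instance, via the swarm routing mesh
curl http://$(docker-machine ip worker1):8030/env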

Consul Endpoint

There is yet another way to confirm Consul is working properly with our services, without accessing the Consul UI. This ability is provided by Spring Boot Actuator’s Consul endpoint. If the Spring Boot service has a dependency on Spring Cloud Consul Config, this endpoint will be available. It displays useful information about the Consul cluster and the services which have been registered with Consul.

Spring Actuator Consul Endpoint

Cleaning Up the Swarm

If you want to start over again, without destroying the Docker swarm cluster, you may use the following commands to delete all the stacks, services, stray containers, and unused images, networks, and volumes. The Docker swarm cluster and all Docker images pulled to the VMs will be left intact.

# remove all stacks and services
docker-machine env manager1
eval $(docker-machine env manager1)
for stack in $(docker stack ls | awk '{print $1}'); do docker stack rm ${stack}; done
for service in $(docker service ls | awk '{print $1}'); do docker service rm ${service}; done

# remove all containers, networks, and volumes
vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

for vm in ${vms[@]}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})
  docker system prune -f
  docker stop $(docker ps -a -q)
  docker rm -f $(docker ps -a -q)
done

Conclusion

We have barely scratched the surface of the features and capabilities of Docker swarm mode or Consul. However, the post did demonstrate several key concepts, critical to configuring, deploying, and managing a modern, distributed, containerized service application platform. These concepts included cluster management, service discovery, service orchestration, distributed configuration, hierarchical key/value stores, and distributed and highly-available systems.

This post’s example was designed for demonstration purposes only and meant to simulate a typical Development or Test environment. Although the swarm cluster, Consul cluster, and the Widget service instances were deployed in a distributed and highly available configuration, this post’s example is far from being production-ready. Some things to consider, if you were to make this example more production-like, include:

  • The MongoDB database is a single point of failure. It should be deployed in a sharded and clustered configuration.
  • The swarm nodes were deployed to a single datacenter. For redundancy, the nodes should be spread across multiple physical hypervisors, separate availability zones and/or geographically separate datacenters (regions).
  • The example needs centralized logging, monitoring, and alerting, to better understand and react to how the swarm, Docker containers, and services are performing.
  • Most importantly, we made no attempt to secure the services, containers, data, network, or hosts. Security is a critical component for moving this example to Production.

 

All opinions in this post are my own and not necessarily the views of my current employer or their clients.

