
Streaming Docker Logs to Elastic Stack (ELK) using Fluentd

Kibana

Introduction

Fluentd and Docker’s native logging driver for Fluentd make it easy to stream Docker logs from multiple running containers to the Elastic Stack. In this post, we will use Fluentd to stream Docker logs from multiple instances of a Dockerized Spring Boot RESTful service and MongoDB to the Elastic Stack (ELK).

log_message_flow_notype

In a recent post, Distributed Service Configuration with Consul, Spring Cloud, and Docker, we built a Consul cluster using Docker swarm mode, to host distributed configurations for a Spring Boot service. We will use the resulting swarm cluster from the previous post as a foundation for this post.

Fluentd

The Fluentd website describes Fluentd as an open source data collector, which unifies data collection and consumption for better use and understanding of data. Fluentd combines all facets of processing log data: collecting, filtering, buffering, and outputting logs across multiple sources and destinations. Fluentd structures data as JSON as much as possible.

Logging Drivers

Docker includes multiple logging mechanisms for getting logs from running containers and services. These mechanisms are called logging drivers. Fluentd is one of the ten current Docker logging drivers. According to Docker, the fluentd logging driver ‘sends container logs to the Fluentd collector as structured log data.’ Users can then utilize any of Fluentd’s various output plugins to write these logs to a variety of destinations.
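
As a minimal, hedged illustration of the driver options involved (not taken from the post’s code), a single container can be pointed at a local Fluentd agent from the command line. The address and tag values below are assumptions, using the driver’s default port.

docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="docker.{{.Name}}" \
  --name log-driver-test \
  alpine echo "hello from the fluentd logging driver"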

Elastic Stack

The ELK Stack, now known as the Elastic Stack, is the combination of Elastic’s very popular products: Elasticsearch, Logstash, and Kibana. According to Elastic, the Elastic Stack provides real-time insights from almost any type of structured and unstructured data source.

Setup

All code for this post has been tested on both macOS and Linux. For this post, I am provisioning and deploying to a Linux workstation, running the most recent release of Fedora and Oracle VirtualBox. If you want to use AWS or another infrastructure provider instead of VirtualBox to build your swarm, it is fairly easy to switch the Docker Machine driver and change a few configuration items in the vms_create.sh script (see Provisioning, below).

Required Software

If you want to follow along with this post, you will need the latest versions of git, Docker, Docker Machine, Docker Compose, and VirtualBox installed.

Source Code

All source code for this post is located in two GitHub repositories. The first repository contains scripts to provision the VMs, create an overlay network and persistent host-mounted volumes, build the Docker swarm, and deploy Consul, Registrator, Swarm Visualizer, Fluentd, and the Elastic Stack. The second repository contains scripts to deploy two instances of the Widget Spring Boot RESTful service and a single instance of MongoDB. You can execute all scripts manually, from the command-line, or from a CI/CD pipeline, using tools such as Jenkins.

Provisioning the Swarm

To start, clone the first repository, and execute the single run_all.sh script, or execute the seven individual scripts necessary to provision the VMs, create the overlay network and host volumes, build the swarm, and deploy Consul, Registrator, Swarm Visualizer, Fluentd, and the Elastic Stack. Follow the steps below to complete this part.
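
A hedged sketch of that flow is shown below; the repository URL is a placeholder, and the individual script names (other than run_all.sh and vms_create.sh, mentioned in this post) will vary with the repository’s layout.

git clone https://github.com/<your-account>/<first-repository>.git   # placeholder URL
cd <first-repository>

sh ./run_all.sh    # or run the seven scripts individually, starting with vms_create.sh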

When the scripts have completed, the resulting swarm should be configured similarly to the diagram below. Consul, Registrator, Swarm Visualizer, Fluentd, and the Elastic Stack containers should be distributed across the three swarm manager nodes and the three swarm worker nodes (VirtualBox VMs).

swarm_fluentd_diagram

Deploying the Application

Next, clone the second repository, and execute the single run_all.sh script, or execute the four scripts necessary to deploy the Widget Spring Boot RESTful service and a single instance of MongoDB. The process mirrors the provisioning steps above.

When the scripts have completed, the Widget service and MongoDB containers should be distributed across two of the three swarm worker nodes (VirtualBox VMs).

swarm_fluentd_diagram_b

To confirm the final state of the swarm and the running container stacks, use the following Docker commands.
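
A reasonable, hedged approximation of those commands is shown below; the stack name is a placeholder.

eval $(docker-machine env manager1)

docker node ls                     # swarm node status and roles
docker service ls                  # replica counts for every deployed service
docker stack ls                    # deployed stacks
docker stack ps <stack_name>       # placeholder stack name; shows task placement per node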

Open the Swarm Visualizer web UI, using any of the swarm manager node IPs, on port 5001, to confirm the swarm’s health, as well as the running containers’ locations.

Visualizer

Lastly, open the Consul web UI, using any of the swarm manager node IPs, on port 8500, to confirm the running containers’ health, as well as their placement on the swarm nodes.

Consul_1

Streaming Logs

Elastic Stack

If you read the previous post, Distributed Service Configuration with Consul, Spring Cloud, and Docker, you will notice we deployed a few additional components this time. First, the Elastic Stack (aka ELK) is deployed to the worker3 swarm worker node, within a single container. I have increased the CPU count and RAM assigned to this VM, to minimally run the Elastic Stack. If you review the docker-compose.yml file, you will note I am using Sébastien Pujadas’ sebp/elk:latest Docker base image from Docker Hub to provision the Elastic Stack. At the time of this post, that image was based on version 5.3.0 of ELK.

Docker Logging Driver

The Widget stack’s docker-compose.yml file has been modified since the last post. The compose file now incorporates a Fluentd logging configuration section for each service. The logging configuration includes the address of the Fluentd instance, on the same swarm worker node. The logging configuration also includes a tag for each log message.
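
As a hedged sketch (not the post’s actual compose file), the logging section of a service might look like the fragment below, written out here with a shell heredoc. The image name, tag value, and file name are assumptions; fluentd-address and tag are the option names used by Docker’s fluentd logging driver.

cat > docker-compose.fluentd-example.yml <<'EOF'
version: '3'
services:
  widget:
    image: <your-widget-image>             # placeholder image name
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224   # the Fluentd container on the same swarm node
        tag: docker.widget                 # tag attached to every log message
EOF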

Fluentd

In addition to the Elastic Stack, we have deployed Fluentd to the worker1 and worker2 swarm nodes. This is also where the Widget and MongoDB containers are deployed. Again, looking at the docker-compose.yml file, you will note we are using a custom Fluentd Docker image, garystafford/custom-fluentd:latest, which I created. The custom image is available on Docker Hub.

The custom Fluentd Docker image is based on Fluentd’s official onbuild Docker image, fluent/fluentd:onbuild. Fluentd provides instructions for building your own custom images, from their onbuild base images.

There were two reasons I chose to create a custom Fluentd Docker image. First, I added Uken Games’ Fluentd Elasticsearch Plugin to the Docker image. This highly configurable Fluentd output plugin allows us to push Docker logs, processed by Fluentd, to Elasticsearch. Adding additional plugins is a common reason for creating a custom Fluentd Docker image.

The second reason to create a custom Fluentd Docker image was configuration. Instead of bind-mounting host directories or volumes to the multiple Fluentd containers to provide Fluentd’s configuration, I baked the configuration file into the immutable Docker image. The bare-bones, basic Fluentd configuration file defines three processes: Input, Filter, and Output. These processes are accomplished using Fluentd plugins. Fluentd has six types of plugins: Input, Parser, Filter, Output, Formatter, and Buffer. Fluentd is primarily written in Ruby, and its plugins are Ruby gems.

Fluentd listens for input on tcp port 24224, using the forward Input Plugin. Docker logs are streamed locally on each swarm node, from the Widget and MongoDB containers to the local Fluentd container, over tcp port 24224, using Docker’s fluentd logging driver, introduced earlier.

Fluentd then filters all input using the stdout Filter Plugin. This plugin prints events to stdout, or logs if launched with daemon mode. This is the most basic method of filtering.

Lastly, Fluentd outputs the filtered input to two destinations, a local log file and Elasticsearch. First, the Docker logs are sent to a local Fluentd log file. This is only for demonstration purposes and debugging. Outputting log files is not recommended for production, nor does it meet the 12-factor application recommendations for logging. Second, Fluentd outputs the Docker logs to Elasticsearch, over tcp port 9200, using the Fluentd Elasticsearch Plugin, introduced above.
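
Pulling the three stages together, a minimal Fluentd configuration along these lines (a sketch, not the exact file baked into the custom image) would accept forwarded Docker logs, echo them to stdout, and copy them to a local file and Elasticsearch. The Elasticsearch host name, file path, and the logstash_format and type_name settings are assumptions.

cat > fluent.conf <<'EOF'
# Input: accept Docker logs forwarded by the fluentd logging driver
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Filter: print every event to stdout (demonstration and debugging only)
<filter **>
  @type stdout
</filter>

# Output: copy each event to a local log file and to Elasticsearch
<match **>
  @type copy
  <store>
    @type file
    path /fluentd/log/docker.log
  </store>
  <store>
    @type elasticsearch
    host elk.example.com      # placeholder; the Elastic Stack host (worker3 in this post)
    port 9200
    logstash_format true      # assumption: index events as logstash-YYYY.MM.DD
    type_name fluentd         # sets the _type field referenced later in this post
  </store>
</match>
EOF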

log_message_flow

Additional Metadata

In addition to the log message itself, in JSON format, the fluentd log driver sends the following metadata in the structured log message: container_id, container_name, and source. This is helpful in identifying and categorizing log messages from multiple sources. Below is a sample of log messages from the raw Fluentd log file, with the metadata tags highlighted in yellow. At the bottom of the output is a log message parsed with jq, for better readability.

fluentd_logs

Using Elastic Stack

Now that our two Docker stacks are up and running on our swarm, we should be streaming logs to Elasticsearch. To confirm this, open the Kibana web console, which should be available at the IP address of the worker3 swarm worker node, on port 5601.
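
Before or alongside Kibana, a couple of quick Elasticsearch API calls can confirm documents are arriving. This assumes port 9200 is published on the worker3 host and that the Fluentd output uses logstash-style index names.

ES_HOST=$(docker-machine ip worker3)

curl -s "http://${ES_HOST}:9200/_cat/indices?v"      # expect one or more logstash-* indices
curl -s "http://${ES_HOST}:9200/logstash-*/_count"   # rough count of indexed log events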

Kibana

For the sake of this demonstration, I increased the verbosity of the Spring Boot Widget service’s log level, from INFO to DEBUG, in Consul. At this level of logging, the two Widget services and the single MongoDB instance were generating an average of 250-400 log messages every 30 seconds, according to Kibana.

If that seems like a lot, keep in mind these are Docker logs, which are single-line log entries. We have not aggregated multi-line messages, such as Java exceptions and stack trace messages, into single entries. That is for another post. Also, the volume of debug-level log messages generated by the communications between the individual services and Consul is fairly verbose.

Kibana_3

Inspecting log entries in Kibana, we find the metadata tags contained in the raw Fluentd log output are now searchable fields: container_id, container_name, and source, as well as log. Also, note the _type field, with a value of ‘fluentd’. We injected this field in the output section of our Fluentd configuration, using the Fluentd Elasticsearch Plugin. The _type field allows us to differentiate these log entries from other potential data sources.

Kibana_2.png


All opinions in this post are my own and not necessarily the views of my current employer or their clients.


Provision and Deploy a Consul Cluster on AWS, using Terraform, Docker, and Jenkins

Cover2

Introduction

Modern DevOps tools, such as HashiCorp’s Packer and Terraform, make it easier to provision and manage complex cloud architecture. Utilizing a CI/CD server, such as Jenkins, to securely automate the use of these DevOps tools, ensures quick and consistent results.

In a recent post, Distributed Service Configuration with Consul, Spring Cloud, and Docker, we built a Consul cluster using Docker swarm mode, to host distributed configurations for a Spring Boot application. The cluster was built locally with VirtualBox. This architecture is fine for development and testing, but not for use in Production.

In this post, we will deploy a highly available, three-node Consul cluster to AWS. We will use Terraform to provision a set of EC2 instances and accompanying infrastructure. The instances will be built from a hybrid AMI containing the new Docker Community Edition (CE). In a recent post, Baking AWS AMI with new Docker CE Using Packer, we provisioned an Ubuntu AMI with Docker CE, using Packer. We will deploy a Docker container, containing an instance of Consul server, to each EC2 host.

All source code can be found on GitHub.

Jenkins

I have chosen Jenkins to automate all of the post’s build, provisioning, and deployment tasks. However, none of the code is written specifically for Jenkins; you may run all of it from the command line.

For this post, I have built four projects in Jenkins, as follows:

  1. Provision Docker CE AMI: Builds Ubuntu AMI with Docker CE, using Packer
  2. Provision Consul Infra AWS: Provisions Consul infrastructure on AWS, using Terraform
  3. Deploy Consul Cluster AWS: Deploys Consul to AWS, using Docker
  4. Destroy Consul Infra AWS: Destroys Consul infrastructure on AWS, using Terraform

Jenkins UI

We will primarily be using the ‘Provision Consul Infra AWS’, ‘Deploy Consul Cluster AWS’, and ‘Destroy Consul Infra AWS’ Jenkins projects in this post. The fourth Jenkins project, ‘Provision Docker CE AMI’, automates the steps found in the recent post, Baking AWS AMI with new Docker CE Using Packer, to build the AMI used to provision the EC2 instances in this post.

Consul AWS Diagram 2

Terraform

Using Terraform, we will provision EC2 instances in three different Availability Zones within the US East 1 (N. Virginia) Region. Using Terraform’s Amazon Web Services (AWS) provider, we will create the following AWS resources:

  • (1) Virtual Private Cloud (VPC)
  • (1) Internet Gateway
  • (1) Key Pair
  • (3) Elastic Cloud Compute (EC2) Instances
  • (2) Security Groups
  • (3) Subnets
  • (1) Route
  • (3) Route Tables
  • (3) Route Table Associations

The final AWS architecture should resemble the following:

Consul AWS Diagram

Production Ready AWS

Although we have provisioned a fairly complete VPC for this post, it is far from being ready for Production. I have created two security groups, limiting the ingress and egress to the cluster. However, to further productionize the environment would require additional security hardening. At a minimum, you should consider adding public/private subnets, NAT gateways, network access control list rules (network ACLs), and the use of HTTPS for secure communications.

In production, applications would communicate with Consul through local Consul clients. Consul clients would take part in the LAN gossip pool from different subnets, Availability Zones, Regions, or VPCs using VPC peering. Communications would be tightly controlled by IAM, VPC, subnet, IP address, and port.

Also, you would not have direct access to the Consul UI through a publicly exposed IP or DNS address. Access to the UI would be removed altogether, or locked down to specific IP addresses and restricted to secure communication channels.

Consul

We will achieve high availability (HA) by clustering three Consul server nodes across the three Elastic Cloud Compute (EC2) instances. In this minimally sized, three-node cluster of Consul servers, we are protected from the loss of one Consul server node, one EC2 instance, or one Availability Zone (AZ). The cluster will still maintain a quorum of two nodes. An additional level of HA that Consul supports, multiple datacenters (multiple AWS Regions), is not demonstrated in this post.

Docker

Having Docker CE already installed on each EC2 instance allows us to execute remote Docker commands over SSH from Jenkins. These commands will deploy and configure a Consul server node, within a Docker container, on each EC2 instance. The containers are built from HashiCorp’s latest Consul Docker image pulled from Docker Hub.
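
A hedged sketch of one of those remote commands is shown below; the host name and key path are placeholders, and the exact agent flags live in the deploy_consul.sh script. Only the first node would pass -bootstrap-expect=3; the other two would instead use -retry-join with the first node’s private IP.

EC2_HOST=<ec2-public-dns-name>        # placeholder
ssh -o StrictHostKeyChecking=no -i ~/.ssh/<your-key>.pem ubuntu@${EC2_HOST} \
  "docker run -d --net=host --name=consul \
     -v /home/ubuntu/consul/config:/consul/data \
     consul:latest agent -server -ui -client=0.0.0.0 \
     -bootstrap-expect=3 -data-dir=/consul/data"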

Getting Started

Preliminary Steps

If you have built infrastructure on AWS with Terraform, these steps should be familiar to you:

  1. First, you will need an AMI with Docker. I suggest reading Baking AWS AMI with new Docker CE Using Packer.
  2. You will need an AWS IAM User with the proper access to create the required infrastructure. For this post, I created a separate Jenkins IAM User with PowerUser level access.
  3. You will need to have an RSA public-private key pair, which can be used to SSH into the EC2 instances and install Consul.
  4. Ensure you have your AWS credentials set. I usually source mine from a .env file, as environment variables. Jenkins can securely manage credentials, using secret text or files.
  5. Fork and/or clone the Consul cluster project from GitHub.
  6. Change the aws_key_name and public_key_path variable values to your own RSA key, in the variables.tf file.
  7. Change the aws_amis_base variable values to your own AMI ID (see step 1).
  8. If you do not want to use the US East 1 Region and its AZs, modify the variables.tf, network.tf, and instances.tf files.
  9. Disable Terraform’s remote state or modify the resource to match your remote state configuration, in the main.tf file. I am using an Amazon S3 bucket to store my Terraform remote state.

Building an AMI with Docker

If you have not built an Amazon Machine Image (AMI) for use in this post already, you can do so using the scripts provided in the previous post’s GitHub repository. To automate the AMI build task, I built the ‘Provision Docker CE AMI’ Jenkins project. Identical to the other three Jenkins projects in this post, this project has three main tasks, which include: 1) SCM: clone the Packer AMI GitHub project, 2) Bindings: set up the AWS credentials, and 3) Build: run Packer.

The SCM and Bindings tasks are identical to the other projects (see below for details), except for the use of a different GitHub repository. The project’s Build step, which runs the packer_build_ami.sh script, looks as follows:

jenkins_13

The resulting AMI ID will need to be manually placed in Terraform’s variables.tf file, before provisioning the AWS infrastructure with Terraform. The new AMI ID will be displayed in the Jenkins build output.

jenkins_14

Provisioning with Terraform

Based on the modifications you made in the Preliminary Steps, execute the terraform validate command to confirm your changes. Then, run the terraform plan command to review the plan. Assuming there were no errors, run the terraform apply command to provision the AWS infrastructure components.
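
From the project directory, those three commands are simply:

terraform validate     # confirm the edited configuration parses and references resolve
terraform plan         # review the resources Terraform intends to create
terraform apply        # provision the VPC, subnets, security groups, and EC2 instances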

In Jenkins, I have created the ‘Provision Consul Infra AWS’ project. This project has three tasks, which include: 1) SCM: clone the GitHub project, 2) Bindings: set up the AWS credentials, and 3) Build: run Terraform. Those tasks look as follows:

Jenkins_08.png

You will obviously need to use your modified GitHub project, incorporating the configuration changes detailed above, as the SCM source for Jenkins.

Jenkins Credentials

You will also need to configure your AWS credentials.

Jenkins_03.png

The provision_infra.sh script provisions the AWS infrastructure using Terraform. The script also updates Terraform’s remote state. Remember to update the remote state configuration in the script to match your personal settings.

The Jenkins build output should look similar to the following:

jenkins_12.png

Although the build only takes about 90 seconds to complete, the EC2 instances could take a few extra minutes to complete their Status Checks and be completely ready. The final results in the AWS EC2 Management Console should look as follows:

EC2 Management Console

Note each EC2 instance is running in a different US East 1 Availability Zone.

Installing Consul

Once the AWS infrastructure is running and the EC2 instances have completed their Status Checks successfully, we are ready to deploy Consul. In Jenkins, I have created the ‘Deploy Consul Cluster AWS’ project. This project has three tasks, which include: 1) SCM: clone the GitHub project, 2) Bindings: set up the AWS credentials, and 3) Build: run an SSH remote Docker command on each EC2 instance to deploy Consul. The SCM and Bindings tasks are identical to the project above. The project’s Build step looks as follows:

Jenkins_04.png

First, the delete_containers.sh script deletes any previous instances of Consul containers. This is helpful if you need to re-deploy Consul. Next, the deploy_consul.sh script executes a series of SSH remote Docker commands to install and configure Consul on each EC2 instance.

The entire Jenkins build process only takes about 30 seconds. Afterward, the output from a successful Jenkins build should show that all three Consul server instances are running, have formed a quorum, and have elected a Leader.

Jenkins_05.png

Persisting State

The Consul Docker image exposes VOLUME /consul/data, which is the path where Consul will place its persisted state. Using Terraform’s remote-exec provisioner, we create a directory on each EC2 instance, at /home/ubuntu/consul/config. The docker run command bind-mounts the container’s /consul/data path to the EC2 host’s /home/ubuntu/consul/config directory.

According to Consul, the Consul server container instance will ‘store the client information plus snapshots and data related to the consensus algorithm and other state, like Consul’s key/value store and catalog’ in the /consul/data directory. That container directory is now bind-mounted to the EC2 host, as demonstrated below.

jenkins_15

Accessing Consul

Following a successful deployment, you should be able to use the public URL, displayed in the build output of the ‘Deploy Consul Cluster AWS’ project, to access the Consul UI. Clicking on the Nodes tab in the UI, you should see all three Consul server instances, one per EC2 instance, running and healthy.

Consul UI

Destroying Infrastructure

When you are finished with the post, you may want to remove the running infrastructure, so you don’t continue to get billed by Amazon. The ‘Destroy Consul Infra AWS’ project destroys all the AWS infrastructure, provisioned as part of this post, in about 60 seconds. The project’s SCM and Bindings tasks are identical to both previous projects. The Build step calls the destroy_infra.sh script, which is included in the GitHub project. The script executes the terraform destroy -force command. It will delete all running infrastructure components associated with the post and update Terraform’s remote state.

Jenkins_09

Conclusion

This post has demonstrated how modern DevOps tooling, such as HashiCorp’s Packer and Terraform, make it easy to build, provision and manage complex cloud architecture. Using a CI/CD server, such as Jenkins, to securely automate the use of these tools, ensures quick and consistent results.

All opinions in this post are my own and not necessarily the views of my current employer or their clients.


Baking AWS AMI with new Docker CE Using Packer

AWS for Docker

Introduction

On March 2 (less than a week ago as of this post), Docker announced the release of Docker Enterprise Edition (EE), a new version of the Docker platform optimized for business-critical deployments. As part of the release, Docker also renamed the free Docker products to Docker Community Edition (CE). Both products adopt a new time-based versioning scheme. The initial release of Docker CE and EE, 17.03, is the first to use the new scheme.

Along with the release, Docker delivered excellent documentation on installing, configuring, and troubleshooting the new Docker EE and CE. In this post, I will demonstrate how to partially bake an existing Amazon Machine Image (Amazon AMI) with the new Docker CE, preparing it as a base for the creation of Amazon Elastic Compute Cloud (Amazon EC2) compute instances.

Adding Docker and similar tooling to an AMI is known as partially baking the AMI; the result is often referred to as a hybrid AMI. According to AWS, ‘hybrid AMIs provide a subset of the software needed to produce a fully functional instance, falling in between the fully baked and JeOS (just enough operating system) options on the AMI design spectrum.’

Installing Docker CE on an AWS AMI should not be confused with Docker’s also recently announced Docker Community Edition (CE) for AWS. Docker for AWS offers multiple CloudFormation templates for Docker EE and CE. According to Docker, Docker for AWS ‘provides a Docker-native solution that avoids operational complexity and adding unneeded additional APIs to the Docker stack.’

Base AMI

Docker provides detailed directions for installing Docker CE and EE onto several major Linux distributions. For this post, we will choose a widely used Linux distro, Ubuntu. According to Docker, currently Docker CE and EE can be installed on three popular Ubuntu releases:

  • Yakkety 16.10
  • Xenial 16.04 (LTS)
  • Trusty 14.04 (LTS)

To provision a small EC2 instance in Amazon’s US East (N. Virginia) Region, I will choose Ubuntu 16.04.2 LTS (Xenial Xerus). According to Canonical’s Amazon EC2 AMI Locator website, a Xenial 16.04 LTS AMI, ami-09b3691f, is available for US East 1, which I will run as a t2.micro EC2 instance type.
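
If you prefer the AWS CLI to the AMI Locator website, a query along these lines should return Canonical’s most recent Xenial HVM/EBS image for the region; the owner ID below is Canonical’s public AWS account, and the name filter is an assumption based on Canonical’s published naming scheme.

aws ec2 describe-images \
  --region us-east-1 \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*" \
  --query 'sort_by(Images, &CreationDate)[-1].{ID: ImageId, Name: Name}'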

Packer

HashiCorp Packer will be used to partially bake the base Ubuntu Xenial 16.04 AMI with Docker CE 17.03. HashiCorp describes Packer as ‘a tool for creating machine and container images for multiple platforms from a single source configuration.’ The JSON-format Packer file is as follows:

{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "us_east_1_ami": "ami-09b3691f",
    "name": "aws-docker-ce-base",
    "us_east_1_name": "ubuntu-xenial-docker-ce-base",
    "ssh_username": "ubuntu"
  },
  "builders": [
    {
      "name": "{{user `us_east_1_name`}}",
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "us-east-1",
      "vpc_id": "",
      "subnet_id": "",
      "source_ami": "{{user `us_east_1_ami`}}",
      "instance_type": "t2.micro",
      "ssh_username": "{{user `ssh_username`}}",
      "ssh_timeout": "10m",
      "ami_name": "{{user `us_east_1_name`}} {{timestamp}}",
      "ami_description": "{{user `us_east_1_name`}} AMI",
      "run_tags": {
        "ami-create": "{{user `us_east_1_name`}}"
      },
      "tags": {
        "ami": "{{user `us_east_1_name`}}"
      },
      "ssh_private_ip": false,
      "associate_public_ip_address": true
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "bootstrap_docker_ce.sh",
      "destination": "/tmp/bootstrap_docker_ce.sh"
    },
    {
          "type": "file",
          "source": "cleanup.sh",
          "destination": "/tmp/cleanup.sh"
    },
    {
      "type": "shell",
      "execute_command": "echo 'packer' | sudo -S sh -c '{{ .Vars }} {{ .Path }}'",
      "inline": [
        "whoami",
        "cd /tmp",
        "chmod +x bootstrap_docker_ce.sh",
        "chmod +x cleanup.sh",
        "ls -alh /tmp",
        "./bootstrap_docker_ce.sh",
        "sleep 10",
        "./cleanup.sh"
      ]
    }
  ]
}

The Packer file uses Packer’s amazon-ebs builder type. This builder is used to create Amazon AMIs backed by Amazon Elastic Block Store (EBS) volumes, for use in EC2.

Bootstrap Script

To install Docker CE on the AMI, the Packer file executes a bootstrap shell script. The bootstrap script and subsequent cleanup script are executed using Packer’s remote shell provisioner. The bootstrap script is as follows:

#!/bin/sh

# Remove any older Docker packages
sudo apt-get remove docker docker-engine

# Install packages required to use the Docker apt repository over HTTPS
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

# Add Docker's official GPG key and verify its fingerprint
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

# Add the stable Docker CE repository, then install Docker CE
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get install -y docker-ce

# Allow the ubuntu user to run docker without sudo
sudo groupadd docker
sudo usermod -aG docker ubuntu

# Ensure the Docker daemon starts on boot
sudo systemctl enable docker

This script closely follows the directions provided by Docker for installing Docker CE on Ubuntu. After removing any previous copies of Docker, the script installs Docker CE. To ensure sudo is not required to execute Docker commands on any EC2 instance provisioned from the resulting AMI, the script adds the ubuntu user to the docker group.

The bootstrap script also uses systemd to start the Docker daemon. Starting with Ubuntu 15.04, Systemd System and Service Manager is used by default instead of the previous init system, Upstart. Systemd ensures Docker will start on boot.

Cleaning Up

It is good practice to clean up your activities after baking an AMI. I have included a basic cleanup script, which is as follows:

#!/bin/sh

set -e

echo 'Cleaning up after bootstrapping...'
sudo apt-get -y autoremove
sudo apt-get -y clean
sudo rm -rf /tmp/*
cat /dev/null > ~/.bash_history
history -c
exit

Partially Baking

Before running Packer to build the Docker CE AMI, I set both my AWS access key and AWS secret access key. The Packer file expects the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
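
For example (placeholder values shown), the variables can be exported before validating and running the build:

export AWS_ACCESS_KEY_ID=<your-access-key-id>            # placeholder
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>    # placeholder

packer validate docker_ami.json    # optional sanity check of the template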

Running the packer build docker_ami.json command builds the AMI. The abridged output should look similar to the following:

$ packer build docker_ami.json
ubuntu-xenial-docker-ce-base output will be in this color.

==> ubuntu-xenial-docker-ce-base: Prevalidating AMI Name...
    ubuntu-xenial-docker-ce-base: Found Image ID: ami-09b3691f
==> ubuntu-xenial-docker-ce-base: Creating temporary keypair: packer_58bc7a49-9e66-7f76-ce8e-391a67d94987
==> ubuntu-xenial-docker-ce-base: Creating temporary security group for this instance...
==> ubuntu-xenial-docker-ce-base: Authorizing access to port 22 the temporary security group...
==> ubuntu-xenial-docker-ce-base: Launching a source AWS instance...
    ubuntu-xenial-docker-ce-base: Instance ID: i-0ca883ecba0c28baf
==> ubuntu-xenial-docker-ce-base: Waiting for instance (i-0ca883ecba0c28baf) to become ready...
==> ubuntu-xenial-docker-ce-base: Adding tags to source instance
==> ubuntu-xenial-docker-ce-base: Waiting for SSH to become available...
==> ubuntu-xenial-docker-ce-base: Connected to SSH!
==> ubuntu-xenial-docker-ce-base: Uploading bootstrap_docker_ce.sh => /tmp/bootstrap_docker_ce.sh
==> ubuntu-xenial-docker-ce-base: Uploading cleanup.sh => /tmp/cleanup.sh
==> ubuntu-xenial-docker-ce-base: Provisioning with shell script: /var/folders/kf/637b0qns7xb0wh9p8c4q0r_40000gn/T/packer-shell189662158
    ...
    ubuntu-xenial-docker-ce-base: Reading package lists...
    ubuntu-xenial-docker-ce-base: Building dependency tree...
    ubuntu-xenial-docker-ce-base: Reading state information...
    ubuntu-xenial-docker-ce-base: E: Unable to locate package docker-engine
    ubuntu-xenial-docker-ce-base: Reading package lists...
    ubuntu-xenial-docker-ce-base: Building dependency tree...
    ubuntu-xenial-docker-ce-base: Reading state information...
    ubuntu-xenial-docker-ce-base: ca-certificates is already the newest version (20160104ubuntu1).
    ubuntu-xenial-docker-ce-base: apt-transport-https is already the newest version (1.2.19).
    ubuntu-xenial-docker-ce-base: curl is already the newest version (7.47.0-1ubuntu2.2).
    ubuntu-xenial-docker-ce-base: software-properties-common is already the newest version (0.96.20.5).
    ubuntu-xenial-docker-ce-base: 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    ubuntu-xenial-docker-ce-base: OK
    ubuntu-xenial-docker-ce-base: pub   4096R/0EBFCD88 2017-02-22
    ubuntu-xenial-docker-ce-base: Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
    ubuntu-xenial-docker-ce-base: uid                  Docker Release (CE deb) <docker@docker.com>
    ubuntu-xenial-docker-ce-base: sub   4096R/F273FCD8 2017-02-22
    ubuntu-xenial-docker-ce-base:
    ubuntu-xenial-docker-ce-base: Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial InRelease
    ubuntu-xenial-docker-ce-base: Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
    ...
    ubuntu-xenial-docker-ce-base: Get:27 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [89.5 kB]
    ubuntu-xenial-docker-ce-base: Fetched 10.6 MB in 2s (4,065 kB/s)
    ubuntu-xenial-docker-ce-base: Reading package lists...
    ubuntu-xenial-docker-ce-base: Reading package lists...
    ubuntu-xenial-docker-ce-base: Building dependency tree...
    ubuntu-xenial-docker-ce-base: Reading state information...
    ubuntu-xenial-docker-ce-base: Calculating upgrade...
    ubuntu-xenial-docker-ce-base: 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    ubuntu-xenial-docker-ce-base: Reading package lists...
    ubuntu-xenial-docker-ce-base: Building dependency tree...
    ubuntu-xenial-docker-ce-base: Reading state information...
    ubuntu-xenial-docker-ce-base: The following additional packages will be installed:
    ubuntu-xenial-docker-ce-base: aufs-tools cgroupfs-mount libltdl7
    ubuntu-xenial-docker-ce-base: Suggested packages:
    ubuntu-xenial-docker-ce-base: mountall
    ubuntu-xenial-docker-ce-base: The following NEW packages will be installed:
    ubuntu-xenial-docker-ce-base: aufs-tools cgroupfs-mount docker-ce libltdl7
    ubuntu-xenial-docker-ce-base: 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
    ubuntu-xenial-docker-ce-base: Need to get 19.4 MB of archives.
    ubuntu-xenial-docker-ce-base: After this operation, 89.4 MB of additional disk space will be used.
    ubuntu-xenial-docker-ce-base: Get:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial/universe amd64 aufs-tools amd64 1:3.2+20130722-1.1ubuntu1 [92.9 kB]
    ...
    ubuntu-xenial-docker-ce-base: Get:4 https://download.docker.com/linux/ubuntu xenial/stable amd64 docker-ce amd64 17.03.0~ce-0~ubuntu-xenial [19.3 MB]
    ubuntu-xenial-docker-ce-base: debconf: unable to initialize frontend: Dialog
    ubuntu-xenial-docker-ce-base: debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
    ubuntu-xenial-docker-ce-base: debconf: falling back to frontend: Readline
    ubuntu-xenial-docker-ce-base: debconf: unable to initialize frontend: Readline
    ubuntu-xenial-docker-ce-base: debconf: (This frontend requires a controlling tty.)
    ubuntu-xenial-docker-ce-base: debconf: falling back to frontend: Teletype
    ubuntu-xenial-docker-ce-base: dpkg-preconfigure: unable to re-open stdin:
    ubuntu-xenial-docker-ce-base: Fetched 19.4 MB in 1s (17.8 MB/s)
    ubuntu-xenial-docker-ce-base: Selecting previously unselected package aufs-tools.
    ubuntu-xenial-docker-ce-base: (Reading database ... 53844 files and directories currently installed.)
    ubuntu-xenial-docker-ce-base: Preparing to unpack .../aufs-tools_1%3a3.2+20130722-1.1ubuntu1_amd64.deb ...
    ubuntu-xenial-docker-ce-base: Unpacking aufs-tools (1:3.2+20130722-1.1ubuntu1) ...
    ...
    ubuntu-xenial-docker-ce-base: Setting up docker-ce (17.03.0~ce-0~ubuntu-xenial) ...
    ubuntu-xenial-docker-ce-base: Processing triggers for libc-bin (2.23-0ubuntu5) ...
    ubuntu-xenial-docker-ce-base: Processing triggers for systemd (229-4ubuntu16) ...
    ubuntu-xenial-docker-ce-base: Processing triggers for ureadahead (0.100.0-19) ...
    ubuntu-xenial-docker-ce-base: groupadd: group 'docker' already exists
    ubuntu-xenial-docker-ce-base: Synchronizing state of docker.service with SysV init with /lib/systemd/systemd-sysv-install...
    ubuntu-xenial-docker-ce-base: Executing /lib/systemd/systemd-sysv-install enable docker
    ubuntu-xenial-docker-ce-base: Cleanup...
    ubuntu-xenial-docker-ce-base: Reading package lists...
    ubuntu-xenial-docker-ce-base: Building dependency tree...
    ubuntu-xenial-docker-ce-base: Reading state information...
    ubuntu-xenial-docker-ce-base: 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
==> ubuntu-xenial-docker-ce-base: Stopping the source instance...
==> ubuntu-xenial-docker-ce-base: Waiting for the instance to stop...
==> ubuntu-xenial-docker-ce-base: Creating the AMI: ubuntu-xenial-docker-ce-base 1288227081
    ubuntu-xenial-docker-ce-base: AMI: ami-e9ca6eff
==> ubuntu-xenial-docker-ce-base: Waiting for AMI to become ready...
==> ubuntu-xenial-docker-ce-base: Modifying attributes on AMI (ami-e9ca6eff)...
    ubuntu-xenial-docker-ce-base: Modifying: description
==> ubuntu-xenial-docker-ce-base: Modifying attributes on snapshot (snap-058a26c0250ee3217)...
==> ubuntu-xenial-docker-ce-base: Adding tags to AMI (ami-e9ca6eff)...
==> ubuntu-xenial-docker-ce-base: Tagging snapshot: snap-043a16c0154ee3217
==> ubuntu-xenial-docker-ce-base: Creating AMI tags
==> ubuntu-xenial-docker-ce-base: Creating snapshot tags
==> ubuntu-xenial-docker-ce-base: Terminating the source AWS instance...
==> ubuntu-xenial-docker-ce-base: Cleaning up any extra volumes...
==> ubuntu-xenial-docker-ce-base: No volumes to clean up, skipping
==> ubuntu-xenial-docker-ce-base: Deleting temporary security group...
==> ubuntu-xenial-docker-ce-base: Deleting temporary keypair...
Build 'ubuntu-xenial-docker-ce-base' finished.

==> Builds finished. The artifacts of successful builds are:
--> ubuntu-xenial-docker-ce-base: AMIs were created:

us-east-1: ami-e9ca6eff

Results

The result is an Ubuntu 16.04 AMI in US East 1 with Docker CE 17.03 installed. To confirm the new AMI is now available, I will use the AWS CLI to examine the resulting AMI:

aws ec2 describe-images \
  --filters Name=tag-key,Values=ami Name=tag-value,Values=ubuntu-xenial-docker-ce-base \
  --query 'Images[*].{ID:ImageId}'

Resulting output:

{
    "Images": [
        {
            "VirtualizationType": "hvm",
            "Name": "ubuntu-xenial-docker-ce-base 1488747081",
            "Tags": [
                {
                    "Value": "ubuntu-xenial-docker-ce-base",
                    "Key": "ami"
                }
            ],
            "Hypervisor": "xen",
            "SriovNetSupport": "simple",
            "ImageId": "ami-e9ca6eff",
            "State": "available",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda1",
                    "Ebs": {
                        "DeleteOnTermination": true,
                        "SnapshotId": "snap-048a16c0250ee3227",
                        "VolumeSize": 8,
                        "VolumeType": "gp2",
                        "Encrypted": false
                    }
                },
                {
                    "DeviceName": "/dev/sdb",
                    "VirtualName": "ephemeral0"
                },
                {
                    "DeviceName": "/dev/sdc",
                    "VirtualName": "ephemeral1"
                }
            ],
            "Architecture": "x86_64",
            "ImageLocation": "931066906971/ubuntu-xenial-docker-ce-base 1488747081",
            "RootDeviceType": "ebs",
            "OwnerId": "931066906971",
            "RootDeviceName": "/dev/sda1",
            "CreationDate": "2017-03-05T20:53:41.000Z",
            "Public": false,
            "ImageType": "machine",
            "Description": "ubuntu-xenial-docker-ce-base AMI"
        }
    ]
}

Finally, here is the new AMI as seen in the AWS EC2 Management Console:

EC2 Management Console - AMI

Terraform

To confirm Docker CE is installed and running, I can provision a new EC2 instance, using HashiCorp Terraform. This post is too short to detail all the Terraform code required to stand up a complete environment. I’ve included the complete code in the GitHub repo for this post. Note, the Terraform code is only used for testing. No security, including the use of properly configured security groups, public/private subnets, and a NAT server, is configured.

Below is a greatly abridged version of the Terraform code I used to provision a new EC2 instance, using Terraform’s aws_instance resource. The resulting EC2 instance should have Docker CE available.

# test-docker-ce instance
resource "aws_instance" "test-docker-ce" {
  connection {
    user        = "ubuntu"
    private_key = "${file("~/.ssh/test-docker-ce")}"
    timeout     = "${var.connection_timeout}"
  }

  ami               = "ami-e9ca6eff"
  instance_type     = "t2.nano"
  availability_zone = "us-east-1a"
  count             = "1"

  key_name               = "${aws_key_pair.auth.id}"
  vpc_security_group_ids = ["${aws_security_group.test-docker-ce.id}"]
  subnet_id              = "${aws_subnet.test-docker-ce.id}"

  tags {
    Owner       = "Gary A. Stafford"
    Terraform   = true
    Environment = "test-docker-ce"
    Name        = "tf-instance-test-docker-ce"
  }
}

By using the AWS CLI, once again, we can confirm the new EC2 instance was built using the correct AMI:

aws ec2 describe-instances \
  --filters Name='tag:Name,Values=tf-instance-test-docker-ce' \
  --output text --query 'Reservations[*].Instances[*].ImageId'

Resulting output looks good:

ami-e9ca6eff

Finally, here is the new EC2 as seen in the AWS EC2 Management Console:

EC2 Management Console - EC2

SSHing into the new EC2 instance, I should observe that the operating system is Ubuntu 16.04.2 LTS and that Docker version 17.03.0-ce is installed and running:

Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-64-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

Last login: Sun Mar  5 22:06:01 2017 from <my_ip_address>

ubuntu@ip-<ec2_local_ip>:~$ docker --version
Docker version 17.03.0-ce, build 3a232c8

Conclusion

Docker EE and CE represent a significant step forward in expanding Docker’s enterprise-grade toolkit. Replacing or installing Docker EE or CE on your AWS AMIs is easy, using Docker’s guide along with HashiCorp Packer.

All source code for this post can be found on GitHub.

All opinions in this post are my own and not necessarily the views of my current employer or their clients.


Distributed Service Configuration with Consul, Spring Cloud, and Docker

Consul UI - Nodes

Introduction

In this post, we will explore the use of HashiCorp Consul for distributed configuration of containerized Spring Boot services, deployed to a Docker swarm cluster. In the first half of the post, we will provision a series of virtual machines, build a Docker swarm on top of those VMs, and install Consul and Registrator on each swarm host. In the second half of the post, we will configure and deploy multiple instances of a containerized Spring Boot service, backed by MongoDB, to the swarm cluster, using Docker Compose.

The final objective of this post is to have all the deployed services registered with Consul, via Registrator, and the service’s runtime configuration provided dynamically by Consul.

Objectives

  1. Provision a series of virtual machine hosts, using Docker Machine and Oracle VirtualBox
  2. Provide distributed and highly available cluster management and service orchestration, using Docker swarm mode
  3. Provide distributed and highly available service discovery, health checking, and a hierarchical key/value store, using HashiCorp Consul
  4. Provide service discovery and automatic registration of containerized services to Consul, using Registrator, Glider Labs’ service registry bridge for Docker
  5. Provide distributed configuration for containerized Spring Boot services using Consul and Pivotal Spring Cloud Consul Config
  6. Deploy multiple instances of a Spring Boot service, backed by MongoDB, to the swarm cluster, using Docker Compose version 3.

Technologies

  • Docker
  • Docker Compose (v3)
  • Docker Hub
  • Docker Machine
  • Docker swarm mode
  • Docker Swarm Visualizer (Mano Marks)
  • Glider Labs Registrator
  • Gradle
  • HashiCorp Consul
  • Java
  • MongoDB
  • Oracle VirtualBox VM Manager
  • Spring Boot
  • Spring Cloud Consul Config
  • Travis CI

Code Sources

All code in this post exists in two GitHub repositories. I have labeled each code snippet in the post with the corresponding file in the repositories. The first repository, consul-docker-swarm-compose, contains all the code necessary for provisioning the VMs and building the Docker swarm and Consul clusters. The repo also contains code for deploying Swarm Visualizer and Registrator. Make sure you clone the swarm-mode branch.

git clone --depth 1 --branch swarm-mode \
  https://github.com/garystafford/microservice-docker-demo-consul.git
cd microservice-docker-demo-consul

The second repository, microservice-docker-demo-widget, contains all the code necessary for configuring Consul and deploying the Widget service stack. Make sure you clone the consul branch.

git clone --depth 1 --branch consul \
  https://github.com/garystafford/microservice-docker-demo-widget.git
cd microservice-docker-demo-widget

Docker Versions

With the Docker toolset evolving so quickly, features frequently change or become outmoded. At the time of this post, I am running the following versions on my Mac.

  • Docker Engine version 1.13.1
  • Boot2Docker version 1.13.1
  • Docker Compose version 1.11.2
  • Docker Machine version 0.10.0
  • HashiCorp Consul version 0.7.5

Provisioning VM Hosts

First, we will provision a series of six virtual machines (aka machines, VMs, or hosts), using Docker Machine and Oracle VirtualBox.

consul-post-1

By switching Docker Machine’s driver, you can easily switch from VirtualBox to other providers, such as VMware, AWS, or GCE. I have explicitly set several VirtualBox driver options, using the default values, for better clarity about the VirtualBox VMs being created.

vms=( "manager1" "manager2" "manager3"
      "worker1" "worker2" "worker3" )

for vm in ${vms[@]}
do
  docker-machine create \
    --driver virtualbox \
    --virtualbox-memory "1024" \
    --virtualbox-cpu-count "1" \
    --virtualbox-disk-size "20000" \
    ${vm}
done
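
For comparison, a hedged sketch of the same create call using Docker Machine’s amazonec2 driver might look like the following; it assumes AWS credentials are already exported as environment variables, and the flag values shown are assumptions.

docker-machine create \
  --driver amazonec2 \
  --amazonec2-region us-east-1 \
  --amazonec2-instance-type t2.micro \
  manager1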

Using the docker-machine ls command, we should observe the resulting series of VMs looks similar to the following. Note each of the six VMs has a machine name and an IP address in the range of 192.168.99.1/24.

$ docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
manager1   -        virtualbox   Running   tcp://192.168.99.109:2376           v1.13.1
manager2   -        virtualbox   Running   tcp://192.168.99.110:2376           v1.13.1
manager3   -        virtualbox   Running   tcp://192.168.99.111:2376           v1.13.1
worker1    -        virtualbox   Running   tcp://192.168.99.112:2376           v1.13.1
worker2    -        virtualbox   Running   tcp://192.168.99.113:2376           v1.13.1
worker3    -        virtualbox   Running   tcp://192.168.99.114:2376           v1.13.1

We may also view the VMs using the Oracle VM VirtualBox Manager application.

Oracle VirtualBox VM Manager

Docker Swarm Mode

Next, we will provide, amongst other capabilities, cluster management and orchestration, using Docker swarm mode. It is important to understand that the relatively new Docker swarm mode is not the same as the legacy Docker Swarm. Legacy Docker Swarm was succeeded by Docker’s integrated swarm mode, with the release of Docker v1.12.0, in July 2016.

We will create a swarm (a cluster of Docker Engines or nodes), consisting of three Manager nodes and three Worker nodes, on the six VirtualBox VMs. Using this configuration, the swarm will be distributed and highly available, able to suffer the loss of one of the Manager nodes, without failing.

consul-post-2

Manager Nodes

First, we will create the initial Docker swarm Manager node.

SWARM_MANAGER_IP=$(docker-machine ip manager1)
echo ${SWARM_MANAGER_IP}

docker-machine ssh manager1 \
  "docker swarm init \
  --advertise-addr ${SWARM_MANAGER_IP}"

This initial Manager node advertises its IP address to future swarm members.

Next, we will create two additional swarm Manager nodes, which will join the initial Manager node, by using the initial Manager node’s advertised IP address. The three Manager nodes will then elect a single Leader to conduct orchestration tasks. According to Docker, Manager nodes implement the Raft Consensus Algorithm to manage the global cluster state.

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

docker-machine env manager1
eval $(docker-machine env manager1)

MANAGER_SWARM_JOIN=$(docker-machine ssh ${vms[0]} "docker swarm join-token manager")
MANAGER_SWARM_JOIN=$(echo ${MANAGER_SWARM_JOIN} | grep -E "(docker).*(2377)" -o)
MANAGER_SWARM_JOIN=$(echo ${MANAGER_SWARM_JOIN//\\/''})
echo ${MANAGER_SWARM_JOIN}

for vm in ${vms[@]:1:2}
do
  docker-machine ssh ${vm} ${MANAGER_SWARM_JOIN}
done

A quick note on the string manipulation of the MANAGER_SWARM_JOIN variable, above. Running the docker swarm join-token manager command, outputs something similar to the following.

To add a manager to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-1s53oajsj19lgniar3x1gtz3z9x0iwwumlew0h9ism5alt2iic-1qxo1nx24pyd0pg61hr6pp47t \
    192.168.99.109:2377

Using a bit of string manipulation, the resulting value of the MANAGER_SWARM_JOIN variable will be similar to the following command. We then SSH into each host and execute this command for the Manager nodes, and a similar command for the Worker nodes.

docker swarm join --token SWMTKN-1-1s53oajsj19lgniar3x1gtz3z9x0iwwumlew0h9ism5alt2iic-1qxo1nx24pyd0pg61hr6pp47t 192.168.99.109:2377

Worker Nodes

Next, we will create three swarm Worker nodes, using a similar method. The three Worker nodes will join the three swarm Manager nodes, as part of the swarm cluster. The main difference, according to Docker, is that a Worker node’s ‘sole purpose is to execute containers. Worker nodes don’t participate in the Raft distributed state, make scheduling decisions, or serve the swarm mode HTTP API.’

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

WORKER_SWARM_JOIN=$(docker-machine ssh manager1 "docker swarm join-token worker")
WORKER_SWARM_JOIN=$(echo ${WORKER_SWARM_JOIN} | grep -E "(docker).*(2377)" -o)
WORKER_SWARM_JOIN=$(echo ${WORKER_SWARM_JOIN//\\/''})
echo ${WORKER_SWARM_JOIN}

for vm in ${vms[@]:3:3}
do
  docker-machine ssh ${vm} ${WORKER_SWARM_JOIN}
done

Using the docker node ls command, we should observe the resulting Docker swarm cluster looks similar to the following:

$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
u75gflqvu7blmt5mcr2dcp4g5 *  manager1  Ready   Active        Leader
0b7pkek76dzeedtqvvjssdt2s    manager2  Ready   Active        Reachable
s4mmra8qdgwukjsfg007rvbge    manager3  Ready   Active        Reachable
n89upik5v2xrjjaeuy9d4jybl    worker1   Ready   Active
nsy55qzavzxv7xmanraijdw4i    worker2   Ready   Active
hhn1l3qhej0ajmj85gp8qhpai    worker3   Ready   Active

Note the three swarm Manager nodes, three swarm Worker nodes, and the Manager, which was elected Leader. The other two Manager nodes are marked as Reachable. The asterisk indicates manager1 is the active machine.

I have also deployed Mano Marks’ Docker Swarm Visualizer to each of the swarm cluster’s three Manager nodes. This tool is described as a visualizer for Docker Swarm Mode using the Docker Remote API, Node.JS, and D3. It provides a great visualization of the swarm cluster and its running components.

docker service create \
  --name swarm-visualizer \
  --publish 5001:8080/tcp \
  --constraint node.role==manager \
  --mode global \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  manomarks/visualizer:latest

The Swarm Visualizer should be available on any of the Manager’s IP addresses, on port 5001. We constrained the Visualizer to the Manager nodes, by using the node.role==manager constraint.

Docker Swarm Visualizer

HashiCorp Consul

Next, we will install HashiCorp Consul onto our swarm cluster of VirtualBox hosts. Consul will provide us with service discovery, health checking, and a hierarchical key/value store. Consul will be installed, such that we end up with six Consul Agents. Agents can run as Servers or Clients. Similar to Docker swarm mode, we will install three Consul servers and three Consul clients, all in one Consul datacenter (our set of six local VMs). This clustered configuration will ensure Consul is distributed and highly available, able to suffer the loss of one of the Consul server instances without failing.

consul-post-3

Consul Servers

Again, similar to Docker swarm mode, we will install the initial Consul server. Both the Consul servers and clients run inside Docker containers, one per swarm host.

consul_server="consul-server1"
docker-machine env manager1
eval $(docker-machine env manager1)

docker run -d \
  --net=host \
  --hostname ${consul_server} \
  --name ${consul_server} \
  --env "SERVICE_IGNORE=true" \
  --env "CONSUL_CLIENT_INTERFACE=eth0" \
  --env "CONSUL_BIND_INTERFACE=eth1" \
  --volume consul_data:/consul/data \
  --publish 8500:8500 \
  consul:latest \
  consul agent -server -ui -bootstrap-expect=3 -client=0.0.0.0 -advertise=${SWARM_MANAGER_IP} -data-dir="/consul/data"

The first Consul server advertises itself on its host’s IP address. The -bootstrap-expect=3 option instructs Consul to wait until three Consul servers are available before bootstrapping the cluster. This option also allows an initial Leader to be elected automatically. The three Consul servers form a consensus quorum, using the Raft consensus algorithm.

All Consul server and Consul client Docker containers will have a data directory (/consul/data), mapped to a volume (consul_data) on their corresponding VM hosts.

Consul provides a basic browser-based user interface. By using the -ui option each Consul server will have the UI available on port 8500.

Next, we will create two additional Consul Server instances, which will join the initial Consul server, by using the first Consul server’s advertised IP address.

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

consul_servers=( "consul-server2" "consul-server3" )

i=0
for vm in ${vms[@]:1:2}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})

  docker run -d \
    --net=host \
    --hostname ${consul_servers[i]} \
    --name ${consul_servers[i]} \
    --env "SERVICE_IGNORE=true" \
    --env "CONSUL_CLIENT_INTERFACE=eth0" \
    --env "CONSUL_BIND_INTERFACE=eth1" \
    --volume consul_data:/consul/data \
    --publish 8500:8500 \
    consul:latest \
    consul agent -server -ui -client=0.0.0.0 -advertise='{{ GetInterfaceIP "eth1" }}' -retry-join=${SWARM_MANAGER_IP} -data-dir="/consul/data"
  let "i++"
done

Consul Clients

Next, we will install Consul clients on the three swarm worker nodes, using a similar method to the servers. According to Consul, ‘The client is relatively stateless. The only background activity a client performs is taking part in the LAN gossip pool.’ The lack of the -server option indicates Consul will install this agent as a client.

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

consul_clients=( "consul-client1" "consul-client2" "consul-client3" )

i=0
for vm in ${vms[@]:3:3}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})
  docker rm -f $(docker ps -a -q)

  docker run -d \
    --net=host \
    --hostname ${consul_clients[i]} \
    --name ${consul_clients[i]} \
    --env "SERVICE_IGNORE=true" \
    --env "CONSUL_CLIENT_INTERFACE=eth0" \
    --env "CONSUL_BIND_INTERFACE=eth1" \
    --volume consul_data:/consul/data \
    consul:latest \
    consul agent -client=0.0.0.0 -advertise='{{ GetInterfaceIP "eth1" }}' -retry-join=${SWARM_MANAGER_IP} -data-dir="/consul/data"
  let "i++"
done

From inside the consul-server1 Docker container, on the manager1 host, using the consul members command, we should observe a current list of cluster members along with their current state.

$ docker exec -it consul-server1 consul members
Node            Address              Status  Type    Build  Protocol  DC
consul-client1  192.168.99.112:8301  alive   client  0.7.5  2         dc1
consul-client2  192.168.99.113:8301  alive   client  0.7.5  2         dc1
consul-client3  192.168.99.114:8301  alive   client  0.7.5  2         dc1
consul-server1  192.168.99.109:8301  alive   server  0.7.5  2         dc1
consul-server2  192.168.99.110:8301  alive   server  0.7.5  2         dc1
consul-server3  192.168.99.111:8301  alive   server  0.7.5  2         dc1

Note the three Consul servers, the three Consul clients, and the single datacenter, dc1. Also, note the address of each agent matches the IP address of their swarm hosts.

To further validate Consul is installed correctly, access Consul’s web-based UI using the IP address of any of the three Consul servers’ swarm hosts, on port 8500. Shown below is the initial view of Consul, prior to services being registered. Note the Consul instances’ names. Also, notice the IP address of each Consul instance corresponds to the IP address of the swarm host on which it resides.
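
The same checks can be made from the command line against Consul’s HTTP API, using any of the server hosts:

CONSUL_HOST=$(docker-machine ip manager1)

curl -s http://${CONSUL_HOST}:8500/v1/status/leader    # current Raft leader (IP:port)
curl -s http://${CONSUL_HOST}:8500/v1/status/peers     # the three server peers
curl -s http://${CONSUL_HOST}:8500/v1/catalog/nodes    # all six agents in the catalog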

Consul UI - Nodes

Service Container Registration

Having provisioned the VMs, and built the Docker swarm cluster and Consul cluster, there is one final step to prepare our environment for the deployment of containerized services. We want to provide service discovery and registration for Consul, using Glider Labs’ Registrator.

Registrator will be installed on each host, within a Docker container. According to Glider Labs’ website, ‘Registrator will automatically register and deregister services for any Docker container, as the containers come online.’

We will install Registrator on only five of our six hosts. We will not install it on manager1. Since this host is already serving as both the Docker swarm Leader and Consul Leader, we will not be installing any additional service containers on that host. This is a personal choice, not a requirement.

vms=( "manager1" "manager2" "manager3"
      "worker1" "worker2" "worker3" )

for vm in ${vms[@]:1}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})

  HOST_IP=$(docker-machine ip ${vm})
  # echo ${HOST_IP}

  docker run -d \
    --name=registrator \
    --net=host \
    --volume=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      -internal consul://${HOST_IP:-localhost}:8500
done

Multiple Service Discovery Options

You might be wondering why we are worried about service discovery and registration with Consul, using Registrator, considering Docker swarm mode already has service discovery. Deploying our services using swarm mode, we are relying on swarm mode for service discovery. I chose to include Registrator in this example, to demonstrate an alternative method of service discovery, which can be used with other tools such as Consul Template, for dynamic load-balancer templating.

We actually have a third option for automatic service registration, Spring Cloud Consul Discovery. I chose not to use Spring Cloud Consul Discovery to register the Widget service in this post, although it would have automatically registered the Spring Boot service with Consul. The actual Widget service stack contains MongoDB, as well as other non-Spring Boot components I removed for this post, such as the NGINX load balancer. Using Registrator, all the containerized services, not only the Spring Boot services, are automatically registered.

You will note in the Widget source code, I commented out the @EnableDiscoveryClient annotation on the WidgetApplication class. If you want to use Spring Cloud Consul Discovery, simply uncomment this annotation.

Distributed Configuration

We are almost ready to deploy some services. For this post’s demonstration, we will deploy the Widget service stack. The Widget service stack is composed of a simple Spring Boot service, backed by MongoDB; it is easily deployed as a containerized application. I often use the service stack for testing and training.

Widgets represent inanimate objects that users purchase with points. Widgets have particular physical characteristics, such as product id, name, color, size, and price. The inventory of widgets is stored in the widgets MongoDB database.

Hierarchical Key/Value Store

Before we can deploy the Widget service, we need to store the Widget service’s Spring Profiles in Consul’s hierarchical key/value store. The Widget service’s profiles are sets of configuration the service needs to run in different environments, such as Development, QA, UAT, and Production. Profiles include configuration, such as port assignments, database connection information, and logging and security settings. The service’s active profile will be read by the service at startup, from Consul, and applied at runtime.

As opposed to the traditional Java properties key/value format, the Widget service uses YAML to specify its hierarchical configuration data. Using Spring Cloud Consul Config, an alternative to Spring Cloud Config Server and Client, we will store the service’s Spring profiles in Consul as blobs of YAML.

There are multiple ways to store configuration in Consul. Storing the Widget service’s profiles as YAML was the quickest method of migrating the existing application.yml file’s multiple profiles to Consul. I am not insisting YAML is the most effective method of storing configuration in Consul’s k/v store; that depends on the application’s requirements.

Using the Consul KV HTTP API and the HTTP PUT method, we will place each profile, as a YAML blob, into the appropriate data key in Consul. For convenience, I have separated the Widget service’s three existing Spring profiles into individual YAML files. We will load each of the YAML file’s contents into Consul, using curl.

Below is the Widget service’s default Spring profile. The default profile is activated if no other profiles are explicitly active when the application context starts.

endpoints:
  enabled: true
  sensitive: false
info:
  java:
    source: "${java.version}"
    target: "${java.version}"
logging:
  level:
    root: DEBUG
management:
  security:
    enabled: false
  info:
    build:
      enabled: true
    git:
      mode: full
server:
  port: 8030
spring:
  data:
    mongodb:
      database: widgets
      host: localhost
      port: 27017

Below is the Widget service’s docker-local Spring profile. This is the profile we will use when we deploy the Widget service to our swarm cluster. The docker-local profile overrides two properties of the default profile — the name of our MongoDB host and the logging level. All other configuration will come from the default profile.

logging:
  level:
    root: INFO
spring:
  data:
    mongodb:
      host: mongodb

To load the profiles into Consul, from the root of the Widget local git repository, we will execute curl commands. We will use the Consul cluster Leader’s IP address for our HTTP PUT methods.

docker-machine env manager1
eval $(docker-machine env manager1)

CONSUL_SERVER=$(docker-machine ip $(docker node ls | grep Leader | awk '{print $3}'))

# default profile
KEY="config/widget-service/data"
VALUE="consul-configs/default.yaml"
curl -X PUT --data-binary @${VALUE} \
  -H "Content-type: text/x-yaml" \
  ${CONSUL_SERVER:-localhost}:8500/v1/kv/${KEY}

# docker-local profile
KEY="config/widget-service/docker-local/data"
VALUE="consul-configs/docker-local.yaml"
curl -X PUT --data-binary @${VALUE} \
  -H "Content-type: text/x-yaml" \
  ${CONSUL_SERVER:-localhost}:8500/v1/kv/${KEY}

# docker-production profile
KEY="config/widget-service/docker-production/data"
VALUE="consul-configs/docker-production.yaml"
curl -X PUT --data-binary @${VALUE} \
  -H "Content-type: text/x-yaml" \
  ${CONSUL_SERVER:-localhost}:8500/v1/kv/${KEY}

Returning to the Consul UI, we should now observe three Spring profiles, in the appropriate data keys, listed under the Key/Value tab. The default Spring profile YAML blob, the value, will be assigned to the config/widget-service/data key.

Consul UI - Docker Default Spring Profile

The docker-local profile will be assigned to the config/widget-service,docker-local/data key. The keys follow default spring.cloud.consul.config conventions.

Consul UI - Docker Local Spring Profile
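
If you prefer the command line to the UI, the stored profiles can also be read back directly with the Consul KV HTTP API. Below is a minimal check using curl and the same CONSUL_SERVER variable set earlier; the ?raw parameter returns the stored YAML blob instead of the Base64-encoded JSON wrapper.

# read back the default profile we stored earlier
curl -s "${CONSUL_SERVER:-localhost}:8500/v1/kv/config/widget-service/data?raw"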

Spring Cloud Consul Config

In order for our Spring Boot service to connect to Consul and load the requested active Spring profile, we need to add a dependency on Spring Cloud Consul Config to the gradle.build file.

dependencies {
  compile group: 'org.springframework.cloud', name: 'spring-cloud-starter-consul-all';
  ...
}

Next, we need to configure the bootstrap.yml file, to connect and properly read the profile. We must properly set the CONSUL_SERVER environment variable. This value is the Consul server instance hostname or IP address, which the Widget service instances will contact to retrieve its configuration.

spring:
  application:
    name: widget-service
  cloud:
    consul:
      host: ${CONSUL_SERVER:localhost}
      port: 8500
      config:
        fail-fast: true
        format: yaml

Deployment Using Docker Compose

We are almost ready to deploy the Widget service instances and the MongoDB instance (also considered a ‘service’ by Docker), in Docker containers, to the Docker swarm cluster. The Docker Compose file is written using version 3 of the Docker Compose specification. It takes advantage of some of the specification’s new features.

version: '3.0'

services:
  widget:
    image: garystafford/microservice-docker-demo-widget:latest
    depends_on:
    - widget_stack_mongodb
    hostname: widget
    ports:
    - 8030:8030/tcp
    networks:
    - widget_overlay_net
    deploy:
      mode: global
      placement:
        constraints: [node.role == worker]
    environment:
    - "CONSUL_SERVER_URL=${CONSUL_SERVER}"
    - "SERVICE_NAME=widget-service"
    - "SERVICE_TAGS=service"
    command: "java -Dspring.profiles.active=${WIDGET_PROFILE} -Djava.security.egd=file:/dev/./urandom -jar widget/widget-service.jar"

  mongodb:
    image: mongo:latest
    command:
    - --smallfiles
    hostname: mongodb
    ports:
    - 27017:27017/tcp
    networks:
    - widget_overlay_net
    volumes:
    - widget_data_vol:/data/db
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == worker]
    environment:
    - "SERVICE_NAME=mongodb"
    - "SERVICE_TAGS=database"

networks:
  widget_overlay_net:
    external: true

volumes:
  widget_data_vol:
    external: true

External Container Volumes and Network

Note the external volume, widget_data_vol, which will be mounted into the MongoDB container at the /data/db directory. The volume must be created on each host in the swarm cluster that may host the MongoDB instance.

Also, note the external overlay network, widget_overlay_net, which will be used by all the service containers in the service stack to communicate with each other. Both the volume and the network must be created before deploying our services.

vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

for vm in ${vms[@]}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})
  docker volume create --name=widget_data_vol
  echo "Volume created: ${vm}..."
done

We should be able to view the new volume on any of the swarm worker hosts, using the docker volume ls command.

$ docker volume ls
DRIVER              VOLUME NAME
local               widget_data_vol

The network is created once, for the whole swarm cluster.

docker-machine env manager1
eval $(docker-machine env manager1)

docker network create \
  --driver overlay \
  --subnet=10.0.0.0/16 \
  --ip-range=10.0.11.0/24 \
  --opt encrypted \
  --attachable=true \
  widget_overlay_net

We can view the new overlay network using the docker network ls command.

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
03bcf76d3cc4        bridge              bridge              local
533285df0ce8        docker_gwbridge     bridge              local
1f627d848737        host                host                local
6wdtuhwpuy4f        ingress             overlay             swarm
b8a1f277067f        none                null                local
4hy77vlnpkkt        widget_overlay_net  overlay             swarm

Deploying the Services

With the profiles loaded into Consul, and the overlay network and data volumes created on the hosts, we can deploy the Widget service instances and MongoDB service instance. We will assign the Consul cluster Leader’s IP address as the CONSUL_SERVER variable. We will assign the Spring profile, docker-local, to the WIDGET_PROFILE variable. Both these variables are used by the Widget service’s Docker Compose file.

docker-machine env manager1
eval $(docker-machine env manager1)

export CONSUL_SERVER=$(docker-machine ip $(docker node ls | grep Leader | awk '{print $3}'))
export WIDGET_PROFILE=docker-local

docker stack deploy --compose-file=docker-compose.yml widget_stack

The Docker images, used to instantiate the service’s Docker containers, must be pulled from Docker Hub, by each swarm host. Consequently, the initial deployment process can take up to several minutes, depending on your Internet connection.

After deployment, using the docker stack ls command, we should observe that the widget_stack stack is deployed to the swarm cluster with two services.

$ docker stack ls
NAME          SERVICES
widget_stack  2

After deployment, using the docker stack ps widget_stack command, we should observe that all the expected services in the widget_stack stack are running. Note this command also shows us where the services are running.

$ docker stack ps widget_stack
ID            NAME                                           IMAGE                                                NODE     DESIRED STATE  CURRENT STATE          ERROR  PORTS
qic4j1pl6n4l  widget_stack_widget.hhn1l3qhej0ajmj85gp8qhpai  garystafford/microservice-docker-demo-widget:latest  worker3  Running        Running 4 minutes ago
zrbnyoikncja  widget_stack_widget.nsy55qzavzxv7xmanraijdw4i  garystafford/microservice-docker-demo-widget:latest  worker2  Running        Running 4 minutes ago
81z3ejqietqf  widget_stack_widget.n89upik5v2xrjjaeuy9d4jybl  garystafford/microservice-docker-demo-widget:latest  worker1  Running        Running 4 minutes ago
nx8dxlib3wyk  widget_stack_mongodb.1                         mongo:latest                                         worker1  Running        Running 4 minutes ago

Using the docker service ls command, we should observe that all the expected service instances (replicas) are running, including the widget_stack’s services and the three instances of the swarm-visualizer service.

$ docker service ls
ID            NAME                  MODE        REPLICAS  IMAGE
almmuqqe9v55  swarm-visualizer      global      3/3       manomarks/visualizer:latest
i9my74tl536n  widget_stack_widget   global      0/3       garystafford/microservice-docker-demo-widget:latest
ju2t0yjv9ily  widget_stack_mongodb  replicated  1/1       mongo:latest

Since it may take a few moments for the widget_stack’s services to come up, you may need to re-run the command until you see all expected replicas running.

$ docker service ls
ID            NAME                  MODE        REPLICAS  IMAGE
almmuqqe9v55  swarm-visualizer      global      3/3       manomarks/visualizer:latest
i9my74tl536n  widget_stack_widget   global      3/3       garystafford/microservice-docker-demo-widget:latest
ju2t0yjv9ily  widget_stack_mongodb  replicated  1/1       mongo:latest

If you see 3 of 3 replicas of the Widget service running, this is a good sign everything is working! We can confirm this by checking back in with the Consul UI. We should see the three Widget services and the single instance of MongoDB; they were all registered with Consul by Registrator. For clarity, we purposefully did not register the Swarm Visualizer instances with Consul, using the "SERVICE_IGNORE=true" environment variable, which Registrator recognizes. Registrator also does not register itself with Consul.

Consul UI - Services

We can also view the Widget stack’s services, by revisiting the Swarm Visualizer, on any of the Docker swarm cluster Manager’s IP address, on port 5001.

Docker Swarm Visualizer - Deployed Services

Spring Boot Actuator

The most reliable way to confirm that the Widget service instances did indeed load their configuration from Consul is to check the Spring Boot Actuator Environment endpoint for any of the Widget instances. As shown below, all configuration is loaded from the default profile, except for two configuration items, which override the default profile values and are loaded from the active docker-local Spring profile.

Spring Actuator Environment Endpoint
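
You can also hit the endpoint with curl. Below is a minimal sketch, assuming the published port 8030 is reachable on worker1; the exact host you query will depend on where the swarm scheduled the Widget instances.

# query the Spring Boot Actuator /env endpoint of a Widget instance
curl -s "http://$(docker-machine ip worker1):8030/env"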

Consul Endpoint

There is yet another way to confirm Consul is working properly with our services, without accessing the Consul UI. This ability is provided by Spring Boot Actuator’s Consul endpoint. If the Spring Boot service has a dependency on Spring Cloud Consul Config, this endpoint will be available. It displays useful information about the Consul cluster and the services which have been registered with Consul.

Spring Actuator Consul Endpoint
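
Like the Environment endpoint, this endpoint can be queried with curl. The sketch below assumes the endpoint is exposed at the default /consul path and that port 8030 is reachable on worker1.

# query the Spring Boot Actuator Consul endpoint of a Widget instance
curl -s "http://$(docker-machine ip worker1):8030/consul"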

Cleaning Up the Swarm

If you want to start over again, without destroying the Docker swarm cluster, you may use the following commands to delete all the stacks, services, stray containers, and unused images, networks, and volumes. The Docker swarm cluster and all Docker images pulled to the VMs will be left intact.

# remove all stacks and services
docker-machine env manager1
eval $(docker-machine env manager1)
for stack in $(docker stack ls | awk '{print $1}'); do docker stack rm ${stack}; done
for service in $(docker service ls | awk '{print $1}'); do docker service rm ${service}; done

# remove all containers, networks, and volumes
vms=( "manager1" "manager2" "manager3"
 "worker1" "worker2" "worker3" )

for vm in ${vms[@]}
do
  docker-machine env ${vm}
  eval $(docker-machine env ${vm})
  docker stop $(docker ps -a -q)
  docker rm -f $(docker ps -a -q)
  docker system prune -f
done

Conclusion

We have barely scratched the surface of the features and capabilities of Docker swarm mode or Consul. However, the post did demonstrate several key concepts, critical to configuring, deploying, and managing a modern, distributed, containerized service application platform. These concepts included cluster management, service discovery, service orchestration, distributed configuration, hierarchical key/value stores, and distributed and highly-available systems.

This post’s example was designed for demonstration purposes only and meant to simulate a typical Development or Test environment. Although the swarm cluster, Consul cluster, and the Widget service instances were deployed in a distributed and highly available configuration, this post’s example is far from being production-ready. Some things that should be considered, if you were to make this example more production-like, include:

  • The MongoDB database is a single point of failure. It should be deployed in a sharded and clustered configuration.
  • The swarm nodes were deployed to a single datacenter. For redundancy, the nodes should be spread across multiple physical hypervisors, separate availability zones and/or geographically separate datacenters (regions).
  • The example needs centralized logging, monitoring, and alerting, to better understand and react to how the swarm, Docker containers, and services are performing.
  • Most importantly, we made no attempt to secure the services, containers, data, network, or hosts. Security is a critical component for moving this example to Production.

All opinions in this post are my own and not necessarily the views of my current employer or their clients.



Spring Music Revisited: Java-Spring-MongoDB Web App with Docker 1.12

Build, test, deploy, and monitor a multi-container, MongoDB-backed, Java Spring web application, using the new Docker 1.12.

Spring Music Infrastructure

Introduction

This post and associated project code were updated 9/3/2016 to use Tomcat 8.5.4 with OpenJDK 8.

This post and the post’s example project represent an update to a previous post, Build and Deploy a Java-Spring-MongoDB Application using Docker. This new post incorporates many improvements made in Docker 1.12, including the use of the new Docker Compose v2 YAML format. The post’s project was also updated to use Filebeat with ELK, as opposed to Logspout, which was used previously.

In this post, we will demonstrate how to build, test, deploy, and manage a Java Spring web application, hosted on Apache Tomcat, load-balanced by NGINX, monitored by ELK with Filebeat, and all containerized with Docker.

We will use a sample Java Spring application, Spring Music, available on GitHub from Cloud Foundry. The Spring Music sample record album collection application was originally designed to demonstrate the use of database services on Cloud Foundry, using the Spring Framework. Instead of Cloud Foundry, we will host the Spring Music application locally, using Docker on VirtualBox, and optionally on AWS.

All files necessary to build this project are stored on the docker_v2 branch of the garystafford/spring-music-docker repository on GitHub. The Spring Music source code is stored on the springmusic_v2 branch of the garystafford/spring-music repository, also on GitHub.

Spring Music Application

Application Architecture

The Java Spring Music application stack contains the following technologies: Java, Spring Framework, AngularJS, Bootstrap, jQuery, NGINX, Apache Tomcat, MongoDB, the ELK Stack, and Filebeat. Testing frameworks include the Spring MVC Test Framework, Mockito, Hamcrest, and JUnit.

A few changes were made to the original Spring Music application to make it work for this demonstration, including:

  • Move from Java 1.7 to 1.8 (including newer Tomcat version)
  • Add unit tests for Continuous Integration demonstration purposes
  • Modify MongoDB configuration class to work with non-local, containerized MongoDB instances
  • Add Gradle warNoStatic task to build WAR without static assets
  • Add Gradle zipStatic task to ZIP up the application’s static assets for deployment to NGINX
  • Add Gradle zipGetVersion task with a versioning scheme for build artifacts
  • Add context.xml file and MANIFEST.MF file to the WAR file
  • Add Log4j RollingFileAppender appender to send log entries to Filebeat
  • Update versions of several dependencies, including Gradle, Spring, and Tomcat

We will use the following technologies to build, publish, deploy, and host the Java Spring Music application: Gradle, git, GitHub, Travis CI, Oracle VirtualBox, Docker, Docker Compose, Docker Machine, Docker Hub, and optionally, Amazon Web Services (AWS).

NGINX
To increase performance, the Spring Music web application’s static content will be hosted by NGINX. The application’s WAR file will be hosted by Apache Tomcat 8.5.4. Requests for non-static content will be proxied through NGINX on the front-end, to a set of three load-balanced Tomcat instances on the back-end. To further increase application performance, NGINX will also be configured for browser caching of the static content. In many enterprise environments, the use of a Java EE application server, like Tomcat, is still not uncommon.

Reverse proxying and caching are configured through NGINX’s default.conf file, in the server configuration section:

The three Tomcat instances will be manually configured for load-balancing using NGINX’s default round-robin load-balancing algorithm. This is configured through the default.conf file, in the upstream configuration section:

Client requests are received through port 80 on the NGINX server. NGINX redirects requests that are not for static assets to one of the three Tomcat instances on port 8080.

MongoDB
The Spring Music application was designed to work with a number of data stores, including MySQL, Postgres, Oracle, MongoDB, Redis, and H2, an in-memory Java SQL database. Given the choice of both SQL and NoSQL databases, we will select MongoDB.

The Spring Music application, hosted by Tomcat, will store and modify record album data in a single instance of MongoDB. MongoDB will be populated with a collection of album data from a JSON file, when the Spring Music application first creates the MongoDB database instance.

ELK
Lastly, the ELK Stack with Filebeat, will aggregate NGINX, Tomcat, and Java Log4j log entries, providing debugging and analytics to our demonstration. A similar method for aggregating logs, using Logspout instead of Filebeat, can be found in this previous post.

Kibana 4 Web Console

Continuous Integration

In this post’s example, two build artifacts, a WAR file for the application and ZIP file for the static web content, are built automatically by Travis CI, whenever source code changes are pushed to the springmusic_v2 branch of the garystafford/spring-music repository on GitHub.

Travis CI Output

Following a successful build and a small number of unit tests, Travis CI pushes the build artifacts to the build-artifacts branch on the same GitHub project. The build-artifacts branch acts as a pseudo binary repository for the project, much like JFrog’s Artifactory. These artifacts are used later by Docker to build the project’s immutable Docker images and containers.

Build Artifact Repository

Build Notifications
Travis CI pushes build notifications to a Slack channel, which eliminates the need to actively monitor Travis CI.

Travis CI Slack Notifications

Automation Scripting
The .travis.yaml file, custom gradle.build Gradle tasks, and the deploy_travisci.sh script handle the Travis CI automation described above.

Travis CI .travis.yaml file:

Custom gradle.build tasks:

The deploy.sh file:

You can easily replicate the project’s continuous integration automation using your choice of toolchains. GitHub or BitBucket are good choices for distributed version control. For continuous integration and deployment, I recommend Travis CI, Semaphore, Codeship, or Jenkins. Couple those with a good persistent chat application, such as Slack or Atlassian’s HipChat.

Building the Docker Environment

Make sure VirtualBox, Docker, Docker Compose, and Docker Machine, are installed and running. At the time of this post, I have the following versions of software installed on my Mac:

  • Mac OS X 10.11.6
  • VirtualBox 5.0.26
  • Docker 1.12.1
  • Docker Compose 1.8.0
  • Docker Machine 0.8.1

To build the project’s VirtualBox VM, Docker images, and Docker containers, execute the build script, using the following command: sh ./build_project.sh. A build script is useful when working with CI/CD automation tools, such as Jenkins CI or ThoughtWorks Go. However, to understand the build process, I suggest first running the individual commands, locally.

Deploying to AWS
By simply changing the Docker Machine driver to AWS EC2 from VirtualBox, and providing your AWS credentials, the springmusic environment may also be built on AWS.

Build Process
Docker Machine provisions a single VirtualBox springmusic VM on which to host the project’s containers. VirtualBox provides a quick and easy solution that can be run locally for initial development and testing of the application.

Next, the script creates a Docker data volume and project-specific Docker bridge network.
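
The exact volume and network names are defined in the project’s build script, but the commands are similar to the minimal sketch below; the names music_data and music_net are only placeholders.

# hypothetical names; the project's build script defines the real ones
docker volume create --name music_data
docker network create --driver bridge music_net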

Next, using the project’s individual Dockerfiles, Docker Compose pulls base Docker images from Docker Hub for NGINX, Tomcat, ELK, and MongoDB. Project-specific immutable Docker images are then built for NGINX, Tomcat, and MongoDB. While constructing the project-specific Docker images for NGINX and Tomcat, the latest Spring Music build artifacts are pulled and installed into the corresponding Docker images.

Docker Compose builds and deploys (6) containers onto the VirtualBox VM: (1) NGINX, (3) Tomcat, (1) MongoDB, and (1) ELK.

The NGINX Dockerfile:

The Tomcat Dockerfile:

Docker Compose v2 YAML
This post was recently updated for Docker 1.12, and to use Docker Compose v2 YAML file format. The post’s docker-compose.yml takes advantage of improvements in Docker 1.12 and Docker Compose v2 YAML. Improvements to the YAML file include eliminating the need to link containers and expose ports, and the addition of named networks and volumes.

The Results

Spring Music Infrastructure

Below are the results of building the project.

Testing the Application

Below are partial results of the curl test, hitting the NGINX endpoint. Note the different IP addresses in the Upstream-Address field between requests. This test proves NGINX’s round-robin load-balancing is working across the three Tomcat application instances: music_app_1, music_app_2, and music_app_3.

Also, note the sharp decrease in the Request-Time between the first three requests and subsequent three requests. The Upstream-Response-Time to the Tomcat instances doesn’t change, yet the total Request-Time is much shorter, due to caching of the application’s static assets by NGINX.
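
If you want to reproduce the test yourself, a simple loop of curl requests against the NGINX endpoint will display the response headers. The sketch below assumes the springmusic VM is reachable at 192.168.99.100, as in the links that follow.

# issue six requests and dump only the response headers
for i in {1..6}; do
  curl -s -D - -o /dev/null http://192.168.99.100/
done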

Spring Music Application Links

Assuming the springmusic VM is running at 192.168.99.100, the following links can be used to access various project endpoints. Note the (3) Tomcat instances each map to randomly exposed ports. These ports are not required by NGINX, which maps to port 8080 for each instance. The port is only required if you want access to the Tomcat Web Console. The port, shown below, 32771, is merely used as an example.

* The Tomcat user name is admin and the password is t0mcat53rv3r.

Helpful Links

TODOs

  • Automate the Docker image build and publish processes
  • Automate the Docker container build and deploy processes
  • Automate post-deployment verification testing of project infrastructure
  • Add Docker Swarm multi-host capabilities with overlay networking
  • Update Spring Music with latest CF project revisions
  • Include scripting example to stand-up project on AWS
  • Add Consul and Consul Template for NGINX configuration



Diving Deeper into ‘Getting Started with Spring Cloud’

Spring_Cloud_Config_2

Explore the integration of Spring Cloud and Spring Cloud Netflix tooling, through a deep dive into Pivotal’s ‘Getting Started with Spring Cloud’ presentation.

Introduction

Keeping current with software development and DevOps trends can often make us feel we are, as the overused analogy describes, drinking from a firehose, often several hoses at once. Recently joining a large client engagement, I found it necessary to supplement my knowledge of cloud-native solutions, built with the support of Spring Cloud and Spring Cloud Netflix technologies. One of my favorite sources of information on these subjects is presentations by people like Josh Long, Dr. Dave Syer, and Cornelia Davis of Pivotal Labs, and Jon Schneider and Taylor Wicksell of Netflix.

One presentation, in particular, Getting Started with Spring Cloud, by Long and Syer, provides an excellent end-to-end technical overview of the latest Spring and Netflix technologies. Josh Long’s fast-paced, eighty-minute presentation, available on YouTube, was given at SpringOne2GX 2015 with co-presenter, Dr. Dave Syer, founder of Spring Cloud, Spring Boot, and Spring Batch.

As the presenters of Getting Started with Spring Cloud admit, the purpose of the presentation was to get people excited about Spring Cloud and Netflix technologies, not to provide a deep dive into each technology. However, I believe the presentation’s Reservation Service example provides an excellent learning opportunity. In the following post, we will examine the technologies, components, code, and configuration presented in Getting Started with Spring Cloud. The goal of the post is to provide a greater understanding of the Spring Cloud and Spring Cloud Netflix technologies.

System Overview

Technologies

The presentation’s example introduces a dizzying array of technologies, which include:

Spring Boot
Stand-alone, production-grade Spring-based applications

Spring Data REST / Spring HATEOAS
Spring-based applications following HATEOAS principles

Spring Cloud Config
Centralized external configuration management, backed by Git

Netflix Eureka
REST-based service discovery and registration for failover and load-balancing

Netflix Ribbon
IPC library with built-in client-side software load-balancers

Netflix Zuul
Dynamic routing, monitoring, resiliency, security, and more

Netflix Hystrix
Latency and fault tolerance for distributed systems

Netflix Hystrix Dashboard
Web-based UI for monitoring Hystrix

Spring Cloud Stream
Messaging microservices, backed by Redis

Spring Data Redis
Configuration and access to Redis from a Spring app, using Jedis

Spring Cloud Sleuth
Distributed tracing solution for Spring Cloud, sends traces via Thrift to the Zipkin collector service

Twitter Zipkin
Distributed tracing system, backed by Apache Cassandra

H2
In-memory Java SQL database, embedded and server modes

Docker
Package applications with dependencies into standardized Linux containers

System Components

Several components and component sub-systems comprise the presentation’s overall Reservation Service example. Each component implements a combination of the technologies mentioned above. Below is a high-level architectural diagram of the presentation’s example. It includes a few additional features, added as part of this post.

Overall Reservation System Diagram

Individual system components include:

Spring Cloud Config Server
Stand-alone Spring Boot application provides centralized external configuration to multiple Reservation system components

Spring Cloud Config Git Repo
Git repository containing multiple Reservation system components configuration files, served by Spring Cloud Config Server

H2 Java SQL Database Server (New)
This post substitutes the original example’s use of H2’s embedded version with a TCP Server instance, shared by Reservation Service instances

Reservation Service
Multi load-balanced instances of stand-alone Spring Boot application, backed by H2 database

Reservation Client
Stand-alone Spring Boot application (aka edge service or client-side proxy), forwards client-side load-balanced requests to the Reservation Service, using Eureka, Zuul, and Ribbon

Reservation Data Seeder (New)
Stand-alone Spring Boot application, seeds H2 with initial data, instead of the Reservation Service

Eureka Service
Stand-alone Spring Boot application provides service discovery and registration for failover and load-balancing

Hystrix Dashboard
Stand-alone Spring Boot application provides web-based Hystrix UI for monitoring system performance and Hystrix circuit-breakers

Zipkin
Zipkin Collector, Query, and Web, and Cassandra database, receives, correlates, and displays traces from Spring Cloud Sleuth

Redis
In-memory data structure store, acting as message broker/transport for Spring Cloud Stream

Github

All the code for this post is available on Github, split between two repositories. The first repository, spring-cloud-demo, contains the source code for all of the components listed above, except the Spring Cloud Config Git Repo. To function correctly, the configuration files, consumed by the Spring Cloud Config Server, need to be placed into a separate repository, spring-cloud-demo-config-repo.

The first repository contains a git submodule, docker-zipkin. If you are not familiar with submodules, you may want to take a moment to read the git documentation. The submodule contains a dockerized version of Twitter’s OpenZipkin, docker-zipkin. To clone the two repositories, use the following commands. The --recursive option is required to include the docker-zipkin submodule in the project.
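
A hedged example of the clone commands is shown below; it assumes both repositories live under the author’s garystafford GitHub account.

# clone the main project, including the docker-zipkin submodule
git clone --recursive https://github.com/garystafford/spring-cloud-demo.git

# clone the Spring Cloud Config Git Repo
git clone https://github.com/garystafford/spring-cloud-demo-config-repo.git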

Configuration

To try out the post’s Reservation system example, you need to configure at least one property. The Spring Cloud Config Server needs to know the location of the Spring Cloud Config Repository, which is the second GitHub repository you cloned, spring-cloud-demo-config-repo. From the root of the spring-cloud-demo repo, edit the Spring Cloud Config Server application.properties file, located in config-server/src/main/resources/application.properties. Change the following property’s value to your local path to the spring-cloud-demo-config-repo repository:

Startup

There are a few ways you could run the multiple components that make up the post’s example. I suggest running one component per terminal window, in the foreground. In this way, you can monitor the output from the bootstrap and startup processes of the system’s components. Furthermore, you can continue to monitor the system’s components once they are up and running, and receiving traffic. Yes, that is twelve terminal windows…

ReservationServices.png

There is a required startup order for the components. For example, Spring Cloud Config Server needs to start before the other components that rely on it for configuration. Netflix’s Eureka needs to start before the Reservation Client and Reservation Services, so they can register with Eureka on startup. Similarly, Zipkin needs to be started in its Docker container before the Reservation Client and Services, so Spring Cloud Sleuth can start sending traces. Redis needs to be started in its Docker container before Spring Cloud Stream tries to create the message queue. All instances of the Reservation Service need to start before the Reservation Client. Once every component is started, the Reservation Data Seeder needs to be run once to create initial data in H2. For best results, follow the instructions below. Let each component start completely, before starting the next component.

Docker

Both Zipkin and Redis run in Docker containers. Redis runs in a single container. Zipkin’s four separate components run in four separate containers. Be advised, Zipkin seems to have trouble successfully starting all four of its components on a consistent basis. I believe it’s a race condition caused by Docker Compose simultaneously starting the four Docker containers, ignoring a proper startup order. More than half of the time, I have to stop Zipkin and rerun the docker command to get Zipkin to start without any errors.

If you’ve followed the instructions above, you should see the following Docker images and Docker containers installed and running in your local environment.

Components

Spring Cloud Config Server

At the center of the Reservation system is Spring Cloud Config. Configuration, typically found in the application.properties file, for the Reservation Services, Reservation Client, Reservation Data Seeder, Eureka Service, and Hystrix Dashboard, has been externalized with Spring Cloud Config.

Spring_Cloud_Config_2

Each component has a bootstrap.properties file, which modifies its startup behavior during the bootstrap phase of an application context. Each bootstrap.properties file contains the component’s name and the address of the Spring Cloud Config Server. Components retrieve their configuration from the Spring Cloud Config Server at runtime. Below, is an example of the Reservation Client’s bootstrap.properties file.

Spring Cloud Config Git Repo

In the presentation, as in this post, the Spring Cloud Config Server is backed by a locally cloned Git repository, the Spring Cloud Config Git Repo. The Spring Cloud Config Server’s application.properties file contains the address of the Git repository. Each properties file within the Git repository corresponds to a system component. Below, is an example of the reservation-client.properties file, from the Spring Cloud Config Git Repo.

As shown in the original presentation, the configuration files can be viewed using HTTP endpoints of the Spring Cloud Config Server. To view the Reservation Service’s configuration stored in the Spring Cloud Config Git Repo, issue an HTTP GET request to http://localhost:8888/reservation-service/master. The master URI refers to the Git repo branch in which the configuration resides. This will return the configuration, in the response body, as JSON:

SpringCloudConfig
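
The same request can be made from the command line. For example:

# view the reservation-service configuration from the master branch
curl -s http://localhost:8888/reservation-service/master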

In a real Production environment, the Spring Cloud Config Server would be backed by a highly-available Git Server or GitHub repository.

Reservation Service

The Reservation Service is the core component in the presentation’s example. The Reservation Service is a stand-alone Spring Boot application. By implementing Spring Data REST and Spring HATEOAS, Spring automatically creates REST representations from the Reservation JPA Entity class of the Reservation Service. There is no need to write a Spring Rest Controller and explicitly code each endpoint.

HATEOAS

Spring HATEOAS allows us to interact with the Reservation Entity, using HTTP methods, such as GET and POST. These endpoints, along with all addressable endpoints, are displayed in the terminal output when a Spring Boot application starts. For example, we can use an HTTP GET request to call the reservations/{id} endpoint, such as:
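
A minimal example is shown below, assuming a Reservation Service instance is listening on port 8000 and a reservation with ID 1 exists; both values are only illustrative.

# fetch a single reservation by its ID (hypothetical ID)
curl -s http://localhost:8000/reservations/1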

The Reservation Service also makes use of the Spring RepositoryRestResource annotation. By annotating the RepositoryReservation Interface, which extends JpaRepository, we can customize export mapping and relative paths of the Reservation JPA Entity class. As shown below, the RepositoryReservation Interface contains the findByReservationName method signature, annotated to expose the /by-name endpoint, which accepts the rn input parameter.

Calling the findByReservationName method, we can search for a particular reservation by using an HTTP GET request to call the reservations/search/by-name?rn={reservationName} endpoint.
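
For example, assuming a Reservation Service instance on port 8000 and a hypothetical reservation name of Amit:

# search for a reservation by name (hypothetical value for rn)
curl -s "http://localhost:8000/reservations/search/by-name?rn=Amit"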

Spring Screengrab 04

Reservation Client

Querying the Reservation Service directly is possible; however, it is not recommended. Instead, the presentation suggests using the Reservation Client as a proxy to the Reservation Service. The presentation offers three examples of using the Reservation Client as a proxy.

The first demonstration of the Reservation Client uses the /message endpoint on the Reservation Client to return a string from the Reservation Service. The message example has been modified to include two new endpoints on the Reservation Client. The first endpoint, /reservations/client-message, returns a message directly from the Reservation Client. The second endpoint, /reservations/service-message, returns a message indirectly from the Reservation Service. To retrieve the message from the Reservation Service, the Reservation Client sends a request to the Reservation Service’s /message endpoint.

To retrieve both messages, send separate HTTP GET requests to each endpoint:
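
Assuming the Reservation Client is running locally on port 8050, the two requests might look like this:

# message served directly by the Reservation Client
curl -s http://localhost:8050/reservations/client-message

# message retrieved indirectly from the Reservation Service
curl -s http://localhost:8050/reservations/service-message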

Spring Screengrab 02

The second demonstration of the Reservation Client uses a Data Transfer Object (DTO). Calling the Reservation Client’s reservations/names endpoint, invokes the getReservationNames method. This method, in turn, calls the Reservation Service’s /reservations endpoint. The response object returned from the Reservation Service, a JSON array of reservation records, is deserialized and mapped to the Reservation Client’s Reservation DTO. Finally, the method returns a collection of strings, representing just the names from the reservations.

To retrieve the collection of reservation names, an HTTP GET request is sent to the /reservations/names endpoint:
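
Again assuming the Reservation Client is listening on port 8050, the request is simply:

# retrieve just the reservation names via the Reservation Client proxy
curl -s http://localhost:8050/reservations/names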

Spring Screengrab 05

Spring Cloud Stream

One of the more interesting technologies in the presentation is Spring Cloud Stream. The Spring website describes Spring Cloud Stream as a project that allows users to develop and run messaging microservices using Spring Integration. In other words, it provides native Spring messaging capabilities, backed by a choice of message buses, including Redis, RabbitMQ, and Apache Kafka, to Spring Boot applications.

A detailed explanation of Spring Cloud Stream would take an entire post. The best technical demonstration I have found is the presentation, Message Driven Microservices in the Cloud, by speakers Dr. David Syer and Dr. Mark Pollack, given in January 2016, also at SpringOne2GX 2015.

Diagram_03

In the presentation, a new reservation is submitted via an HTTP POST to the acceptNewReservations method of the Reservation Client. The method, in turn, builds (aka produces) a message, containing the new reservation, and publishes that message to the queue.reservation queue.

The queue.reservation queue is located in Redis, which is running inside a Docker container. To view the messages being published to the queue in real-time, use the redis-cli, with the monitor command, from within the Redis Docker container. Below is an example of test messages pushed (LPUSH) to the reservations queue from the Reservation Client.
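
A minimal way to attach to the monitor is shown below; the container name redis is an assumption and may differ in your setup.

# watch commands hitting Redis in real time (container name is assumed)
docker exec -it redis redis-cli monitor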

The published messages are consumed by subscribers to the reservation queue. In this example, the consumer is the Reservation Service. The Reservation Service’s acceptNewReservation method processes the message and saves the new reservation to the H2 database. In Spring Cloud Stream terms, the Reservation Client is the Source, and the Reservation Service is the Sink.

Netflix Eureka

Netflix’s Eureka, in combination with Netflix’s Zuul and Ribbon, provide the ability to scale the Reservation Service horizontally, and to load balance those instances. By using the @EnableEurekaClient annotation on the Reservation Client and Reservation Services, each instance will automatically register with Eureka on startup, as shown in the Eureka Web UI, below.

Diagram9

The names of the registered instances are in three parts: the address of the host on which the instance is running, followed by the value of the spring.application.name property of the instance’s bootstrap.properties file, and finally, the port number the instance is running on. Eureka displays each instance’s status, along with additional AWS information, if you are running on AWS, as Netflix does.

Diagram_07

According to Spring in their informative post, Spring Cloud, service discovery is one of the key tenets of a microservice based architecture. Trying to hand-configure each client, or to rely on convention over configuration, can be difficult to do and is brittle. Eureka is the Netflix Service Discovery Server and Client. A client (Spring Boot application), registers with Eureka, providing metadata about itself. Eureka then receives heartbeat messages from each instance. If the heartbeat fails over a configurable timetable, the instance is normally removed from the registry.

The Reservation Client application is also annotated with @EnableZuulProxy. Adding this annotation pulls in Spring Cloud’s embedded Zuul proxy. Again, according to Spring, the proxy is used by front-end applications to proxy calls to one or more back-end services, avoiding the need to manage CORS and authentication concerns independently for all the backends. In the presentation and this post, the front end is the Reservation Client and the back end is the Reservation Service.

In the code snippet below from the ReservationApiGatewayRestController, note the URL of the endpoint requested in the getReservationNames method. Instead of directly calling http://localhost:8000/reservations, the method calls http://reservation-service/reservations. The reservation-service segment of the URL is the registered name of the service in Eureka and contained in the Reservation Service’s bootstrap.properties file.

In the following abridged output from the Reservation Client, you can clearly see the interaction of Zuul, Ribbon, Eureka, and Spring Cloud Config. Note the Reservation Client has successfully registered itself with Eureka, along with its status. Also, note Zuul mapping the Reservation Service’s URL path.

Load Balancing

One shortcoming of the original presentation was the lack of true load balancing. With only a single instance of the Reservation Service in the original presentation, there is nothing to load balance; it’s more of a reverse proxy example. To demonstrate load balancing, we need to spin up additional instances of the Reservation Service. Following the post’s component start-up instructions, we should have three instances of the Reservation Service running, on ports 8000, 8001, and 8002, each in separate terminal windows.

ReservationServices.png

To confirm the three instances of the Reservation Service were successfully registered with Eureka, review the output from the Eureka Server terminal window. The output should show three instances of the Reservation Service registering on startup, in addition to the Reservation Client.

Viewing Eureka’s web console, we should observe three members in the pool of Reservation Services.

Diagram9b

Lastly, looking at the terminal output of the Reservation Client, we should see three instances of the Reservation Service being returned by Ribbon (aka the DynamicServerListLoadBalancer).


Requesting http://localhost:8050/reservations/names, Ribbon forwards the request to one of the three Reservation Service instances registered with Eureka. By default, Ribbon uses a round-robin load-balancing strategy to select an instance from the pool of available Reservation Services.

H2 Server

The original presentation’s Reservation Service used an embedded instance of H2. To scale out the Reservation Service, we need a common database for multiple instances to share. Otherwise, queries would return different results, specific to the particular instance of Reservation Service chosen by the load-balancer. To solve this, the original presentation’s embedded version of H2 has been replaced with the TCP Server client/server version of H2.

Reservation Service Instances

Thanks to more Spring magic, the only change we need to make to the original presentation’s code is a few additional properties added to the Reservation Service’s reservation-service.properties file. This changes H2 from the embedded version to the TCP Server version.

Reservation Data Seeder

In the original presentation, the Reservation Service created several sample reservation records in its embedded H2 database on startup. Since we now have multiple instances of the Reservation Service running, the sample data creation task has been moved from the Reservation Service to the new Reservation Data Seeder. The Reservation Service now only validates the H2 database schema on startup. The Reservation Data Seeder now updates the schema based on its entities. This also means the seed data will be persisted across restarts of the Reservation Service, unlike in the original configuration.

Running the Reservation Data Seeder once will create several reservation records in the H2 database. To confirm the H2 Server is running and the initial reservation records were created by the Reservation Data Seeder, point your web browser to the H2 login page at http://192.168.99.1:6889, and log in using the credentials in the reservation-service.properties file.

H2_grab1

The H2 Console should contain the RESERVATION table, which holds the reservation sample records.

H2_grab2

Spring Cloud Sleuth and Twitter’s Zipkin

According to the project description, “Spring Cloud Sleuth implements a distributed tracing solution for Spring Cloud. All your interactions with external systems should be instrumented automatically. You can capture data simply in logs, or by sending it to a remote collector service.” In our case, that remote collector service is Zipkin.

Zipkin describes itself as, “a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures. It manages both the collection and lookup of this data through a Collector and a Query service.” Zipkin provides critical insights into how microservices perform in a distributed system.

Zipkin_Diagram

In the presentation, as in this post, the Reservation Client’s main ReservationClientApplication class contains the alwaysSampler bean, which returns a new instance of org.springframework.cloud.sleuth.sampler.AlwaysSampler. As long as Spring Cloud Sleuth is on the classpath and you have added the alwaysSampler bean, the Reservation Client will automatically generate trace data.

Sending a request to the Reservation Client’s service-message endpoint (http://localhost:8050/reservations/service-message) will generate a trace, composed of spans. In this case, the spans are individual segments of the HTTP request/response lifecycle. Traces are sent by Sleuth to Zipkin, to be collected. According to Spring, if spring-cloud-sleuth-zipkin is available, the application will generate and collect Zipkin-compatible traces (using Brave). By default, it sends them via Apache Thrift to a Zipkin collector service on port 9410.

Zipkin’s web-browser interface, running on port 8080, allows us to view traces and drill down into individual spans.

Zipkin_UI

Zipkin contains fine-grain details about each span within a trace, as shown below.

Zipkin_UI_Popup

Correlation IDs

Note the x-trace-id and x-span-id in the request header, shown below. Sleuth injects the trace and span IDs to the SLF4J MDC (Simple Logging Facade for Java – Mapped Diagnostic Context). According to Spring, these IDs provide the ability to extract all the logs from a given trace or span in a log aggregator. The use of correlation IDs and log aggregation are essential for monitoring and supporting a microservice architecture.

Zipkin_UI_Popup2

Hystrix and Hystrix Dashboard

The last major technology highlighted in the presentation is Netflix’s Hystrix. According to Netflix, “Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services, and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable.” Hystrix is essential; it protects applications from cascading dependency failures, an issue common to complex distributed architectures with multiple dependency chains. According to Netflix, Hystrix uses multiple isolation techniques, such as bulkhead, swimlane, and circuit breaker patterns, to limit the impact of any one dependency on the entire system.

The presentation demonstrates one of the simpler capabilities of Hystrix, the fallback. The getReservationNames method is decorated with the @HystrixCommand annotation, which specifies the fallbackMethod. According to Netflix, a graceful degradation of a method is provided by adding a fallback method, which Hystrix will call to obtain a default value or values, in case the main command fails.

In the presentation’s example, the Reservation Service, a direct dependency of the Reservation Client, has failed. The Reservation Service failure causes the Reservation Client’s getReservationNames method to fail to return a collection of reservation names. Hystrix redirects the application to the getReservationNameFallback method. Instead of returning a collection of reservation names, the getReservationNameFallback method returns an empty collection to the client, as opposed to an error message.

A more relevant example involves Netflix’s movie recommendation service. In the event the recommendation service fails to return a personalized list of movie recommendations to a customer, Hystrix falls back to a method that returns a generic list of the most popular movies. Netflix has determined that, in the event of a failure of their recommendation service, falling back to a generic list of movies is better than returning no movies at all.

The Hystrix Dashboard is a tool, available with Hystrix, to visualize the current state of Hystrix instrumented methods. Although visually simplistic, the dashboard effectively presents the health of calls to external systems, which are wrapped in a HystrixCommand or HystrixObservableCommand.

Hystrix_Stream_Diagram

The Hystrix dashboard is a visual representation of the Hystrix Stream. This stream is a live feed of data sent by the Hystrix instrumented application, in this case, the Reservation Client. For a single Hystrix application, such as the Reservation Client, the feed requested from the application’s hystrix.stream endpoint is http://localhost:8050/hystrix.stream. The dashboard consumes the stream resource’s response and visualizes it in the browser using JavaScript, jQuery, and d3.
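
You can also watch the raw stream directly, before pointing the dashboard at it. For example:

# stream Hystrix metrics from the Reservation Client (Ctrl+C to stop)
curl -s http://localhost:8050/hystrix.stream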

In the post, as in the presentation, hitting the Reservation Client with a volume of requests, we observe normal activity in Hystrix Dashboard. All three instances of the Reservation Service are running and returning the collection of reservations from H2, to the Reservation Client.

Hystrix_success

If all three instances of the Reservation Service fail or the maximum latency is exceeded, the Reservation Client falls back to returning an empty collection in the response body. In the example below, 15 requests, representing 100% of the current traffic, to the getReservationNames method failed and subsequently fell back to return an empty collection. Hystrix succeeded in helping the application gracefully fall back to an alternate response.

Hystrix_failures

Conclusion

It’s easy to see how Spring Cloud and Netflix’s technologies are easily combined to create a performant, horizontally scalable, reliable system. With the addition of a few missing components, such as metrics monitoring and log aggregation, this example could easily be scaled up to support a production-grade, microservices-based, enterprise software platform.



Using Weave to Network a Docker Multi-Container Java Application

Use the latest version of Weaveworks’ Weave Net to network a multi-container, Dockerized Java Spring web application.

Introduction Weave Image

Introduction

The last post demonstrated how to build and deploy the Java Spring Music application to a VirtualBox, multi-container test environment. The environment contained (1) NGINX container, (2) load-balanced Tomcat containers, (1) MongoDB container, (1) ELK Stack container, and (1) Logspout container, all on one VM.

Spring Music

In that post, we used Docker’s links option. The links option, which modifies the container’s /etc/hosts file, allows two Docker containers to communicate with each other. For example, the NGINX container is linked to both Tomcat containers:

proxy:
  build: nginx/
  ports: ["80:80"]
  links:
   - app01
   - app02
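
If the environment from that post were still running, you could verify the effect of linking by inspecting the proxy container’s /etc/hosts file; a quick, hypothetical check (the music_proxy_1 name assumes the Compose project was named music, as it is later in this post):

# each linked service, app01 and app02, should appear as a host entry
# injected by Docker when the proxy container was created
docker exec -it music_proxy_1 cat /etc/hosts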

Although container linking works, links are not very practical beyond a small number of static containers or a single container host. With linking, you must explicitly define each service-to-container relationship you want Docker to configure. Linking also cannot span multiple container hosts, which rules it out for use with Docker Swarm. With Docker Networking still in its early ‘experimental’ stages, and given the Swarm limitation, it’s hard to foresee linking being used beyond limited development and test environments.

Weave Net

Weave Net, aka Weave, is one of a trio of products developed by Weaveworks; the other two are Weave Run and Weave Scope. According to Weaveworks’ website, ‘Weave Net connects all your containers into a transparent, dynamic and resilient mesh. This is one of the easiest ways to set up clustered applications that run anywhere.‘ Weave allows us to eliminate the dependency on links to connect our containers. Weave does all the linking of containers for us, automatically.

Weave v1.1.0

If you have worked with previous releases of Weave, you will appreciate that Weave versions v1.0.x and v1.1.0 are significant steps forward in the evolution of Weave. Weaveworks’ GitHub Weave Release page details the many improvements. I also suggest reading Weave ‘Gossip’ DNS, on Weaveworks’ blog, before continuing. The post details the improvements in Weave v1.1.0. Some of the key new features include:

  • Completely redesigned weaveDNS, dubbed ‘Gossip DNS’
  • Registrations are broadcast to all weaveDNS instances
  • Registered entries are stored in-memory and handle lookups locally
  • Weave router’s gossip implementation periodically synchronizes DNS mappings between peers
  • Ability to recover from network partitions and other transient failures
  • Each peer is aware of the hostnames and IP addresses of all containers in the Weave network
  • weave launch now launches all weave components, including the router, weaveDNS and the proxy, greatly simplifying setup
  • weaveDNS is now embedded in the Weave router

Weave-based Network

In this post, we will reuse the Java Spring Music application from the last post. However, we will replace the project’s static dependencies on Docker links with Weave. This post will demonstrate the most basic features of Weave, using a single cluster. In a future post, we will demonstrate how easily Weave also integrates with multiple clusters.

All files for this post can be found in the swarm-weave branch of the GitHub Repository. Instructions to clone are below.

Configuration

If you recall from the previous post, the Docker Compose YAML file (docker-compose.yml) looked similar to this:

proxy:
  build: nginx/
  ports: ["80:80"]
  links:
   - app01
   - app02
  hostname: "proxy"

app01:
  build: tomcat/
  expose: ["8080"]
  ports: ["8180:8080"]
  links:
   - nosqldb
   - elk
  hostname: "app01"

app02:
  build: tomcat/
  expose: ["8080"]
  ports: ["8280:8080"]
  links:
   - nosqldb
   - elk
  hostname: "app02"

nosqldb:
  build: mongo/
  hostname: "nosqldb"
  volumes: ["/opt/mongodb:/data/db"]

elk:
  build: elk/
  ports:
   - "8081:80"
   - "8082:9200"
  expose: ["5000/udp"]

logspout:
  build: logspout/
  volumes: ["/var/run/docker.sock:/tmp/docker.sock"]
  links: ["elk"]
  ports: ["8083:80"]
  environment: ["ROUTE_URIS=logstash://elk:5000"]

Implementing Weave simplifies the docker-compose.yml considerably. Below is the new Weave version of the docker-compose.yml. The links option has been removed from all containers. Additionally, the hostnames have been removed, as they serve no real purpose moving forward. The logspout service’s environment option has been modified to use the elk container’s full name, music_elk_1, as opposed to the hostname.

The only addition is the volumes_from option on the proxy service. We must ensure that the two Tomcat containers start before the NGINX container. Previously, the links option indirectly provided this start-order guarantee.

proxy:
  build: nginx/
  ports:
   - "80:80"
  volumes_from:
   - app01
   - app02

app01:
  build: tomcat/
  expose:
   - "8080"
  ports:
   - "8180:8080"

app02:
  build: tomcat/
  expose:
   - "8080"
  ports:
   - "8280:8080"

nosqldb:
  build: mongo/
  volumes:
   - "/opt/mongodb:/data/db"

elk:
  build: elk/
  ports:
   - "8081:80"
   - "8082:9200"
  expose:
   - "5000/udp"

logspout:
  build: logspout/
  volumes:
   - "/var/run/docker.sock:/tmp/docker.sock"
  ports:
   - "8083:80"
  environment:
    - ROUTE_URIS=logstash://music_elk_1:5000

Next, we need to modify the NGINX configuration slightly. In the previous post, we referenced the Tomcat service names, as shown below.

upstream backend {
  server app01:8080;
  server app02:8080;
}

Weave will automatically add the two Tomcat containers’ names to the NGINX container’s /etc/hosts file. We simply reference those container names in NGINX’s upstream configuration.

upstream backend {
  server music_app01_1:8080;
  server music_app02_1:8080;
}

In an actual production environment, we would use a template, along with a service discovery tool such as Consul, to automatically populate the container names as containers are dynamically created or destroyed.
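
As a rough sketch of that approach (not part of this post’s project; it assumes a consul-template binary alongside NGINX, and Tomcat instances registered in Consul under the service name tomcat), the upstream block could be rendered from a template and NGINX reloaded whenever membership changes:

# hypothetical consul-template setup; the service name and file paths are assumptions
cat > backend.ctmpl <<'EOF'
upstream backend {
{{ range service "tomcat" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}
EOF

# re-render the upstream config and reload NGINX whenever the service list changes
consul-template \
  -template "backend.ctmpl:/etc/nginx/conf.d/backend.conf:nginx -s reload"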

Installing and Running Weave

After cloning this post’s GitHub repository, I recommend first installing and configuring Weave. Next, build the container host VM using Docker Machine. Lastly, build the containers using Docker Compose. The build_project.sh script below will take care of all the necessary steps.

#!/bin/sh

########################################################################
#
# title:          Build Complete Project
# author:         Gary A. Stafford (https://programmaticponderings.com)
# url:            https://github.com/garystafford/spring-music-docker
# description:    Clone and build complete Spring Music Docker project
#
# to run:         sh ./build_project.sh
#
########################################################################

# install latest weave
curl -L git.io/weave -o /usr/local/bin/weave && 
chmod a+x /usr/local/bin/weave && 
weave version

# clone project
git clone -b swarm-weave --single-branch \
  https://github.com/garystafford/spring-music-docker.git && 
cd spring-music-docker

# build VM
docker-machine create --driver virtualbox springmusic --debug

# create directory to store mongo data on host
docker-machine ssh springmusic mkdir /opt/mongodb

# set new environment
docker-machine env springmusic && 
eval "$(docker-machine env springmusic)"

# launch weave and weaveproxy/weaveDNS containers
weave launch &&
tlsargs=$(docker-machine ssh springmusic \
  "cat /proc/\$(pgrep /usr/local/bin/docker)/cmdline | tr '\0' '\n' | grep ^--tls | tr '\n' ' '")
weave launch-proxy $tlsargs &&
eval "$(weave env)" &&

# test/confirm weave status
weave status &&
docker logs weaveproxy

# pull and build images and containers
# this step will take several minutes to pull images first time
docker-compose -f docker-compose.yml -p music up -d

# wait for container apps to fully start
sleep 15

# test weave (should list entries for all containers)
docker exec -it music_proxy_1 cat /etc/hosts 

# run quick test of Spring Music application
for i in $(seq 1 10)
do
  curl -I --url $(docker-machine ip springmusic)
done

One last test: to ensure that MongoDB is using the host’s volume, and not storing data in the MongoDB container’s /data/db directory, execute the command docker-machine ssh springmusic ls -Alh /opt/mongodb. You should see MongoDB-related content stored there.
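
You can also confirm the bind mount from the container’s perspective; a quick check using docker inspect (the music_nosqldb_1 container name matches the docker ps output further below):

# should print [/opt/mongodb:/data/db], the host path mapped to the container's data directory
docker inspect --format '{{ .HostConfig.Binds }}' music_nosqldb_1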

Testing Weave

Running the weave status command, we should observe that Weave returned a status similar to the example below:

gstafford@gstafford-X555LA:$ weave status

       Version: v1.1.0

       Service: router
      Protocol: weave 1..2
          Name: 6a:69:11:1b:b4:e3(springmusic)
    Encryption: disabled
 PeerDiscovery: enabled
       Targets: 0
   Connections: 0
         Peers: 1

       Service: ipam
     Consensus: achieved
         Range: [10.32.0.0-10.48.0.0)
 DefaultSubnet: 10.32.0.0/12

       Service: dns
        Domain: weave.local.
           TTL: 1
       Entries: 2

       Service: proxy
       Address: tcp://192.168.99.100:12375

Running the docker exec -it music_proxy_1 cat /etc/hosts command, we should observe that WeaveDNS has automatically added entries for all containers to the music_proxy_1 container’s /etc/hosts file. WeaveDNS will also remove the addresses of any containers that die. This offers a simple way to implement redundancy.

gstafford@gstafford-X555LA:$ docker exec -it music_proxy_1 cat /etc/hosts

# modified by weave
10.32.0.6       music_proxy_1
127.0.0.1       localhost

172.17.0.131    weave weave.bridge
172.17.0.133    music_elk_1 music_elk_1.bridge
172.17.0.134    music_nosqldb_1 music_nosqldb_1.bridge
172.17.0.138    music_app02_1 music_app02_1.bridge
172.17.0.139    music_logspout_1 music_logspout_1.bridge
172.17.0.140    music_app01_1 music_app01_1.bridge

::1             ip6-localhost ip6-loopback localhost
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

Weave resolves each container’s name to its eth0 IP address, created by Docker’s docker0 Ethernet bridge. Each container can now communicate with all other containers in the cluster.
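
As a simple check of name resolution, you can look up one of the Tomcat backends from inside the proxy container; this assumes getent is available in the proxy image, which it is for Debian-based images such as the official nginx image:

# resolve a Tomcat backend by name from inside the proxy container;
# prints the address Weave added to the hosts file above
docker exec -it music_proxy_1 getent hosts music_app01_1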

Weave eth0 Network

Results

Resulting virtual machines, network, images, and containers:

gstafford@gstafford-X555LA:$ docker-machine ls
NAME            ACTIVE   DRIVER       STATE     URL                         SWARM
springmusic     *        virtualbox   Running   tcp://192.168.99.100:2376   


gstafford@gstafford-X555LA:$ docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
music_app02            latest              632c782010ac        3 days ago          370.4 MB
music_app01            latest              632c782010ac        3 days ago          370.4 MB
music_proxy            latest              171624a31920        3 days ago          144.5 MB
music_nosqldb          latest              2b3b46af5ef3        3 days ago          260.8 MB
music_elk              latest              5c18dae84b26        3 days ago          1.05 GB
weaveworks/weaveexec   v1.1.0              69c6bfa7934f        5 days ago          58.18 MB
weaveworks/weave       v1.1.0              5dccf0533147        5 days ago          17.53 MB
music_logspout         latest              fe64597ab0c4        8 days ago          24.36 MB
gliderlabs/logspout    master              40a52d6ca462        9 days ago          14.75 MB
willdurand/elk         latest              04cd7334eb5d        2 weeks ago         1.05 GB
tomcat                 latest              6fe1972e6b08        2 weeks ago         347.7 MB
mongo                  latest              5c9464760d54        2 weeks ago         260.8 MB
nginx                  latest              cd3cf76a61ee        2 weeks ago         132.9 MB


gstafford@gstafford-X555LA:$ weave ps
weave:expose 6a:69:11:1b:b4:e3
2bce66e3b33b fa:07:7e:85:37:1b 10.32.0.5/12
604dbbc4473f 6a:73:8d:54:cc:fe 10.32.0.4/12
ea64b42cf5a1 c2:69:73:84:67:69 10.32.0.3/12
85b1e8a9b8d0 aa:f7:12:cd:b7:13 10.32.0.6/12
81041fc97d1f 2e:1e:82:67:89:5d 10.32.0.2/12
e80c04bdbfaf 1e:95:a5:b2:9d:30 10.32.0.1/12
18c22e7f1c33 7e:43:54:db:8d:b8


gstafford@gstafford-X555LA:$ docker ps -a
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS              PORTS                                                                                            NAMES
2bce66e3b33b        music_app01                   "/w/w catalina.sh run"   3 days ago          Up 3 days           0.0.0.0:8180->8080/tcp                                                                           music_app01_1
604dbbc4473f        music_logspout                "/w/w /bin/logspout"     3 days ago          Up 3 days           8000/tcp, 0.0.0.0:8083->80/tcp                                                                   music_logspout_1
ea64b42cf5a1        music_app02                   "/w/w catalina.sh run"   3 days ago          Up 3 days           0.0.0.0:8280->8080/tcp                                                                           music_app02_1
85b1e8a9b8d0        music_proxy                   "/w/w nginx -g 'daemo"   3 days ago          Up 3 days           0.0.0.0:80->80/tcp, 443/tcp                                                                      music_proxy_1
81041fc97d1f        music_nosqldb                 "/w/w /entrypoint.sh "   3 days ago          Up 3 days           27017/tcp                                                                                        music_nosqldb_1
e80c04bdbfaf        music_elk                     "/w/w /usr/bin/superv"   3 days ago          Up 3 days           5000/0, 0.0.0.0:8081->80/tcp, 0.0.0.0:8082->9200/tcp                                             music_elk_1
8eafc6225fc1        weaveworks/weaveexec:v1.1.0   "/home/weave/weavepro"   3 days ago          Up 3 days                                                                                                            weaveproxy
18c22e7f1c33        weaveworks/weave:v1.1.0       "/home/weave/weaver -"   3 days ago          Up 3 days           172.17.42.1:53->53/udp, 0.0.0.0:6783->6783/tcp, 0.0.0.0:6783->6783/udp, 172.17.42.1:53->53/tcp   weave

Spring Music Application Links

Assuming the springmusic VM is running at 192.168.99.100, these are the accessible URLs for each of the environment’s major components:

* The Tomcat user name is admin and the password is t0mcat53rv3r.
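
For example, you could confirm the credentials with curl; a hypothetical check that assumes the Tomcat manager application is deployed in the Tomcat image and the first instance is published on port 8180:

# check the Tomcat manager on the first Tomcat instance using basic auth
curl -I -u admin:t0mcat53rv3r http://192.168.99.100:8180/manager/html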

