Posts Tagged Docker Compose
Spring Music Revisited: Java-Spring-MongoDB Web App with Docker 1.12
Posted by Gary A. Stafford in Build Automation, Continuous Delivery, DevOps, Enterprise Software Development, Java Development, Software Development on August 7, 2016
Build, test, deploy, and monitor a multi-container, MongoDB-backed, Java Spring web application, using the new Docker 1.12.
Introduction
This post and the post’s example project represent an update to a previous post, Build and Deploy a Java-Spring-MongoDB Application using Docker. This new post incorporates many improvements made in Docker 1.12, including the use of the new Docker Compose v2 YAML format. The post’s project was also updated to use Filebeat with ELK, as opposed to Logspout, which was used previously.
In this post, we will demonstrate how to build, test, deploy, and manage a Java Spring web application, hosted on Apache Tomcat, load-balanced by NGINX, monitored by ELK with Filebeat, and all containerized with Docker.
We will use a sample Java Spring application, Spring Music, available on GitHub from Cloud Foundry. The Spring Music sample record album collection application was originally designed to demonstrate the use of database services on Cloud Foundry, using the Spring Framework. Instead of Cloud Foundry, we will host the Spring Music application locally, using Docker on VirtualBox, and optionally on AWS.
All files necessary to build this project are stored on the docker_v2 branch of the garystafford/spring-music-docker repository on GitHub. The Spring Music source code is stored on the springmusic_v2 branch of the garystafford/spring-music repository, also on GitHub.
Application Architecture
The Java Spring Music application stack contains the following technologies: Java, Spring Framework, AngularJS, Bootstrap, jQuery, NGINX, Apache Tomcat, MongoDB, the ELK Stack, and Filebeat. Testing frameworks include the Spring MVC Test Framework, Mockito, Hamcrest, and JUnit.
A few changes were made to the original Spring Music application to make it work for this demonstration, including:
- Move from Java 1.7 to 1.8 (including a newer Tomcat version)
- Add unit tests for Continuous Integration demonstration purposes
- Modify the MongoDB configuration class to work with non-local, containerized MongoDB instances
- Add a Gradle warNoStatic task to build the WAR file without static assets
- Add a Gradle zipStatic task to ZIP up the application's static assets for deployment to NGINX
- Add a Gradle zipGetVersion task with a versioning scheme for build artifacts
- Add context.xml and MANIFEST.MF files to the WAR file
- Add a Log4j RollingFileAppender to send log entries to Filebeat
- Update versions of several dependencies, including Gradle, Spring, and Tomcat
We will use the following technologies to build, publish, deploy, and host the Java Spring Music application: Gradle, git, GitHub, Travis CI, Oracle VirtualBox, Docker, Docker Compose, Docker Machine, Docker Hub, and optionally, Amazon Web Services (AWS).
NGINX
To increase performance, the Spring Music web application's static content will be hosted by NGINX. The application's WAR file will be hosted by Apache Tomcat 8.5.4. Requests for non-static content will be proxied through NGINX on the front-end to a set of three load-balanced Tomcat instances on the back-end. To further increase application performance, NGINX will also be configured for browser caching of the static content. In many enterprise environments, the use of a Java EE application server, like Tomcat, is still common.
Reverse proxying and caching are configured through NGINX's default.conf file, in the server configuration section:
server {
  listen 80;
  server_name proxy;

  location ~* \/assets\/(css|images|js|template)\/* {
    root /usr/share/nginx/;
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    add_header Vary Accept-Encoding;
    access_log off;
  }
The three Tomcat instances will be manually configured for load-balancing using NGINX's default round-robin load-balancing algorithm. This is configured through the default.conf file, in the upstream configuration section:
upstream backend {
  server music_app_1:8080;
  server music_app_2:8080;
  server music_app_3:8080;
}
Client requests are received through port 80 on the NGINX server. NGINX redirects requests that are not for static assets to one of the three Tomcat instances on port 8080.
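You can watch the round-robin behavior from the Docker host by inspecting the custom Upstream-Address response header that NGINX adds (shown later in the Testing the Application section). A quick sketch, assuming the Docker Machine VM is named springmusic and the stack is already running:

# send six requests and print only the upstream Tomcat instance that served each one
for i in {1..6}; do
  curl -sI "http://$(docker-machine ip springmusic)" | grep Upstream-Address
done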
MongoDB
The Spring Music application was designed to work with a number of data stores, including MySQL, Postgres, Oracle, MongoDB, Redis, and H2, an in-memory Java SQL database. Given the choice of both SQL and NoSQL databases, we will select MongoDB.
The Spring Music application, hosted by Tomcat, will store and modify record album data in a single instance of MongoDB. MongoDB will be populated with a collection of album data from a JSON file, when the Spring Music application first creates the MongoDB database instance.
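To confirm the album data was loaded, you can query MongoDB directly inside its container. A minimal sketch, assuming the container is named mongodb, as in the Docker Compose file; the database and collection names below are illustrative, not confirmed, so substitute the names reported by the first command:

# list the databases MongoDB has created inside the mongodb container
docker exec -it mongodb mongo --quiet --eval "db.getMongo().getDBNames()"

# count documents in the album collection (replace 'music' and 'album' with the
# actual database and collection names reported above)
docker exec -it mongodb mongo music --quiet --eval "db.album.count()"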
ELK
Lastly, the ELK Stack, with Filebeat, will aggregate NGINX, Tomcat, and Java Log4j log entries, providing debugging and analytics for our demonstration. A similar method for aggregating logs, using Logspout instead of Filebeat, can be found in this previous post.
Continuous Integration
In this post's example, two build artifacts, a WAR file for the application and a ZIP file for the static web content, are built automatically by Travis CI whenever source code changes are pushed to the springmusic_v2 branch of the garystafford/spring-music repository on GitHub.
Following a successful build and a small number of unit tests, Travis CI pushes the build artifacts to the build-artifacts branch of the same GitHub project. The build-artifacts branch acts as a pseudo binary repository for the project, much like JFrog's Artifactory. These artifacts are used later by Docker to build the project's immutable Docker images and containers.
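The Dockerfiles shown later in this post pull these artifacts directly from the build-artifacts branch's raw GitHub URL. You can fetch them manually the same way; a quick sketch, using the same repository URL and artifact file names referenced in the Dockerfiles:

# download the latest WAR and static-content ZIP produced by Travis CI
REPO=https://github.com/garystafford/spring-music/raw/build-artifacts
wget -q "${REPO}/spring-music.war"
wget -q "${REPO}/spring-music-static.zip"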
Build Notifications
Travis CI pushes build notifications to a Slack channel, which eliminates the need to actively monitor Travis CI.
Automation Scripting
The .travis.yaml file, custom gradle.build Gradle tasks, and the deploy_travisci.sh script handle the Travis CI automation described above.
Travis CI .travis.yaml file:
language: java
jdk: oraclejdk8
before_install:
- chmod +x gradlew
before_deploy:
- chmod ugo+x deploy_travisci.sh
script:
- "./gradlew clean build"
- "./gradlew warNoStatic warCopy zipGetVersion zipStatic"
- sh ./deploy_travisci.sh
env:
  global:
  - GH_REF: github.com/garystafford/spring-music.git
  - secure: <GH_TOKEN_secure_hash_here>
  - secure: <COMMIT_AUTHOR_EMAIL_secure_hash_here>
notifications:
  slack:
  - secure: <SLACK_secure_hash_here>
Custom gradle.build tasks:
// new Gradle build tasks

task warNoStatic(type: War) {
  // omit the version from the war file name
  version = ''
  exclude '**/assets/**'
  manifest {
    attributes
      'Manifest-Version': '1.0',
      'Created-By': currentJvm,
      'Gradle-Version': GradleVersion.current().getVersion(),
      'Implementation-Title': archivesBaseName + '.war',
      'Implementation-Version': artifact_version,
      'Implementation-Vendor': 'Gary A. Stafford'
  }
}

task warCopy(type: Copy) {
  from 'build/libs'
  into 'build/distributions'
  include '**/*.war'
}

task zipGetVersion (type: Task) {
  ext.versionfile =
    new File("${projectDir}/src/main/webapp/assets/buildinfo.properties")
  versionfile.text = 'build.version=' + artifact_version
}

task zipStatic(type: Zip) {
  from 'src/main/webapp/assets'
  appendix = 'static'
  version = ''
}
The deploy_travisci.sh file:
#!/bin/bash

set -e

cd build/distributions

git init
git config user.name "travis-ci"
git config user.email "${COMMIT_AUTHOR_EMAIL}"
git add .
git commit -m "Deploy Travis CI Build #${TRAVIS_BUILD_NUMBER} artifacts to GitHub"
git push --force --quiet "https://${GH_TOKEN}@${GH_REF}" master:build-artifacts > /dev/null 2>&1
You can easily replicate the project's continuous integration automation using your choice of toolchains. GitHub and Bitbucket are good choices for distributed version control. For continuous integration and deployment, I recommend Travis CI, Semaphore, Codeship, or Jenkins. Couple those with a good persistent chat application, such as Slack or Atlassian's HipChat.
Building the Docker Environment
Make sure VirtualBox, Docker, Docker Compose, and Docker Machine are installed and running. At the time of this post, I have the following versions of software installed on my Mac:
- Mac OS X 10.11.6
- VirtualBox 5.0.26
- Docker 1.12.1
- Docker Compose 1.8.0
- Docker Machine 0.8.1
To build the project's VirtualBox VM, Docker images, and Docker containers, execute the build script using the following command: sh ./build_project.sh. A build script is useful when working with CI/CD automation tools, such as Jenkins CI or ThoughtWorks Go. However, to understand the build process, I suggest first running the individual commands locally.
#!/bin/sh

set -ex

# clone project
git clone -b docker_v2 --single-branch \
  https://github.com/garystafford/spring-music-docker.git music \
  && cd "$_"

# provision VirtualBox VM
docker-machine create --driver virtualbox springmusic

# set new environment
docker-machine env springmusic \
  && eval "$(docker-machine env springmusic)"

# mount a named volume on host to store mongo and elk data
# ** assumes your project folder is 'music' **
docker volume create --name music_data
docker volume create --name music_elk

# create bridge network for project
# ** assumes your project folder is 'music' **
docker network create -d bridge music_net

# build images and orchestrate start-up of containers (in this order)
docker-compose -p music up -d elk && sleep 15 \
  && docker-compose -p music up -d mongodb && sleep 15 \
  && docker-compose -p music up -d app \
  && docker-compose scale app=3 && sleep 15 \
  && docker-compose -p music up -d proxy && sleep 15

# optional: configure local DNS resolution for application URL
#echo "$(docker-machine ip springmusic) springmusic.com" | sudo tee --append /etc/hosts

# run a simple connectivity test of application
for i in {1..9}; do curl -I $(docker-machine ip springmusic); done
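Once the script completes, you can verify that Docker Compose started everything as expected. A few optional checks, assuming the project name music used throughout this post:

# confirm all six containers are up and see their port mappings
docker-compose -p music ps

# tail the logs of a single service, e.g. the NGINX proxy
docker-compose -p music logs proxy

# confirm the three Tomcat instances were scaled correctly
docker ps --filter "name=music_app"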
Deploying to AWS
By simply changing the Docker Machine driver from VirtualBox to AWS EC2, and providing your AWS credentials, the springmusic environment may also be built on AWS.
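Docker Machine's amazonec2 driver handles the EC2 provisioning. A minimal sketch, assuming your AWS credentials are available to Docker Machine (for example, as environment variables or in your AWS credentials file); the region, VPC ID, and machine name below are placeholders, not values from this project:

# provision a Docker host on AWS EC2 instead of VirtualBox
docker-machine create --driver amazonec2 \
  --amazonec2-region us-east-1 \
  --amazonec2-vpc-id vpc-0example \
  springmusic-aws

# point the Docker client at the new AWS host
eval "$(docker-machine env springmusic-aws)"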
Build Process
Docker Machine provisions a single VirtualBox springmusic VM on which to host the project's containers. VirtualBox provides a quick and easy solution that can be run locally for initial development and testing of the application.
Next, the script creates a Docker data volume and project-specific Docker bridge network.
Next, using the project’s individual Dockerfiles, Docker Compose pulls base Docker images from Docker Hub for NGINX, Tomcat, ELK, and MongoDB. Project-specific immutable Docker images are then built for NGINX, Tomcat, and MongoDB. While constructing the project-specific Docker images for NGINX and Tomcat, the latest Spring Music build artifacts are pulled and installed into the corresponding Docker images.
Docker Compose builds and deploys (6) containers onto the VirtualBox VM: (1) NGINX, (3) Tomcat, (1) MongoDB, and (1) ELK.
The NGINX Dockerfile:
# NGINX image with build artifact
FROM nginx:latest

MAINTAINER Gary A. Stafford <garystafford@rochester.rr.com>

ENV REFRESHED_AT 2016-09-17
ENV GITHUB_REPO https://github.com/garystafford/spring-music/raw/build-artifacts
ENV STATIC_FILE spring-music-static.zip

RUN apt-get update -qq \
  && apt-get install -qqy curl wget unzip nano \
  && apt-get clean \
  \
  && wget -O /tmp/${STATIC_FILE} ${GITHUB_REPO}/${STATIC_FILE} \
  && unzip /tmp/${STATIC_FILE} -d /usr/share/nginx/assets/

COPY default.conf /etc/nginx/conf.d/default.conf

# tweak nginx image set-up, remove log symlinks
RUN rm /var/log/nginx/access.log /var/log/nginx/error.log

# install Filebeat
ENV FILEBEAT_VERSION=filebeat_1.2.3_amd64.deb
RUN curl -L -O https://download.elastic.co/beats/filebeat/${FILEBEAT_VERSION} \
  && dpkg -i ${FILEBEAT_VERSION} \
  && rm ${FILEBEAT_VERSION}

# configure Filebeat
ADD filebeat.yml /etc/filebeat/filebeat.yml

# CA cert
RUN mkdir -p /etc/pki/tls/certs
ADD logstash-beats.crt /etc/pki/tls/certs/logstash-beats.crt

# start Filebeat
ADD ./start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
CMD [ "/usr/local/bin/start.sh" ]
The Tomcat Dockerfile:
# Apache Tomcat image with build artifact
FROM tomcat:8.5.4-jre8

MAINTAINER Gary A. Stafford <garystafford@rochester.rr.com>

ENV REFRESHED_AT 2016-09-17
ENV GITHUB_REPO https://github.com/garystafford/spring-music/raw/build-artifacts
ENV APP_FILE spring-music.war
ENV TERM xterm
ENV JAVA_OPTS -Djava.security.egd=file:/dev/./urandom

RUN apt-get update -qq \
  && apt-get install -qqy curl wget \
  && apt-get clean \
  \
  && touch /var/log/spring-music.log \
  && chmod 666 /var/log/spring-music.log \
  \
  && wget -q -O /usr/local/tomcat/webapps/ROOT.war ${GITHUB_REPO}/${APP_FILE} \
  && mv /usr/local/tomcat/webapps/ROOT /usr/local/tomcat/webapps/_ROOT

COPY tomcat-users.xml /usr/local/tomcat/conf/tomcat-users.xml

# install Filebeat
ENV FILEBEAT_VERSION=filebeat_1.2.3_amd64.deb
RUN curl -L -O https://download.elastic.co/beats/filebeat/${FILEBEAT_VERSION} \
  && dpkg -i ${FILEBEAT_VERSION} \
  && rm ${FILEBEAT_VERSION}

# configure Filebeat
ADD filebeat.yml /etc/filebeat/filebeat.yml

# CA cert
RUN mkdir -p /etc/pki/tls/certs
ADD logstash-beats.crt /etc/pki/tls/certs/logstash-beats.crt

# start Filebeat
ADD ./start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
CMD [ "/usr/local/bin/start.sh" ]
Docker Compose v2 YAML
This post was recently updated for Docker 1.12, and to use the Docker Compose v2 YAML file format. The post's docker-compose.yml takes advantage of improvements in Docker 1.12 and Docker Compose v2 YAML. Improvements to the YAML file include eliminating the need to link containers and expose ports, and the addition of named networks and volumes.
version: '2'

services:
  proxy:
    build: nginx/
    ports:
      - 80:80
    networks:
      - net
    depends_on:
      - app
    hostname: proxy
    container_name: proxy

  app:
    build: tomcat/
    ports:
      - 8080
    networks:
      - net
    depends_on:
      - mongodb
    hostname: app

  mongodb:
    build: mongodb/
    ports:
      - 27017:27017
    networks:
      - net
    depends_on:
      - elk
    hostname: mongodb
    container_name: mongodb
    volumes:
      - music_data:/data/db
      - music_data:/data/configdb

  elk:
    image: sebp/elk:latest
    ports:
      - 5601:5601
      - 9200:9200
      - 5044:5044
      - 5000:5000
    networks:
      - net
    volumes:
      - music_elk:/var/lib/elasticsearch
    hostname: elk
    container_name: elk

volumes:
  music_data:
    external: true
  music_elk:
    external: true

networks:
  net:
    driver: bridge
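Because the music_data and music_elk volumes and the music_net network are declared external, they must exist before docker-compose up runs, which is why the build script creates them first. A quick way to verify them afterward:

# inspect the pre-created named volumes and the bridge network
docker volume inspect music_data music_elk
docker network inspect music_net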
The Results
Below are the results of building the project.
# Resulting Docker Machine VirtualBox VM:
$ docker-machine ls
NAME          ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
springmusic   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.1

# Resulting external volumes:
$ docker volume ls
DRIVER   VOLUME NAME
local    music_data
local    music_elk

# Resulting bridge network:
$ docker network ls
NETWORK ID     NAME        DRIVER   SCOPE
f564dfa1b440   music_net   bridge   local

# Resulting Docker images - (4) base images and (3) project images:
$ docker images
REPOSITORY      TAG          IMAGE ID       CREATED             SIZE
music_proxy     latest       7a8dd90bcf32   About an hour ago   250.2 MB
music_app       latest       c93c713d03b8   About an hour ago   393 MB
music_mongodb   latest       fbcbbe9d4485   25 hours ago        366.4 MB
tomcat          8.5.4-jre8   98cc750770ba   2 days ago          334.5 MB
mongo           latest       48b8b08dca4d   2 days ago          366.4 MB
nginx           latest       4efb2fcdb1ab   10 days ago         183.4 MB
sebp/elk        latest       07a3e78b01f5   13 days ago         884.5 MB

# Resulting (6) Docker containers
$ docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED        STATUS             PORTS                                                                                                       NAMES
b33922767517   music_proxy       "/usr/local/bin/start"   3 hours ago    Up 13 minutes      0.0.0.0:80->80/tcp, 443/tcp                                                                                 proxy
e16d2372f2df   music_app         "/usr/local/bin/start"   3 hours ago    Up About an hour   0.0.0.0:32770->8080/tcp                                                                                     music_app_3
6b7accea7156   music_app         "/usr/local/bin/start"   3 hours ago    Up About an hour   0.0.0.0:32769->8080/tcp                                                                                     music_app_2
2e94f766df1b   music_app         "/usr/local/bin/start"   3 hours ago    Up About an hour   0.0.0.0:32768->8080/tcp                                                                                     music_app_1
71f8dc574148   sebp/elk:latest   "/usr/local/bin/start"   3 hours ago    Up About an hour   0.0.0.0:5000->5000/tcp, 0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 0.0.0.0:9200->9200/tcp, 9300/tcp   elk
f7e7d1af7cca   music_mongodb     "/entrypoint.sh mongo"   20 hours ago   Up About an hour   0.0.0.0:27017->27017/tcp                                                                                    mongodb
Testing the Application
Below are partial results of the curl test, hitting the NGINX endpoint. Note the different IP addresses in the Upstream-Address field between requests. This test proves NGINX's round-robin load-balancing is working across the three Tomcat application instances: music_app_1, music_app_2, and music_app_3.
Also, note the sharp decrease in the Request-Time between the first three requests and the subsequent three requests. The Upstream-Response-Time to the Tomcat instances doesn't change, yet the total Request-Time is much shorter, due to NGINX's caching of the application's static assets.
for i in {1..6}; do curl -I $(docker-machine ip springmusic);done

HTTP/1.1 200
Server: nginx/1.11.4
Date: Sat, 17 Sep 2016 18:33:50 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2094
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2094-1473924940000"
Last-Modified: Thu, 15 Sep 2016 07:35:40 GMT
Content-Language: en
Request-Time: 0.575
Upstream-Address: 172.18.0.4:8080
Upstream-Response-Time: 1474137230.048

HTTP/1.1 200
Server: nginx/1.11.4
Date: Sat, 17 Sep 2016 18:33:51 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2094
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2094-1473924940000"
Last-Modified: Thu, 15 Sep 2016 07:35:40 GMT
Content-Language: en
Request-Time: 0.711
Upstream-Address: 172.18.0.5:8080
Upstream-Response-Time: 1474137230.865

HTTP/1.1 200
Server: nginx/1.11.4
Date: Sat, 17 Sep 2016 18:33:52 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2094
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2094-1473924940000"
Last-Modified: Thu, 15 Sep 2016 07:35:40 GMT
Content-Language: en
Request-Time: 0.326
Upstream-Address: 172.18.0.6:8080
Upstream-Response-Time: 1474137231.812

# assets now cached...

HTTP/1.1 200
Server: nginx/1.11.4
Date: Sat, 17 Sep 2016 18:33:53 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2094
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2094-1473924940000"
Last-Modified: Thu, 15 Sep 2016 07:35:40 GMT
Content-Language: en
Request-Time: 0.012
Upstream-Address: 172.18.0.4:8080
Upstream-Response-Time: 1474137233.111

HTTP/1.1 200
Server: nginx/1.11.4
Date: Sat, 17 Sep 2016 18:33:53 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2094
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2094-1473924940000"
Last-Modified: Thu, 15 Sep 2016 07:35:40 GMT
Content-Language: en
Request-Time: 0.017
Upstream-Address: 172.18.0.5:8080
Upstream-Response-Time: 1474137233.350

HTTP/1.1 200
Server: nginx/1.11.4
Date: Sat, 17 Sep 2016 18:33:53 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2094
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2094-1473924940000"
Last-Modified: Thu, 15 Sep 2016 07:35:40 GMT
Content-Language: en
Request-Time: 0.013
Upstream-Address: 172.18.0.6:8080
Upstream-Response-Time: 1474137233.594
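You can also confirm the browser-caching headers NGINX adds for static assets with a single request; a rough sketch, assuming an asset path under /assets/css/ (the exact file name will vary with the application build):

# inspect the caching headers returned for a static asset served by NGINX
curl -sI "http://$(docker-machine ip springmusic)/assets/css/bootstrap.css" \
  | grep -E "Cache-Control|Expires|Pragma"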
Spring Music Application Links
Assuming the springmusic VM is running at 192.168.99.100, the following links can be used to access various project endpoints. Note the (3) Tomcat instances each map to randomly exposed host ports. NGINX does not use these exposed ports; it communicates with each Tomcat instance internally on port 8080. The exposed port is only required if you want access to the Tomcat Web Console. The port shown below, 32771, is merely used as an example.
- Spring Music Application: 192.168.99.100
- NGINX Status: 192.168.99.100/nginx_status
- Tomcat Web Console – music_app_1*: 192.168.99.100:32771/manager
- Environment Variables – music_app_1: 192.168.99.100:32771/env
- Album List (RESTful endpoint) – music_app_1: 192.168.99.100:32771/albums
- Elasticsearch Info: 192.168.99.100:9200
- Elasticsearch Status: 192.168.99.100:9200/_status?pretty
- Kibana Web Console: 192.168.99.100:5601
* The Tomcat user name is admin and the password is t0mcat53rv3r.
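Since Docker assigns the external Tomcat ports at random, you can look up the actual mapping for each instance before opening the Tomcat Web Console; a quick check:

# show which host port maps to port 8080 inside each Tomcat container
for c in music_app_1 music_app_2 music_app_3; do
  echo "${c}: $(docker port ${c} 8080)"
done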
.
Helpful Links
- Cloud Foundry’s Spring Music Example
- Getting Started with Gradle for Java
- Introduction to Gradle
- Spring Framework
- Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching
- Common conversion patterns for log4j’s PatternLayout
- Spring @PropertySource example
- Java log4j logging
TODOs
- Automate the Docker image build and publish processes
- Automate the Docker container build and deploy processes
- Automate post-deployment verification testing of project infrastructure
- Add Docker Swarm multi-host capabilities with overlay networking
- Update Spring Music with latest CF project revisions
- Include scripting example to stand-up project on AWS
- Add Consul and Consul Template for NGINX configuration
Using Weave to Network a Docker Multi-Container Java Application
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development, Java Development, Software Development on September 17, 2015
Use the latest version of Weaveworks’ Weave Net to network a multi-container, Dockerized Java Spring web application.
Introduction
The last post demonstrated how to build and deploy the Java Spring Music application to a VirtualBox, multi-container test environment. The environment contained (1) NGINX container, (2) load-balanced Tomcat containers, (1) MongoDB container, (1) ELK Stack container, and (1) Logspout container, all on one VM.
In that post, we used Docker's links option. The links option, which modifies the container's /etc/hosts file, allows two Docker containers to communicate with each other. For example, the NGINX container is linked to both Tomcat containers:
proxy:
  build: nginx/
  ports:
    - "80:80"
  links:
    - app01
    - app02
Although container linking works, links are not very practical beyond a small number of static containers or a single container host. With linking, you must explicitly define each service-to-container relationship you want Docker to configure. Linking is also not an option for connecting containers across multiple virtual machine container hosts with Docker Swarm. With Docker Networking still in its early 'experimental' stages and the Swarm limitation, it's hard to foresee the use of linking beyond limited development and test environments.
Weave Net
Weave Net, aka Weave, is one of a trio of products developed by Weaveworks. The other two members of the trio are Weave Run and Weave Scope. According to Weaveworks' website, 'Weave Net connects all your containers into a transparent, dynamic and resilient mesh. This is one of the easiest ways to set up clustered applications that run anywhere.' Weave allows us to eliminate the dependency on the links option to connect our containers. Weave does all the linking of containers for us, automatically.
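At a high level, the Weave workflow used later in this post boils down to a few commands; a condensed sketch of what the build script below automates:

# start the Weave router, weaveDNS, and the proxy on the current Docker host
weave launch

# point the Docker client at the Weave proxy so new containers join the Weave network
eval "$(weave env)"

# confirm the router, IPAM, DNS, and proxy services are running
weave status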
Weave v1.1.0
If you worked with previous editions of Weave, you will appreciate that Weave versions v1.0.x and v1.1.0 are significant steps forward in the evolution of Weave. Weaveworks' GitHub Weave Release page details the many improvements. I also suggest reading Weave 'Gossip' DNS, on Weaveworks' blog, before continuing. The post details the improvements of Weave v1.1.0. Some of the key new features include:
- Completely redesigned weaveDNS, dubbed ‘Gossip DNS’
- Registrations are broadcast to all weaveDNS instances
- Registered entries are stored in-memory and handle lookups locally
- Weave router’s gossip implementation periodically synchronizes DNS mappings between peers
- Ability to recover from network partitions and other transient failures
- Each peer is aware of the hostnames and IP address of all containers in the Weave network.
- weave launch now launches all Weave components, including the router, weaveDNS, and the proxy, greatly simplifying setup
- weaveDNS is now embedded in the Weave router
Weave-based Network
In this post, we will reuse the Java Spring Music application from the last post. However, we will replace the project’s static dependencies on Docker links with Weave. This post will demonstrate the most basic features of Weave, using a single cluster. In a future post, we will demonstrate how easily Weave also integrates with multiple clusters.
All files for this post can be found in the swarm-weave branch of the GitHub Repository. Instructions to clone are below.
Configuration
If you recall from the previous post, the Docker Compose YAML file (docker-compose.yml) looked similar to this:
proxy:
  build: nginx/
  ports:
    - "80:80"
  links:
    - app01
    - app02
  hostname: "proxy"

app01:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8180:8080"
  links:
    - nosqldb
    - elk
  hostname: "app01"

app02:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8280:8080"
  links:
    - nosqldb
    - elk
  hostname: "app02"

nosqldb:
  build: mongo/
  hostname: "nosqldb"
  volumes:
    - "/opt/mongodb:/data/db"

elk:
  build: elk/
  ports:
    - "8081:80"
    - "8082:9200"
  expose:
    - "5000/udp"

logspout:
  build: logspout/
  volumes:
    - "/var/run/docker.sock:/tmp/docker.sock"
  links:
    - elk
  ports:
    - "8083:80"
  environment:
    - ROUTE_URIS=logstash://elk:5000
Implementing Weave simplifies the docker-compose.yml considerably. Below is the new Weave version of the docker-compose.yml. The links option has been removed from all containers. Additionally, the hostnames have been removed, as they serve no real purpose moving forward. The logspout service's environment option has been modified to use the elk container's full name, as opposed to the hostname.
The only addition is the volumes_from option to the proxy service. We must ensure that the two Tomcat containers start before the NGINX container. The links option indirectly provided this start-up ordering, previously.
proxy:
  build: nginx/
  ports:
    - "80:80"
  volumes_from:
    - app01
    - app02

app01:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8180:8080"

app02:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8280:8080"

nosqldb:
  build: mongo/
  volumes:
    - "/opt/mongodb:/data/db"

elk:
  build: elk/
  ports:
    - "8081:80"
    - "8082:9200"
  expose:
    - "5000/udp"

logspout:
  build: logspout/
  volumes:
    - "/var/run/docker.sock:/tmp/docker.sock"
  ports:
    - "8083:80"
  environment:
    - ROUTE_URIS=logstash://music_elk_1:5000
Next, we need to modify the NGINX configuration, slightly. In the previous post we referenced the Tomcat service names, as shown below.
upstream backend {
  server app01:8080;
  server app02:8080;
}
Weave will automatically add the two Tomcat container names to the NGINX container's /etc/hosts file. We will add these Tomcat container names to NGINX's configuration file.
upstream backend {
  server music_app01_1:8080;
  server music_app02_1:8080;
}
In an actual Production environment, we would use a template, along with a service discovery tool, such as Consul, to automatically populate the container names, as containers are dynamically created or destroyed.
Installing and Running Weave
After cloning this post's GitHub repository, I recommend first installing and configuring Weave. Next, build the container host VM using Docker Machine. Lastly, build the containers using Docker Compose. The build_project.sh script below takes care of all the necessary steps.
#!/bin/sh

########################################################################
#
# title: Build Complete Project
# author: Gary A. Stafford (https://programmaticponderings.com)
# url: https://github.com/garystafford/spring-music-docker
# description: Clone and build complete Spring Music Docker project
#
# to run: sh ./build_project.sh
#
########################################################################

# install latest weave
curl -L git.io/weave -o /usr/local/bin/weave &&
chmod a+x /usr/local/bin/weave &&
weave version

# clone project
git clone -b swarm-weave \
  --single-branch --branch swarm-weave \
  https://github.com/garystafford/spring-music-docker.git &&
cd spring-music-docker

# build VM
docker-machine create --driver virtualbox springmusic --debug

# create directory to store mongo data on host
docker-machine ssh springmusic mkdir /opt/mongodb

# set new environment
docker-machine env springmusic &&
eval "$(docker-machine env springmusic)"

# launch weave and weaveproxy/weaveDNS containers
weave launch &&
tlsargs=$(docker-machine ssh springmusic \
  "cat /proc/\$(pgrep /usr/local/bin/docker)/cmdline | tr '\0' '\n' | grep ^--tls | tr '\n' ' '")
weave launch-proxy $tlsargs &&
eval "$(weave env)" &&

# test/confirm weave status
weave status &&
docker logs weaveproxy

# pull and build images and containers
# this step will take several minutes to pull images first time
docker-compose -f docker-compose.yml -p music up -d

# wait for container apps to fully start
sleep 15

# test weave (should list entries for all containers)
docker exec -it music_proxy_1 cat /etc/hosts

# run quick test of Spring Music application
for i in {1..10}
do
  curl -I --url $(docker-machine ip springmusic)
done
One last test: to ensure that MongoDB is using the host's volume, and not storing data in the MongoDB container's /data/db directory, execute the following command: docker-machine ssh springmusic ls -Alh /opt/mongodb. You should see MongoDB-related content being stored here.
Testing Weave
Running the weave status command, we should observe that Weave returns a status similar to the example below:
gstafford@gstafford-X555LA:$ weave status

       Version: v1.1.0

       Service: router
      Protocol: weave 1..2
          Name: 6a:69:11:1b:b4:e3(springmusic)
    Encryption: disabled
 PeerDiscovery: enabled
       Targets: 0
   Connections: 0
         Peers: 1

       Service: ipam
     Consensus: achieved
         Range: [10.32.0.0-10.48.0.0)
 DefaultSubnet: 10.32.0.0/12

       Service: dns
        Domain: weave.local.
           TTL: 1
       Entries: 2

       Service: proxy
       Address: tcp://192.168.99.100:12375
Running the docker exec -it music_proxy_1 cat /etc/hosts command, we should observe that weaveDNS has automatically added entries for all containers to the music_proxy_1 container's /etc/hosts file. weaveDNS will also remove the addresses of any containers that die. This offers a simple way to implement redundancy.
gstafford@gstafford-X555LA:$ docker exec -it music_proxy_1 cat /etc/hosts

# modified by weave
10.32.0.6       music_proxy_1
127.0.0.1       localhost
172.17.0.131    weave weave.bridge
172.17.0.133    music_elk_1 music_elk_1.bridge
172.17.0.134    music_nosqldb_1 music_nosqldb_1.bridge
172.17.0.138    music_app02_1 music_app02_1.bridge
172.17.0.139    music_logspout_1 music_logspout_1.bridge
172.17.0.140    music_app01_1 music_app01_1.bridge
::1             ip6-localhost ip6-loopback localhost
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
Weave resolves each container's name to its eth0 IP address, created by Docker's docker0 Ethernet bridge. Each container can now communicate with all other containers in the cluster.
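A simple way to confirm this container-to-container connectivity is to fetch the application from one container using the name weaveDNS registered for another; for example, from the NGINX proxy container to one of the Tomcat containers (wget is installed in the project's NGINX image):

# from inside the proxy container, request the Spring Music page from a Tomcat
# instance by its Weave-registered name
docker exec -it music_proxy_1 wget -q -O /dev/null http://music_app01_1:8080 && echo "reachable"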
Results
Resulting virtual machines, network, images, and containers:
gstafford@gstafford-X555LA:$ docker-machine ls
NAME          ACTIVE   DRIVER       STATE     URL                         SWARM
springmusic   *        virtualbox   Running   tcp://192.168.99.100:2376

gstafford@gstafford-X555LA:$ docker images
REPOSITORY             TAG      IMAGE ID       CREATED       VIRTUAL SIZE
music_app02            latest   632c782010ac   3 days ago    370.4 MB
music_app01            latest   632c782010ac   3 days ago    370.4 MB
music_proxy            latest   171624a31920   3 days ago    144.5 MB
music_nosqldb          latest   2b3b46af5ef3   3 days ago    260.8 MB
music_elk              latest   5c18dae84b26   3 days ago    1.05 GB
weaveworks/weaveexec   v1.1.0   69c6bfa7934f   5 days ago    58.18 MB
weaveworks/weave       v1.1.0   5dccf0533147   5 days ago    17.53 MB
music_logspout         latest   fe64597ab0c4   8 days ago    24.36 MB
gliderlabs/logspout    master   40a52d6ca462   9 days ago    14.75 MB
willdurand/elk         latest   04cd7334eb5d   2 weeks ago   1.05 GB
tomcat                 latest   6fe1972e6b08   2 weeks ago   347.7 MB
mongo                  latest   5c9464760d54   2 weeks ago   260.8 MB
nginx                  latest   cd3cf76a61ee   2 weeks ago   132.9 MB

gstafford@gstafford-X555LA:$ weave ps
weave:expose 6a:69:11:1b:b4:e3
2bce66e3b33b fa:07:7e:85:37:1b 10.32.0.5/12
604dbbc4473f 6a:73:8d:54:cc:fe 10.32.0.4/12
ea64b42cf5a1 c2:69:73:84:67:69 10.32.0.3/12
85b1e8a9b8d0 aa:f7:12:cd:b7:13 10.32.0.6/12
81041fc97d1f 2e:1e:82:67:89:5d 10.32.0.2/12
e80c04bdbfaf 1e:95:a5:b2:9d:30 10.32.0.1/12
18c22e7f1c33 7e:43:54:db:8d:b8

gstafford@gstafford-X555LA:$ docker ps -a
CONTAINER ID   IMAGE                         COMMAND                  CREATED      STATUS      PORTS                                                                                            NAMES
2bce66e3b33b   music_app01                   "/w/w catalina.sh run"   3 days ago   Up 3 days   0.0.0.0:8180->8080/tcp                                                                           music_app01_1
604dbbc4473f   music_logspout                "/w/w /bin/logspout"     3 days ago   Up 3 days   8000/tcp, 0.0.0.0:8083->80/tcp                                                                   music_logspout_1
ea64b42cf5a1   music_app02                   "/w/w catalina.sh run"   3 days ago   Up 3 days   0.0.0.0:8280->8080/tcp                                                                           music_app02_1
85b1e8a9b8d0   music_proxy                   "/w/w nginx -g 'daemo"   3 days ago   Up 3 days   0.0.0.0:80->80/tcp, 443/tcp                                                                      music_proxy_1
81041fc97d1f   music_nosqldb                 "/w/w /entrypoint.sh "   3 days ago   Up 3 days   27017/tcp                                                                                        music_nosqldb_1
e80c04bdbfaf   music_elk                     "/w/w /usr/bin/superv"   3 days ago   Up 3 days   5000/0, 0.0.0.0:8081->80/tcp, 0.0.0.0:8082->9200/tcp                                             music_elk_1
8eafc6225fc1   weaveworks/weaveexec:v1.1.0   "/home/weave/weavepro"   3 days ago   Up 3 days                                                                                                    weaveproxy
18c22e7f1c33   weaveworks/weave:v1.1.0       "/home/weave/weaver -"   3 days ago   Up 3 days   172.17.42.1:53->53/udp, 0.0.0.0:6783->6783/tcp, 0.0.0.0:6783->6783/udp, 172.17.42.1:53->53/tcp   weave
Spring Music Application Links
Assuming the springmusic VM is running at 192.168.99.100, these are the accessible URLs for each of the environment's major components:
- Spring Music: 192.168.99.100
- NGINX: 192.168.99.100/nginx_status
- Tomcat Node 1*: 192.168.99.100:8180/manager
- Tomcat Node 2*: 192.168.99.100:8280/manager
- Kibana: 192.168.99.100:8081
- Elasticsearch: 192.168.99.100:8082
- Elasticsearch: 192.168.99.100:8082/_status?pretty
- Logspout: 192.168.99.100:8083/logs
* The Tomcat user name is admin and the password is t0mcat53rv3r.
Helpful Links
Build and Deploy a Java-Spring-MongoDB Application using Docker
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development, Software Development on September 7, 2015
Build a multi-container, MongoDB-backed, Java Spring web application, and deploy to a test environment using Docker.
Introduction
Application Architecture
Spring Music Environment
Building the Environment
Spring Music Application Links
Helpful Links
Introduction
In this post, we will demonstrate how to build, deploy, and host a multi-tier Java application using Docker. For the demonstration, we will use a sample Java Spring application, available on GitHub from Cloud Foundry. Cloud Foundry’s Spring Music sample record album collection application was originally designed to demonstrate the use of database services on Cloud Foundry and Spring Framework. Instead of Cloud Foundry, we will host the Spring Music application using Docker with VirtualBox and optionally, AWS.
All files required to build this post's demonstration are located in the master branch of this GitHub repository. Instructions to clone the repository are below. The Java Spring Music application's source code, used in this post's demonstration, is located in the master branch of this GitHub repository.
A few changes were necessary to the original Spring Music application to make it work for this demonstration. At a high-level, the changes included:
- Modify the MongoDB configuration class to work with non-local MongoDB instances
- Add a Gradle warNoStatic task to build the WAR file without the static assets, which will be hosted separately in NGINX
- Create a new Gradle task, zipStatic, to ZIP up the application's static assets for deployment to NGINX
- Add a versioning scheme for build artifacts
- Add context.xml and MANIFEST.MF files to the WAR file
- Add a log4j syslog appender to send log entries to Logstash
- Update versions of several dependencies, including Gradle to 2.6
Application Architecture
The Java Spring Music application stack contains the following technologies:
The Spring Music web application's static content will be hosted by NGINX for increased performance. The application's WAR file will be hosted by Apache Tomcat. Requests for non-static content will be proxied through a single instance of NGINX on the front-end, to one of two load-balanced Tomcat instances on the back-end. NGINX will also be configured to allow for browser caching of the static content, to further increase application performance. Reverse proxying and caching are configured through NGINX's default.conf file's server configuration section:
server {
  listen 80;
  server_name localhost;

  location ~* \/assets\/(css|images|js|template)\/* {
    root /usr/share/nginx/;
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    add_header Vary Accept-Encoding;
    access_log off;
  }
The two Tomcat instances will be configured on NGINX, in a load-balancing pool, using NGINX's default round-robin load-balancing algorithm. This is configured through NGINX's default.conf file's upstream configuration section:
upstream backend {
  server app01:8080;
  server app02:8080;
}
The Spring Music application can be run with MySQL, Postgres, Oracle, MongoDB, Redis, or H2, an in-memory Java SQL database. Given the choice of both SQL and NoSQL databases available for use with the Spring Music application, we will select MongoDB.
The Spring Music application, hosted by Tomcat, will store and modify record album data in a single instance of MongoDB. MongoDB will be populated with a collection of album data when the Spring Music application first creates the MongoDB database instance.
Lastly, the ELK Stack with Logspout, will aggregate both Docker and Java Log4j log entries, providing debugging and analytics to our demonstration. I’ve used the same method for Docker and Java Log4j log entries, as detailed in this previous post.
Spring Music Environment
To build, deploy, and host the Java Spring Music application, we will use the following technologies:
- Gradle
- GitHub
- Travis CI
- git
- Oracle VirtualBox
- Docker
- Docker Compose
- Docker Machine
- Docker Hub
- optional: Amazon Web Services (AWS)
All files necessary to build this project are stored in the garystafford/spring-music-docker repository on GitHub. The Spring Music source code and build artifacts are stored in a separate garystafford/spring-music repository, also on GitHub.
Build artifacts are automatically built by Travis CI when changes are checked into the garystafford/spring-music repository on GitHub. Travis CI then overwrites the build artifacts back to a build-artifacts branch of that same project. The build-artifacts branch acts as a pseudo binary repository for the project. The .travis.yaml file, gradle.build file, and deploy.sh script handle these functions.
.travis.yaml file:
language: java
jdk: oraclejdk7
before_install:
- chmod +x gradlew
before_deploy:
- chmod ugo+x deploy.sh
script:
- bash ./gradlew clean warNoStatic warCopy zipGetVersion zipStatic
- bash ./deploy.sh
env:
  global:
  - GH_REF: github.com/garystafford/spring-music.git
  - secure: <secure hash here>
gradle.build file snippet:
// new Gradle build tasks

task warNoStatic(type: War) {
  // omit the version from the war file name
  version = ''
  exclude '**/assets/**'
  manifest {
    attributes
      'Manifest-Version': '1.0',
      'Created-By': currentJvm,
      'Gradle-Version': GradleVersion.current().getVersion(),
      'Implementation-Title': archivesBaseName + '.war',
      'Implementation-Version': artifact_version,
      'Implementation-Vendor': 'Gary A. Stafford'
  }
}

task warCopy(type: Copy) {
  from 'build/libs'
  into 'build/distributions'
  include '**/*.war'
}

task zipGetVersion (type: Task) {
  ext.versionfile =
    new File("${projectDir}/src/main/webapp/assets/buildinfo.properties")
  versionfile.text = 'build.version=' + artifact_version
}

task zipStatic(type: Zip) {
  from 'src/main/webapp/assets'
  appendix = 'static'
  version = ''
}
deploy.sh file:
#!/bin/bash
# reference: https://gist.github.com/domenic/ec8b0fc8ab45f39403dd

set -e # exit with nonzero exit code if anything fails

# go to the distributions directory and create a *new* Git repo
cd build/distributions && git init

# inside this git repo we'll pretend to be a new user
git config user.name "travis-ci"
git config user.email "auto-deploy@travis-ci.com"

# The first and only commit to this new Git repo contains all the
# files present with the commit message.
git add .
git commit -m "Deploy Travis CI build #${TRAVIS_BUILD_NUMBER} artifacts to GitHub"

# Force push from the current repo's master branch to the remote
# repo's build-artifacts branch. (All previous history on the gh-pages branch
# will be lost, since we are overwriting it.) We redirect any output to
# /dev/null to hide any sensitive credential data that might otherwise be exposed.
# Environment variables pre-configured on Travis CI.
git push --force --quiet "https://${GH_TOKEN}@${GH_REF}" master:build-artifacts > /dev/null 2>&1
Base Docker images, such as NGINX, Tomcat, and MongoDB, used to build the project’s images and subsequently the containers, are all pulled from Docker Hub.
The NGINX and Tomcat Dockerfiles pull the latest build artifacts down to build the project-specific versions of the NGINX and Tomcat Docker images used for this project. For example, the NGINX Dockerfile looks like:
# NGINX image with build artifact
FROM nginx:latest

MAINTAINER Gary A. Stafford <garystafford@rochester.rr.com>

ENV REFRESHED_AT 2015-09-20
ENV GITHUB_REPO https://github.com/garystafford/spring-music/raw/build-artifacts
ENV STATIC_FILE spring-music-static.zip

RUN apt-get update -y && \
  apt-get install wget unzip nano -y && \
  wget -O /tmp/${STATIC_FILE} ${GITHUB_REPO}/${STATIC_FILE} && \
  unzip /tmp/${STATIC_FILE} -d /usr/share/nginx/assets/

COPY default.conf /etc/nginx/conf.d/default.conf
Docker Machine builds a single VirtualBox VM. After building the VM, Docker Compose then builds and deploys (1) NGINX container, (2) load-balanced Tomcat containers, (1) MongoDB container, (1) ELK container, and (1) Logspout container onto the VM. Docker Machine's VirtualBox driver provides a basic solution that can be run locally for testing and development. The docker-compose.yml for the project is as follows:
proxy:
  build: nginx/
  ports:
    - "80:80"
  links:
    - app01
    - app02
  hostname: "proxy"

app01:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8180:8080"
  links:
    - nosqldb
    - elk
  hostname: "app01"

app02:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8280:8080"
  links:
    - nosqldb
    - elk
  hostname: "app02"

nosqldb:
  build: mongo/
  hostname: "nosqldb"
  volumes:
    - "/opt/mongodb:/data/db"

elk:
  build: elk/
  ports:
    - "8081:80"
    - "8082:9200"
  expose:
    - "5000/udp"

logspout:
  build: logspout/
  volumes:
    - "/var/run/docker.sock:/tmp/docker.sock"
  links:
    - elk
  ports:
    - "8083:80"
  environment:
    - ROUTE_URIS=logstash://elk:5000
Building the Environment
Before continuing, ensure you have nothing running on ports 80, 8080, 8081, 8082, and 8083. Also, make sure VirtualBox, Docker, Docker Compose, Docker Machine, cURL, and git are all pre-installed and running.
docker --version && docker-compose --version && docker-machine --version && echo "VirtualBox $(vboxmanage --version)" && curl --version && git --version
All of the below commands may be executed with the following single command (sh ./build_project.sh). This is useful for working with Jenkins CI, ThoughtWorks Go, or similar CI tools. However, I suggest building the project step-by-step, as shown below, to better understand the process.
# clone project
git clone -b master --single-branch \
  https://github.com/garystafford/spring-music-docker.git &&
cd spring-music-docker

# build VM
docker-machine create --driver virtualbox springmusic --debug

# create directory to store mongo data on host
docker-machine ssh springmusic mkdir /opt/mongodb

# set new environment
docker-machine env springmusic &&
eval "$(docker-machine env springmusic)"

# build images and containers
docker-compose -f docker-compose.yml -p music up -d

# wait for container apps to start
sleep 15

# run quick test of project
for i in {1..10}
do
  curl -I --url $(docker-machine ip springmusic)
done
By simply changing the driver to AWS EC2 and providing your AWS credentials, the same environment can be built on AWS within a single EC2 instance. The 'springmusic' environment has been fully tested both locally with VirtualBox and on AWS.
Results
Resulting Docker images and containers:
gstafford@gstafford-X555LA:$ docker images
REPOSITORY            TAG      IMAGE ID       CREATED              VIRTUAL SIZE
music_proxy           latest   46af4c1ffee0   52 seconds ago       144.5 MB
music_logspout        latest   fe64597ab0c4   About a minute ago   24.36 MB
music_app02           latest   d935211139f6   2 minutes ago        370.1 MB
music_app01           latest   d935211139f6   2 minutes ago        370.1 MB
music_elk             latest   b03731595114   2 minutes ago        1.05 GB
gliderlabs/logspout   master   40a52d6ca462   14 hours ago         14.75 MB
willdurand/elk        latest   04cd7334eb5d   9 days ago           1.05 GB
tomcat                latest   6fe1972e6b08   10 days ago          347.7 MB
mongo                 latest   5c9464760d54   10 days ago          260.8 MB
nginx                 latest   cd3cf76a61ee   10 days ago          132.9 MB

gstafford@gstafford-X555LA:$ docker ps -a
CONTAINER ID   IMAGE            COMMAND                  CREATED              STATUS              PORTS                                                  NAMES
facb6eddfb96   music_proxy      "nginx -g 'daemon off"   46 seconds ago       Up 46 seconds       0.0.0.0:80->80/tcp, 443/tcp                            music_proxy_1
abf9bb0821e8   music_app01      "catalina.sh run"        About a minute ago   Up About a minute   0.0.0.0:8180->8080/tcp                                 music_app01_1
e4c43ed84bed   music_logspout   "/bin/logspout"          About a minute ago   Up About a minute   8000/tcp, 0.0.0.0:8083->80/tcp                         music_logspout_1
eca9a3cec52f   music_app02      "catalina.sh run"        2 minutes ago        Up 2 minutes        0.0.0.0:8280->8080/tcp                                 music_app02_1
b7a7fd54575f   mongo:latest     "/entrypoint.sh mongo"   2 minutes ago        Up 2 minutes        27017/tcp                                              music_nosqldb_1
cbfe43800f3e   music_elk        "/usr/bin/supervisord"   2 minutes ago        Up 2 minutes        5000/0, 0.0.0.0:8081->80/tcp, 0.0.0.0:8082->9200/tcp   music_elk_1
Partial result of the curl test, calling NGINX. Note the two different upstream addresses for Tomcat. Also, note the sharp decrease in request times, due to caching.
HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Mon, 07 Sep 2015 17:56:11 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2090
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2090-1441648256000"
Last-Modified: Mon, 07 Sep 2015 17:50:56 GMT
Content-Language: en
Request-Time: 0.521
Upstream-Address: 172.17.0.121:8080
Upstream-Response-Time: 1441648570.774

HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Mon, 07 Sep 2015 17:56:11 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2090
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2090-1441648256000"
Last-Modified: Mon, 07 Sep 2015 17:50:56 GMT
Content-Language: en
Request-Time: 0.326
Upstream-Address: 172.17.0.123:8080
Upstream-Response-Time: 1441648571.506

HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Mon, 07 Sep 2015 17:56:12 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2090
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2090-1441648256000"
Last-Modified: Mon, 07 Sep 2015 17:50:56 GMT
Content-Language: en
Request-Time: 0.006
Upstream-Address: 172.17.0.121:8080
Upstream-Response-Time: 1441648572.050

HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Mon, 07 Sep 2015 17:56:12 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2090
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2090-1441648256000"
Last-Modified: Mon, 07 Sep 2015 17:50:56 GMT
Content-Language: en
Request-Time: 0.006
Upstream-Address: 172.17.0.123:8080
Upstream-Response-Time: 1441648572.266
Spring Music Application Links
Assuming the springmusic VM is running at 192.168.99.100:
- Spring Music: 192.168.99.100
- NGINX: 192.168.99.100/nginx_status
- Tomcat Node 1*: 192.168.99.100:8180/manager
- Tomcat Node 2*: 192.168.99.100:8280/manager
- Kibana: 192.168.99.100:8081
- Elasticsearch: 192.168.99.100:8082
- Elasticsearch: 192.168.99.100:8082/_status?pretty
- Logspout: 192.168.99.100:8083/logs
* The Tomcat user name is admin and the password is t0mcat53rv3r.
Helpful Links
Continuous Integration and Delivery of Microservices using Jenkins CI, Docker Machine, and Docker Compose
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development, Java Development on June 27, 2015
Continuously integrate, deploy, and test a RestExpress microservices-based, multi-container, Java EE application to a virtual test environment, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, and VirtualBox.
Introduction
In the last post, we learned how to use Jenkins CI, Maven, and Docker Compose to take a set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated Docker containers. We built the microservices, Docker images, and Docker containers. We deployed the containers directly onto the Jenkins CI Server machine. Finally, we performed integration tests to ensure the services were functioning as expected, within the containers.
In a more mature continuous delivery model, we would have deployed the running containers to a fresh ‘production-like’ environment to be more accurately tested, not the Jenkins CI Server host machine. In this post, we will learn how to use the recently released Docker Machine to create a fresh test environment in which to build and host our project’s ten Docker containers. We will couple Docker Machine with Oracle’s VirtualBox, Jenkins CI, and Docker Compose to automatically build and test the services within their containers, within the virtual ‘test’ environment.
Update: All code for this post is available on GitHub, release version v2.1.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v2.1.0’ command).
Docker Machine
If you recall in the last post, after compiling and packaging the microservices, Jenkins was used to deploy the build artifacts to the Virtual-Vehicles Docker GitHub project, as shown below.
We then used Jenkins, with the Docker CLI and the Docker Compose CLI, to automatically build and test the images and containers. This step will not change; however, we will first use Docker Machine to automatically build a test environment, in which we will build the Docker images and containers.
I’ve copied and modified the second Jenkins job we used in the last post, as shown below. The new job is titled, ‘Virtual-Vehicles_Docker_Machine’. This will replace the previous job, ‘Virtual-Vehicles_Docker_Compose’.
The first step in the new Jenkins job is to clone the Virtual-Vehicles Docker GitHub repository.
Next, Jenkins runs a bash script to automatically build the test VM with Docker Machine, build the Docker images and containers with Docker Compose within the new VM, and finally test the services.
The bash script executed by Jenkins contains the following commands:
# optional: record current versions of docker apps with each build
docker -v && docker-compose -v && docker-machine -v

# set-up: clean up any previous machine failures
docker-machine stop test || echo "nothing to stop" && \
docker-machine rm test || echo "nothing to remove"

# use docker-machine to create and configure 'test' environment
# add a -D (debug) if having issues
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"

# use docker-compose to pull and build new images and containers
docker-compose -p jenkins up -d

# optional: list machines, images, and containers
docker-machine ls && docker images && docker ps -a

# wait for containers to fully start before tests fire up
sleep 30

# test the services
sh tests.sh $(docker-machine ip test)

# tear down: stop and remove 'test' environment
docker-machine stop test && docker-machine rm test
As the above script shows, first Jenkins uses the Docker Machine CLI to build and activate the ‘test’ virtual machine, using the VirtualBox driver. As of docker-machine version 0.3.0, the VirtualBox driver requires at least VirtualBox 4.3.28 to be installed.
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"
Once this step is complete you will have the following VirtualBox VM created, running, and active.
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
Next, Jenkins uses the Docker Compose CLI to execute the project’s Docker Compose YAML file.
docker-compose -p jenkins up -d
The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.
########################################################################
#
# title: Docker Compose YAML file for Virtual-Vehicles Project
# author: Gary A. Stafford (https://programmaticponderings.com)
# url: https://github.com/garystafford/virtual-vehicles-docker
# description: Pulls (5) images, builds (5) images, and builds (11) containers,
# for the Virtual-Vehicles Java microservices example REST API
# to run: docker-compose -p <your_project_name_here> up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
    - "8500:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  links:
    - graphite
    - mongoAuthentication
    - "ambassador:nginx"
  expose:
    - "8587"

valet:
  build: valet/
  links:
    - graphite
    - mongoValet
    - "ambassador:nginx"
  expose:
    - "8585"

maintenance:
  build: maintenance/
  links:
    - graphite
    - mongoMaintenance
    - "ambassador:nginx"
  expose:
    - "8583"

vehicle:
  build: vehicle/
  links:
    - graphite
    - mongoVehicle
    - "ambassador:nginx"
  expose:
    - "8581"

nginx:
  build: nginx/
  ports:
    - "80:80"
  links:
    - "ambassador:vehicle"
    - "ambassador:valet"
    - "ambassador:authentication"
    - "ambassador:maintenance"

ambassador:
  image: cpuguy83/docker-grand-ambassador
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name jenkins_nginx_1 -name jenkins_authentication_1 -name jenkins_maintenance_1 -name jenkins_valet_1 -name jenkins_vehicle_1"
Running the docker-compose.yaml file will pull these (5) Docker Hub images:
REPOSITORY                         TAG        IMAGE ID
==========                         ===        ========
java                               8u45-jdk   1f80eb0f8128
nginx                              latest     319d2015d149
mongo                              latest     66b43e3cae49
hopsoft/graphite-statsd            latest     b03e373279e8
cpuguy83/docker-grand-ambassador   latest     c635b1699f78
And, build these (5) Docker images from Dockerfiles:
REPOSITORY               TAG      IMAGE ID
==========               ===      ========
jenkins_nginx            latest   0b53a9adb296
jenkins_vehicle          latest   d80f79e605f4
jenkins_valet            latest   cbe8bdf909b8
jenkins_maintenance      latest   15b8a94c00f4
jenkins_authentication   latest   ef0345369079
And, build these (11) Docker containers from the corresponding images:
CONTAINER ID   IMAGE                              NAME
============   =====                              ====
17992acc6542   jenkins_nginx                      jenkins_nginx_1
bcbb2a4b1a7d   jenkins_vehicle                    jenkins_vehicle_1
4ac1ac69f230   mongo:latest                       jenkins_mongoVehicle_1
bcc8b9454103   jenkins_valet                      jenkins_valet_1
7c1794ca7b8c   jenkins_maintenance                jenkins_maintenance_1
2d0e117fa5fb   jenkins_authentication             jenkins_authentication_1
d9146a1b1d89   hopsoft/graphite-statsd:latest     jenkins_graphite_1
56b34cee9cf3   cpuguy83/docker-grand-ambassador   jenkins_ambassador_1
a72199d51851   mongo:latest                       jenkins_mongoAuthentication_1
307cb2c01cc4   mongo:latest                       jenkins_mongoMaintenance_1
4e0807431479   mongo:latest                       jenkins_mongoValet_1
Since we are connected to the brand new Docker Machine ‘test’ VM, there are no locally cached Docker images. All images required to build the containers must be pulled from Docker Hub. The build time will be 3-4x as long as the last post’s build, which used the cached Docker images on the Jenkins CI machine.
Integration Testing
As in the last post, once the containers are built and configured, we run a series of expanded integration tests to confirm the containers and services are working. One difference, this time we will pass a parameter to the test bash script file:
sh tests.sh $(docker-machine ip test)
The parameter is the hostname used in the test’s RESTful service calls. The parameter, $(docker-machine ip test), resolves to the IP address of the ‘test’ VM, 192.168.99.100 in our example. If no parameter is provided, the test script’s hostname variable falls back to a default value of localhost, via the parameter expansion ‘hostname=${1-'localhost'}’.
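As a minimal sketch of that bash default-expansion behavior (the demo.sh file name is hypothetical):

#!/bin/sh
# demo.sh - use the first positional parameter, or fall back to 'localhost'
hostname=${1-'localhost'}
echo "hostname: ${hostname}"

# sh demo.sh                  ->  hostname: localhost
# sh demo.sh 192.168.99.100   ->  hostname: 192.168.99.100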
Another change since the last post: the project now uses the open-source version of Nginx, the free, high-performance HTTP server and reverse proxy, as a pseudo-API gateway. Instead of calling each microservice directly on its individual port (e.g. port 8581 for the Vehicle microservice), all traffic is sent through Nginx on the default HTTP port 80, for example:
http://192.168.99.100/vehicles/utils/ping.json
http://192.168.99.100/jwts?apiKey=Z1nXG8JGKwvGlzQgPLwQdndW&secret=ODc4OGNiNjE5ZmI
http://192.168.99.100/vehicles/558f3042e4b0e562c03329ad
Internal traffic between the microservices and MongoDB, and between the microservices and Graphite, is still direct, using Docker container linking. Traffic between the microservices and Nginx, in both directions, is handled by an ambassador container, a common Docker pattern. Nginx acts as a reverse proxy for the microservices. Using Nginx brings us closer to a production-like environment for testing the services.
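As a quick sanity check (a hedged sketch; the Compose file above only ‘exposes’ the microservice ports to linked containers and does not publish them to the host), you can confirm that the services answer through the gateway on port 80, while their individual ports are unreachable from outside the VM:

# through the Nginx pseudo-API gateway (expected to succeed)
curl -s "http://$(docker-machine ip test)/vehicles/utils/ping.json"

# directly against the Vehicle microservice port (expected to fail, since
# port 8581 is only 'expose'd to linked containers, not published to the host)
curl -s --max-time 5 "http://$(docker-machine ip test):8581/vehicles/utils/ping.json" \
  || echo "port 8581 not reachable from the host, as expected"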
#!/bin/sh

########################################################################
#
# title:          Virtual-Vehicles Project Integration Tests
# author:         Gary A. Stafford (https://programmaticponderings.com)
# url:            https://github.com/garystafford/virtual-vehicles-docker
# description:    Performs integration tests on the Virtual-Vehicles
#                 microservices
# to run:         sh tests.sh
# docker-machine: sh tests.sh $(docker-machine ip test)
#
########################################################################

echo --- Integration Tests ---
echo

### VARIABLES ###
hostname=${1-'localhost'} # use input param or default to localhost
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized
make="Test"
model="Foo"

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}
echo make: ${make}
echo model: ${model}
echo

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
  --url "${url}" \
  | grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{ \"application\": \"${application}\", \"secret\": \"${secret}\" }" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{ \"application\": \"${application}\", \"secret\": \"${secret}\" }" --url "${url}" \
  | grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}
echo

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  -d "{ \"year\": 2015, \"make\": \"${make}\", \"model\": \"${model}\", \"color\": \"White\", \"type\": \"Sedan\", \"mileage\": 250 }" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}/vehicles?filter=make::${make}|model::${model}&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | tail -1 \
  | sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}
echo

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new maintenance record in the response body with an 'id'"
url="http://${hostname}/maintenances"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  -d "{ \"vehicleId\": \"${id}\", \"serviceDateTime\": \"2015-27-00T15:00:00.400Z\", \"mileage\": 1000, \"type\": \"Test Maintenance\", \"notes\": \"This is a test notes.\" }" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new valet transaction in the response body with an 'id'"
url="http://${hostname}/valets"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  -d "{ \"vehicleId\": \"${id}\", \"dateTimeIn\": \"2015-27-00T15:00:00.400Z\", \"parkingLot\": \"Test Parking Ramp\", \"parkingSpot\": 10, \"notes\": \"This is a test notes.\" }" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo
Tear Down
In true continuous integration fashion, once the integration tests have completed, we tear down the project by removing the VirtualBox ‘test’ VM. This also removes all images and containers.
docker-machine stop test && \
docker-machine rm test
Jenkins CI Console Output
Below is an abridged sample of what the Jenkins CI console output will look like from a successful ‘build’.
Started by user anonymous
Building in workspace /var/lib/jenkins/jobs/Virtual-Vehicles_Docker_Machine/workspace
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/garystafford/virtual-vehicles-docker.git # timeout=10
Fetching upstream changes from https://github.com/garystafford/virtual-vehicles-docker.git
 > git --version # timeout=10
using GIT_SSH to set credentials
using .gitcredentials to set credentials
 > git config --local credential.helper store --file=/tmp/git7588068314920923143.credentials # timeout=10
 > git -c core.askpass=true fetch --tags --progress https://github.com/garystafford/virtual-vehicles-docker.git +refs/heads/*:refs/remotes/origin/*
 > git config --local --remove-section credential # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision f473249f0f70290b75cb320909af1f57cdaf2aa5 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f473249f0f70290b75cb320909af1f57cdaf2aa5
 > git rev-list f473249f0f70290b75cb320909af1f57cdaf2aa5 # timeout=10
[workspace] $ /bin/sh -xe /tmp/hudson8587699987350884629.sh
+ docker -v
Docker version 1.7.0, build 0baf609
+ docker-compose -v
docker-compose version: 1.3.1
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
+ docker-machine -v
docker-machine version 0.3.0 (0a251fe)
+ docker-machine stop test
+ docker-machine rm test
Successfully removed test
+ docker-machine create --driver virtualbox test
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env test
+ docker-machine env test
+ eval export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/var/lib/jenkins/.docker/machine/machines/test"
export DOCKER_MACHINE_NAME="test"
# Run this command to configure your shell:
# eval "$(docker-machine env test)"
+ export DOCKER_TLS_VERIFY=1
+ export DOCKER_HOST=tcp://192.168.99.100:2376
+ export DOCKER_CERT_PATH=/var/lib/jenkins/.docker/machine/machines/test
+ export DOCKER_MACHINE_NAME=test
+ docker-compose -p jenkins up -d
Pulling mongoValet (mongo:latest)...
latest: Pulling from mongo

...Abridged output...

+ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
+ docker images
REPOSITORY                         TAG        IMAGE ID       CREATED          VIRTUAL SIZE
jenkins_vehicle                    latest     fdd7f9d02ff7   2 seconds ago    837.1 MB
jenkins_valet                      latest     8a592e0fe69a   4 seconds ago    837.1 MB
jenkins_maintenance                latest     5a4a44e136e5   5 seconds ago    837.1 MB
jenkins_authentication             latest     e521e067a701   7 seconds ago    838.7 MB
jenkins_nginx                      latest     085d183df8b4   25 minutes ago   132.8 MB
java                               8u45-jdk   1f80eb0f8128   12 days ago      816.4 MB
nginx                              latest     319d2015d149   12 days ago      132.8 MB
mongo                              latest     66b43e3cae49   12 days ago      260.8 MB
hopsoft/graphite-statsd            latest     b03e373279e8   4 weeks ago      740 MB
cpuguy83/docker-grand-ambassador   latest     c635b1699f78   5 months ago     525.7 MB
+ docker ps -a
CONTAINER ID   IMAGE                              COMMAND                CREATED          STATUS          PORTS                                      NAMES
4ea39fa187bf   jenkins_vehicle                    "java -classpath .:c   2 seconds ago    Up 1 seconds    8581/tcp                                   jenkins_vehicle_1
b248a836546b   mongo:latest                       "/entrypoint.sh mong   3 seconds ago    Up 3 seconds    27017/tcp                                  jenkins_mongoVehicle_1
0c94e6409afc   jenkins_valet                      "java -classpath .:c   4 seconds ago    Up 3 seconds    8585/tcp                                   jenkins_valet_1
657f8432004b   jenkins_maintenance                "java -classpath .:c   5 seconds ago    Up 5 seconds    8583/tcp                                   jenkins_maintenance_1
8ff6de1208e3   jenkins_authentication             "java -classpath .:c   7 seconds ago    Up 6 seconds    8587/tcp                                   jenkins_authentication_1
c799d5f34a1c   hopsoft/graphite-statsd:latest     "/sbin/my_init"        12 minutes ago   Up 12 minutes   2003/tcp, 8125/udp, 0.0.0.0:8500->80/tcp   jenkins_graphite_1
040872881b25   jenkins_nginx                      "nginx -g 'daemon of   25 minutes ago   Up 25 minutes   0.0.0.0:80->80/tcp, 443/tcp                jenkins_nginx_1
c6a2dc726abc   mongo:latest                       "/entrypoint.sh mong   26 minutes ago   Up 26 minutes   27017/tcp                                  jenkins_mongoAuthentication_1
db22a44239f4   mongo:latest                       "/entrypoint.sh mong   26 minutes ago   Up 26 minutes   27017/tcp                                  jenkins_mongoMaintenance_1
d5fd655474ba   cpuguy83/docker-grand-ambassador   "/usr/bin/grand-amba   26 minutes ago   Up 26 minutes                                              jenkins_ambassador_1
2b46bd6f8cfb   mongo:latest                       "/entrypoint.sh mong   31 minutes ago   Up 31 minutes   27017/tcp                                  jenkins_mongoValet_1
+ sleep 30
+ docker-machine ip test
+ sh tests.sh 192.168.99.100

--- Integration Tests ---

hostname: 192.168.99.100
application: Test API Client 1435585062
secret: NGM5OTI5ODAxMTZ
make: Test
model: Foo

TEST: GET request should return 'true' in the response body
http://192.168.99.100/vehicles/utils/ping.json
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
RESULT: pass

TEST: POST request should return a new client in the response body with an 'id'
http://192.168.99.100/clients
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84    847    225 --:--:-- --:--:-- --:--:--   849
RESULT: pass

SETUP: Get the new client's apiKey for next test
http://192.168.99.100/clients
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84  20482   5461 --:--:-- --:--:-- --:--:-- 21000
apiKey: sv1CA9NdhmXh72NrGKBN3Abb

TEST: GET request should return a new jwt in the response body
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0    686      0 --:--:-- --:--:-- --:--:--   687
RESULT: pass

SETUP: Get a new jwt using the new client for the next test
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0  16843      0 --:--:-- --:--:-- --:--:-- 17076
jwt: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhcGkudmlydHVhbC12ZWhpY2xlcy5jb20iLCJhcGlLZXkiOiJzdjFDQTlOZGhtWGg3Mk5yR0tCTjNBYmIiLCJleHAiOjE0MzU2MjEwNjMsImFpdCI6MTQzNTU4NTA2M30.WVlhIhUcTz6bt3iMVr6MWCPIDd6P0aDZHl_iUd6AgrM

TEST: POST request should return a new vehicle in the response body with an 'id'
http://192.168.99.100/vehicles
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   123    0     0  100   123      0    612 --:--:-- --:--:-- --:--:--   611
100   419    0   296  100   123    649    270 --:--:-- --:--:-- --:--:--   649
RESULT: pass

SETUP: Get id from new vehicle for the next test
http://192.168.99.100/vehicles?filter=make::Test|model::Foo&limit=1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   377    0   377    0     0   5564      0 --:--:-- --:--:-- --:--:--  5626
vehicle id: 55914a28e4b04658471dc03a

TEST: GET request should return a vehicle in the response body with the requested 'id'
http://192.168.99.100/vehicles/55914a28e4b04658471dc03a
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   296    0   296    0     0   7051      0 --:--:-- --:--:-- --:--:--  7219
RESULT: pass

TEST: POST request should return a new maintenance record in the response body with an 'id'
http://192.168.99.100/maintenances
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
RESULT: pass

TEST: POST request should return a new valet transaction in the response body with an 'id'
http://192.168.99.100/valets
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   561    0   368  100   193    514    269 --:--:-- --:--:-- --:--:--   514
RESULT: pass

+ docker-machine stop test
+ docker-machine rm test
Successfully removed test
Finished: SUCCESS
Graphite and StatsD
If you’ve chosen to build the Virtual-Vehicles Docker project outside of Jenkins CI, then in addition to running the test script and using applications like Postman to exercise the Virtual-Vehicles RESTful API, you may also use Graphite and StatsD. RestExpress comes fully configured out of the box with Graphite integration, through the Metrics plugin. The Virtual-Vehicles RESTful API example is configured to use port 8500 to access the Graphite UI, and uses the hopsoft/graphite-statsd Docker image to build the Graphite/StatsD Docker container.
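As a purely illustrative, hedged sketch of how StatsD metrics flow into Graphite (this is not how the RestExpress Metrics plugin is wired up, and it assumes you have also published the container’s StatsD UDP port 8125 to the host, which the Compose file above does not do), you could push a test counter and then look for it in the Graphite UI on port 8500:

# send a single test counter to StatsD over UDP (hypothetical metric name)
echo "virtual.vehicles.test.ping:1|c" | nc -u -w1 "$(docker-machine ip test)" 8125

# then browse the Graphite UI on the published port to find the metric
echo "Graphite UI: http://$(docker-machine ip test):8500"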
The Complete Process
The diagram below shows the entire Virtual-Vehicles continuous integration and delivery process, start to finish, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, RestExpress, and VirtualBox.
Continuous Integration and Delivery of Microservices using Jenkins CI, Maven, and Docker Compose
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development on June 22, 2015
Continuously build, test, package and deploy a microservices-based, multi-container, Java EE application using Jenkins CI, Maven, Docker, and Docker Compose
Previous Posts
In the previous 3-part series, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB, we developed a set of Java EE-based microservices, which formed the Virtual-Vehicles REST API. In Part One of this series, we introduced the concepts of a RESTful API and microservices, using the vehicle-themed Virtual-Vehicles REST API example. In Part Two, we gained a basic understanding of how RestExpress works to build microservices, and discovered how to get the microservices example up and running. Lastly, in Part Three, we explored how to use tools such as Postman, along with the API documentation, to test our microservices.
Introduction
In this post, we will demonstrate how to use Jenkins CI, Maven, and Docker Compose to take our set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated and orchestrated Docker containers. We will build and test the microservices, Docker images, and Docker containers. We will deploy the containers and perform integration tests to ensure the services are functioning as expected, within the containers. The milestones in our process will be:
- Continuous Integration: Using Jenkins CI and Maven, automatically compile, test, and package the individual microservices
- Deployment: Using Jenkins, automatically deploy the build artifacts to the new Virtual-Vehicles Docker project
- Containerization: Using Jenkins and Docker Compose, automatically build the Docker images and containers from the build artifacts and a set of Dockerfiles
- Integration Testing: Using Jenkins, perform automated integration tests on the containerized services
- Tear Down: Using Jenkins, automatically stop and remove the containers and images
For brevity, we will deploy the containers directly to the Jenkins CI Server, where they were built. In an upcoming post, I will demonstrate how to use the recently released Docker Machine to host the containers within an isolated VM.
Note: All code for this post is available on GitHub, release version v1.0.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v1.0.0’ command).
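For example, a quick sketch of that checkout (assuming the virtual-vehicle-demo repository referenced in the parent POM shown below):

# clone the microservices project and pin it to the v1.0.0 release
git clone https://github.com/garystafford/virtual-vehicle-demo.git
cd virtual-vehicle-demo
git checkout tags/v1.0.0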
Build the Microservices
In order to host the Virtual-Vehicles microservices, we must first compile the source code and produce build artifacts. In the case of the Virtual-Vehicles example, the build artifacts are a JAR file and at least one environment-specific properties file. In Part Two of our previous series, we compiled and produced JAR files for our microservices from the command line using Maven.
To automatically build our Maven-based microservices project in this post, we will use Jenkins CI and the Jenkins Maven Project Plugin. The Virtual-Vehicles microservices are bundled together into what Maven considers a multi-module project, which is defined by a parent POM referring to one or more sub-modules. Using the concept of project inheritance, Jenkins will compile each of the four microservices from the project’s single parent POM file. Note the four modules at the end of the pom.xml
below, corresponding to each microservice.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <name>Virtual-Vehicles API</name>
    <description>Virtual-Vehicles API
        https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Example_3
    </description>
    <url>https://github.com/garystafford/virtual-vehicle-demo</url>
    <groupId>com.example</groupId>
    <artifactId>Virtual-Vehicles-API</artifactId>
    <version>1</version>
    <packaging>pom</packaging>

    <modules>
        <module>Maintenance</module>
        <module>Valet</module>
        <module>Vehicle</module>
        <module>Authentication</module>
    </modules>
</project>
Below is the view of the four individual Maven modules, within the single Jenkins Maven job.
Each microservice module contains a Maven POM file. The POM files use the Apache Maven Compiler Plugin to compile code, and the Apache Maven Shade Plugin to create ‘uber-jars’ from the compiled code. The Shade plugin packages the artifact in an uber-jar, including its dependencies. This allows us to independently host each service in its own container, without external dependencies. Lastly, using the Apache Maven Resources Plugin, Maven copies the environment properties files from the source directory to the ‘target’ directory, which contains the JAR file. To accomplish these Maven tasks, all Jenkins needs to do is execute a series of Maven life-cycle goals: ‘clean install package validate’.
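Run locally from the parent POM’s directory, the equivalent command line would look something like this (a sketch of what the Jenkins Maven job executes, assuming Maven and a JDK are installed, and using the repository directory from the earlier checkout):

# build all four microservice modules from the single parent POM
cd virtual-vehicle-demo
mvn clean install package validate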
Once the code is compiled and packaged into uber-jars, Jenkins uses the Artifact Deployer Plugin to deploy the build artifacts from Jenkins’ workspace to a remote location. In our example, we will copy the artifacts to a second GitHub project, from which we will containerize our microservices.
Shown below are the two Jenkins jobs. The first one compiles, packages, and deploys the build artifacts. The second job containerizes the services, databases, and monitoring application.
Shown below are two screen grabs showing how we clone the Virtual-Vehicles GitHub repository and build the project using the main parent pom.xml
file. Building the parent POM, in turn, builds all the microservice modules using their own POM files.
Deploy Build Artifacts
Once we have successfully compiled, tested (if we had unit tests with RestExpress), and packaged the build artifacts as uber-jars, we deploy each set of build artifacts to a subfolder within the Virtual-Vehicles Docker GitHub project, using Jenkins’ Artifact Deployer Plugin. Shown below is the deployment configuration for just the Vehicles microservice. This deployment pattern is repeated for each service, within the Jenkins job configuration.
Jenkins’ Artifact Deployer Plugin also provides the convenient ability to view and to redeploy the artifacts. Below, you see a list of the microservice artifacts deployed to the Docker project by Jenkins.
Build and Compose the Containers
The second Jenkins job clones the Virtual-Vehicles Docker GitHub repository.
The second Jenkins job executes commands from the shell prompt. The first set of commands uses the Docker CLI to remove any existing images and containers that might have been left over from previous job failures. The second set of commands uses the Docker Compose CLI to execute the project’s Docker Compose YAML file. The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.
# remove all images and containers from this build
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
# set DOCKER_HOST environment variable
export DOCKER_HOST=tcp://localhost:4243

# record installed version of Docker and Maven with each build
mvn --version && \
docker --version && \
docker-compose --version

# use docker-compose to build new images and containers
docker-compose -p jenkins up -d

# list virtual-vehicles related images
docker images | grep 'jenkins' | awk '{print $0}'

# list all containers
docker ps -a | grep 'jenkins\|mongo_\|graphite' | awk '{print $0}'
########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Builds (4) images, pulls (2) images, and builds (9) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p virtualvehicles up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
    - "8481:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  ports:
    - "8587:8587"
  links:
    - graphite
    - mongoAuthentication

valet:
  build: valet/
  ports:
    - "8585:8585"
  links:
    - graphite
    - mongoValet
    - authentication

maintenance:
  build: maintenance/
  ports:
    - "8583:8583"
  links:
    - graphite
    - mongoMaintenance
    - authentication

vehicle:
  build: vehicle/
  ports:
    - "8581:8581"
  links:
    - graphite
    - mongoVehicle
    - authentication
Running the docker-compose.yaml file produces the following images:
REPOSITORY               TAG      IMAGE ID
==========               ===      ========
jenkins_vehicle          latest   a6ea4dfe7cf5
jenkins_valet            latest   162d3102d43c
jenkins_maintenance      latest   0b6f530cc968
jenkins_authentication   latest   45b50487155e
And, these containers:
CONTAINER ID   IMAGE                            NAME
============   =====                            ====
2b4d5a918f1f   jenkins_vehicle                  jenkins_vehicle_1
492fbd88d267   mongo:latest                     jenkins_mongoVehicle_1
01f410bb1133   jenkins_valet                    jenkins_valet_1
6a63a664c335   jenkins_maintenance              jenkins_maintenance_1
00babf484cf7   jenkins_authentication           jenkins_authentication_1
548a31034c1e   hopsoft/graphite-statsd:latest   jenkins_graphite_1
cdc18bbb51b4   mongo:latest                     jenkins_mongoAuthentication_1
6be5c0558e92   mongo:latest                     jenkins_mongoMaintenance_1
8b71d50a4b4d   mongo:latest                     jenkins_mongoValet_1
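To spot-check the running stack from the Jenkins server’s shell (an optional aside, not part of the Jenkins job itself), you could list the Compose project’s containers and their state:

# list the containers belonging to the 'jenkins' Compose project
docker-compose -p jenkins ps

# or filter the full container list, as the Jenkins job does
docker ps -a | grep 'jenkins\|mongo_\|graphite'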
Integration Testing
Once the containers have been successfully built and configured, we run a series of integration tests to confirm the services are up and running. We refer to these tests as integration tests because they test the interaction of multiple components. Integration tests were covered in the last post, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 3.
Note the short pause I have inserted before running the tests. Docker Compose does an excellent job of accounting for the required start-up order of the containers to avoid race conditions (see my previous post). However, depending on the speed of the host box, there is still a start-up period before the containers’ processes are up, running, and ready to receive traffic. Apache Log4j 2 and MongoDB startup, in particular, take extra time. I’ve seen the containers take as long as 1-2 minutes on a slow box to fully start. Without the pause, the tests fail with various errors, since the containers’ processes are not all running.
sleep 15
sh tests.sh -v
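A fixed sleep works, but on a slow box the tests may still start too early. A more robust alternative (a sketch, not part of the original Jenkins job) is to poll one of the services’ ping endpoints until it responds, with an upper bound:

# wait up to ~2 minutes for the Vehicle service to answer its ping endpoint
for i in $(seq 1 24); do
  curl -s "http://localhost:8581/vehicles/utils/ping.json" | grep -q true && break
  echo "waiting for services to start (attempt ${i})..."
  sleep 5
done

sh tests.sh -v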
The bash-based tests below only scratch the surface of a complete set of integration tests. However, they demonstrate an effective multi-stage testing pattern for handling the complex nature of RESTful service request requirements. The tests build upon each other. After setting up some variables, the tests register a new API client. Then, they use the new client’s API key to obtain a JWT. The tests then use the JWT to authenticate themselves and create a new vehicle. Finally, they use the new vehicle’s id and the JWT to verify the existence of the new vehicle.
Although some may consider testing with bash scripts somewhat primitive, the script demonstrates the effectiveness of curl, grep, sed, and awk, along with regular expressions, for testing our RESTful services.
#!/bin/sh

########################################################################
#
# title:       Virtual-Vehicles Project Integration Tests
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Performs integration tests on the Virtual-Vehicles
#              microservices
# to run:      sh tests.sh -v
#
########################################################################

echo --- Integration Tests ---

### VARIABLES ###
hostname="localhost"
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}:8581/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
  --url "${url}" \
  | grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}:8587/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{ \"application\": \"${application}\", \"secret\": \"${secret}\" }" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}:8587/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{ \"application\": \"${application}\", \"secret\": \"${secret}\" }" --url "${url}" \
  | grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}:8581/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  -d '{ "year": 2015, "make": "Test", "model": "Foo", "color": "White", "type": "Sedan", "mileage": 250 }' --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}:8581/vehicles?filter=make::Test|model::Foo&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | tail -1 \
  | sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}:8581/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
Since our tests are just a bash script, they can also be run separately from the command line, as in the screen grab below. The output, except for the colored text, is identical to what appears in the Jenkins console output.
Tear Down
Once the integration tests have completed, we ‘tear down’ the project by removing the Virtual-Vehicles images and containers. We simply repeat the first commands we ran at the start of the Jenkins build phase. You could also choose to remove the tear-down step, and use this job simply to build and start your multi-container application.
# remove all images and containers from this build
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
The Complete Process
The diagram below shows the entire process, start to finish.