Posts Tagged container
Managing Applications Across Multiple Kubernetes Environments with Istio: Part 2
Posted by Gary A. Stafford in Build Automation, DevOps, Enterprise Software Development, GCP, Software Development on April 17, 2018
In this two-part post, we are exploring the creation of a GKE cluster, replete with the latest version of Istio, often referred to as IoK (Istio on Kubernetes). We will then deploy, perform integration testing, and promote an application across multiple environments within the cluster.
Part Two
In Part One of this post, we created a Kubernetes cluster on the Google Cloud Platform, installed Istio, provisioned a PostgreSQL database, and configured DNS for routing. Under the assumption that v1 of the Election microservice had already been released to Production, we deployed v1 to each of the three namespaces.
In Part Two of this post, we will learn how to utilize the advanced API testing capabilities of Postman and Newman to ensure v2 is ready for UAT and release to Production. We will deploy and perform integration testing of a new v2 of the Election microservice, locally, on Kubernetes Minikube. Once confident v2 is functioning as intended, we will promote and test v2 across the dev, test, and uat namespaces.
Source Code
As a reminder, all source code for this post can be found on GitHub. The project’s README file contains a list of the Election microservice’s endpoints. To get started quickly, use one of the two following options (gist).
# clone the official v3.0.0 release for this post
git clone --depth 1 --branch v3.0.0 \
  https://github.com/garystafford/spring-postgresql-demo.git \
  && cd spring-postgresql-demo \
  && git checkout -b v3.0.0

# clone the latest version of code (newer than article)
git clone --depth 1 --branch master \
  https://github.com/garystafford/spring-postgresql-demo.git \
  && cd spring-postgresql-demo
Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.
This project includes a kubernetes sub-directory, containing all the Kubernetes resource files and scripts necessary to recreate the example shown in the post.
Testing Locally with Minikube
Deploying to GKE, no matter how automated, takes time and resources, whether those resources are team members or just compute and system resources. Before deploying v2 of the Election service to the non-prod GKE cluster, we should ensure that it has been thoroughly tested locally. Local testing should include the following test criteria (the first three are sketched after the list):
- Source code builds successfully
- All unit-tests pass
- A new Docker Image can be created from the build artifact
- The Service can be deployed to Kubernetes (Minikube)
- The deployed instance can connect to the database and execute the Liquibase changesets
- The deployed instance passes a minimal set of integration tests
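As a rough sketch, the first three criteria might be satisfied from the command line as follows. This assumes the project uses a Gradle wrapper and includes a Dockerfile in the repository root; the image tag shown is hypothetical:

# build the artifact and run the unit tests (assumes a Gradle wrapper)
./gradlew clean build
# package the build artifact into a new Docker Image (hypothetical tag)
docker build -t garystafford/election-service:2.0.0 .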
Minikube gives us the ability to quickly iterate and test an application, as well as the Kubernetes and Istio resources required for its operation, before promoting to GKE. These resources include Kubernetes Namespaces, Secrets, Deployments, Services, Route Rules, and Istio Ingresses. Since Minikube is just that, a miniature version of our GKE cluster, we should be able to have a nearly one-to-one parity between the Kubernetes resources we apply locally and those applied to GKE. This post assumes you have the latest version of Minikube installed, and are familiar with its operation.
This project includes a minikube sub-directory, containing all the Kubernetes resource files and scripts necessary to recreate the Minikube deployment example shown in this post. The three included scripts are designed to be easily adapted to a CI/CD DevOps workflow. You may need to modify the scripts to match your environment’s configuration. Note this Minikube-deployed version of the Election service relies on the external Amazon RDS database instance.
Local Database Version
To eliminate the AWS costs, I have included a second, alternate version of the Minikube Kubernetes resource files, minikube_db_local. This version deploys a single containerized PostgreSQL database instance to Minikube, as opposed to relying on the external Amazon RDS instance. Be aware, the database does not have persistent storage or an Istio sidecar proxy.
Minikube Cluster
If you do not have a running Minikube cluster, create one with the minikube start command.

Minikube allows you to use normal kubectl CLI commands to interact with the Minikube cluster. Using the kubectl get nodes command, we should see a single Minikube node running the latest Kubernetes v1.10.0.
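For reference, a typical startup and verification sequence might look like the following; the resource flags are optional and the values shown are examples, not requirements:

# start a local single-node cluster (memory/cpu values are examples only)
minikube start --memory 8192 --cpus 2
# verify the node is in a Ready state
kubectl get nodes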
Istio on Minikube
Next, install Istio following Istio’s online installation instructions. A basic Istio installation on Minikube, without the additional add-ons, should only require a single Istio install script.
If successful, you should observe a new istio-system namespace, containing the four main Istio components: istio-ca, istio-ingress, istio-mixer, and istio-pilot.
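As a minimal sketch, assuming you have downloaded and extracted the Istio 0.7.1 release archive, the installation and verification might look like this:

# from the root of the extracted Istio release directory
kubectl apply -f install/kubernetes/istio.yaml
# confirm the Istio components are running
kubectl get pods -n istio-system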
Deploy v2 to Minikube
Next, create a Minikube Development environment, consisting of a dev Namespace, Istio Ingress, and Secret, using the part1-create-environment.sh script. Then, deploy v2 of the Election service to the dev Namespace, along with an associated Route Rule, using the part2-deploy-v2.sh script. One v2 instance should be sufficient to satisfy the testing requirements.
Access to v2 of the Election service on Minikube is a bit different than with GKE. When routing external HTTP requests, there is no load balancer, no external public IP address, and no public DNS or subdomains. To access the single instance of v2 running on Minikube, we use the local IP address of the Minikube cluster, obtained with the minikube ip command. The access port required is the Node Port (nodePort) of the istio-ingress Service. The command is shown below (gist) and included in the part3-smoke-test.sh script.
export GATEWAY_URL="$(minikube ip):"\
"$(kubectl get svc istio-ingress -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')"
echo $GATEWAY_URL
curl $GATEWAY_URL/v2/actuator/health && echo
The second part of our HTTP request routing is the same as with GKE, relying on an Istio Route Rule. The /v2/ sub-collection resource in the HTTP request URL is rewritten and routed to the v2 election Pod by the Route Rule. To confirm v2 of the Election service is running and addressable, curl the /v2/actuator/health endpoint. Spring Actuator’s /health endpoint is frequently used at the end of a CI/CD server’s deployment pipeline to confirm success. The Spring Boot application can take a few minutes to fully start up and be responsive to requests, depending on the speed of your local machine.
Using the Kubernetes Dashboard, we should see our deployment of the single Election service Pod is running successfully in Minikube’s dev namespace.
Once deployed, we run a battery of integration tests to confirm that the new v2 functionality is working as intended before deploying to GKE. In the next section of this post, we will explore the process of creating and managing Postman Collections and Postman Environments, and how to automate those Collections of tests with Newman and Jenkins.
Integration Testing
The typical reason an application is deployed to lower environments, prior to Production, is to perform application testing. Although definitions vary across organizations, testing commonly includes some or all of the following types: Integration Testing, Functional Testing, System Testing, Stress or Load Testing, Performance Testing, Security Testing, Usability Testing, Acceptance Testing, Regression Testing, Alpha and Beta Testing, and End-to-End Testing. Test teams may also refer to other testing forms, such as Whitebox (Glassbox) and Blackbox Testing, Smoke, Validation, or Sanity Testing, and Happy Path Testing.
The site, softwaretestinghelp.com, defines integration testing as, ‘testing of all integrated modules to verify the combined functionality after integration is termed so. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.’
In this post, we are concerned that our integrated modules are functioning cohesively, primarily the Election service, Amazon RDS database, DNS, Istio Ingress, Route Rules, and the Istio sidecar Proxy. Unlike Unit Testing and Static Code Analysis (SCA), which are done pre-deployment, integration testing requires an application to be deployed and running in an environment.
Postman
I have chosen Postman, along with Newman, to execute a Collection of integration tests before promoting to the next environment. The integration tests confirm the deployed application’s name and version. The integration tests then perform a series of HTTP GET, POST, PUT, PATCH, and DELETE actions against the service’s resources. The integration tests verify a successful HTTP response code is returned, based on the type of request made.
Postman tests are written in JavaScript, similar to other popular, modern testing frameworks. Postman offers advanced features such as test-chaining. Tests can be chained together through the use of environment variables to store response values and pass them on to other tests. Values shared between tests are also stored in the Postman Environments. Below, we store the ID of the new candidate, the result of an HTTP POST to the /candidates endpoint. We then use the stored candidate ID in subsequent HTTP GET, PUT, and PATCH test requests to the same /candidates endpoint.
Environment-specific variables, such as the resource host, port, and environment sub-collection resource, are abstracted and stored as key/value pairs within Postman Environments, and called through variables in the request URL and within the tests. Thus, the same Postman Collection of tests may be run against multiple environments using different Postman Environments.
Postman Runner allows us to run multiple iterations of our Collection. We also have the option to build in delays between tests. Lastly, Postman Runner can load external JSON and CSV formatted test data, which is beyond the scope of this post.
Postman contains a simple Run Summary UI for viewing test results.
Test Automation
To support running tests from the command line, Postman provides Newman. According to Postman, Newman is a command-line collection runner for Postman. Newman offers the same functionality as Postman’s Collection Runner, all part of the newman CLI. Newman is a Node.js module, installed globally as an npm package: npm install newman --global.
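A minimal sketch of a Newman run might look like the following; the Collection and Environment file names are hypothetical, but the flags are standard Newman options:

# run the Collection against the dev environment and export JUnit-format results
newman run spring-postgresql-demo.postman_collection.json \
  -e dev.postman_environment.json \
  --reporters cli,junit \
  --reporter-junit-export newman/newman-report.xml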
Typically, Development and Testing teams compose Postman Collections and define Postman Environments, locally. Teams run their tests locally in Postman, during their development cycle. Then, those same Postman Collections are executed from the command line, or more commonly as part of a CI/CD pipeline, such as with Jenkins.
Below, the same Collection of integration tests run in the Postman Runner UI is run from the command line, using Newman.
Jenkins
Without a doubt, Jenkins is the leading open-source CI/CD automation server. The building, testing, publishing, and deployment of microservices to Kubernetes is relatively easy with Jenkins. Generally, you would build, unit-test, push a new Docker image, and then deploy your application to Kubernetes using a series of CI/CD pipelines. Below, we see examples of these pipelines using Jenkins Blue Ocean, starting with a continuous integration pipeline, which includes unit-testing and Static Code Analysis (SCA) with SonarQube.
That is followed by a pipeline that builds the Docker Image, using the build artifact from the above pipeline, and pushes the Image to Docker Hub.
The third pipeline demonstrates building the three Kubernetes environments and deploying v1 of the Election service to the dev namespace. This pipeline is just for demonstration purposes; typically, you would separate these functions.
Spinnaker
An alternative to Jenkins for the deployment of microservices is Spinnaker, created by Netflix. According to Netflix, ‘Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.’ Spinnaker is designed to integrate easily with Jenkins, dividing responsibilities for continuous integration and delivery, with deployment. Below, we see two sample Spinnaker deployment pipelines, similar to Jenkins, for deploying v1 and v2 of the Election service to the non-prod GKE cluster.
Below, Spinnaker has deployed v2 of the Election service to dev using a Highlander deployment strategy. Subsequently, Spinnaker has deployed v2 to test using a Red/Black deployment strategy, leaving the previously released v1 Server Group in place, in case a rollback is required.
Once Spinnaker has completed the deployment tasks, the Postman Collections of smoke and integration tests are executed by Newman, as part of another Jenkins CI/CD pipeline.
In this pipeline, a set of basic smoke tests is run first to ensure the new deployment is running properly, and then the integration tests are executed.
In this simple example, we have a three-stage pipeline created from a Jenkinsfile (gist).
#!groovy

def ACCOUNT = "garystafford"
def PROJECT_NAME = "spring-postgresql-demo"
def ENVIRONMENT = "dev" // assumes 'api.dev.voter-demo.com' reachable

pipeline {
  agent any
  stages {
    stage('Checkout SCM') {
      steps {
        git changelog: true, poll: false,
          branch: 'master',
          url: "https://github.com/${ACCOUNT}/${PROJECT_NAME}"
      }
    }
    stage('Smoke Test') {
      steps {
        dir('postman') {
          nodejs('nodejs') {
            sh "sh ./newman-smoke-tests-${ENVIRONMENT}.sh"
          }
          junit '**/newman/*.xml'
        }
      }
    }
    stage('Integration Tests') {
      steps {
        dir('postman') {
          nodejs('nodejs') {
            sh "sh ./newman-integration-tests-${ENVIRONMENT}.sh"
          }
          junit '**/newman/*.xml'
        }
      }
    }
  }
}
Test Results
Newman offers several options for displaying test results. For easy integration with Jenkins, Newman results can be delivered in a format that can be displayed as JUnit test reports. The JUnit test report format, XML, is a popular method of standardizing test results from different testing tools. Below is a truncated example of a test report file (gist).
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="spring-postgresql-demo-v2" time="13.339000000000002">
  <testsuite name="/candidates/{{candidateId}}" id="31cee570-95a1-4768-9ac3-3d714fc7e139" tests="1" time="0.669">
    <testcase name="Status code is 200" time="0.669"/>
  </testsuite>
  <testsuite name="/candidates/{{candidateId}}" id="a5a62fe9-6271-4c89-a076-c95bba458ef8" tests="1" time="0.575">
    <testcase name="Status code is 200" time="0.575"/>
  </testsuite>
  <testsuite name="/candidates/{{candidateId}}" id="2fc4c902-b931-4b35-b28a-7e264f40ee9c" tests="1" time="0.568">
    <testcase name="Status code is 204" time="0.568"/>
  </testsuite>
  <testsuite name="/candidates/summary" id="94fe972e-32f4-4f58-a5d5-999cacdf7460" tests="1" time="0.337">
    <testcase name="Status code is 200" time="0.337"/>
  </testsuite>
  <testsuite name="/candidates/summary/{election}" id="f8f817c8-4785-49f1-8d09-8055b84c4fc0" tests="1" time="0.351">
    <testcase name="Status code is 200" time="0.351"/>
  </testsuite>
  <testsuite name="/candidates/search/findByLastName?lastName=Paul" id="504f8741-e9d2-4f05-b1ad-c14136030f34" tests="1" time="0.256">
    <testcase name="Status code is 200" time="0.256"/>
  </testsuite>
</testsuites>
Translating Newman test results to JUnit reports allows the percentage of test cases successfully executed to be tracked over multiple deployments, a universal testing metric. Below, we see the JUnit Test Report’s Test Result Trend graph for a series of test runs.
Deploying to Development
Development environments typically have a rapid turnover of application versions. Many teams use their Development environment as a continuous integration environment, where every commit that successfully builds and passes all unit tests is deployed. The purpose of the CI deployments is to ensure build artifacts will successfully deploy through the CI/CD pipeline, start properly, and pass a basic set of smoke tests.
Other teams use the Development environments as an extension of their local Minikube environment. The Development environment will possess some or all of the required external integration points, which the Developer’s local Minikube environment may not. The goal of the Development environment is to help Developers ensure their application is functioning correctly and is ready for the Test teams to evaluate, prior to promotion to the Test environment.
Some external integration points, such as external payment gateways, customer relationship management (CRM) systems, content management systems (CMS), or data analytics engines, are often stubbed-out in lower environments. Generally, third-party providers only offer a limited number of parallel non-Production integration environments. While an application may pass through several non-prod environments, testing against all external integration points will only occur in one or two of those environments.
With v2 of the Election service ready for testing on GKE, we deploy it to the GKE cluster’s dev namespace using the part4a-deploy-v2-dev.sh script. We will also delete the previous v1 version of the Election service. Similar to the v1 deployment script, the v2 scripts perform a kube-inject command, which manually injects the Istio sidecar proxy alongside the Election service, into each election v2 Pod. The deployment script also deploys an alternate Istio Route Rule, which routes requests made to the api.dev.voter-demo.com/v2/* resource to v2 of the Election service.
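Although the deployment script handles this for us, the injection and deployment step might look roughly like the following sketch; the resource file name is hypothetical:

# manually inject the Istio sidecar proxy, then apply to the dev namespace
istioctl kube-inject --includeIPRanges=$IP_RANGES \
  -f election-deployment-v2.yaml | kubectl apply -n dev -f -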
Once deployed, we run our Postman Collection of integration tests with Newman or as part of a CI/CD pipeline. In the Development environment, we may choose to run a limited set of tests for the sake of expediency, or because not all external integration points are accessible.
Promotion to Test
With local Minikube and Development environment testing complete, we promote and deploy v2 of the Election service to the Test environment, using the part4b-deploy-v2-test.sh script. In Test, we will not delete v1 of the Election service.
Often, an organization will maintain a running copy of all versions of an application currently deployed to Production, in a lower environment. Let’s look at two scenarios where this is common. First, v1 of the Election service has an issue in Production, which needs to be confirmed and may require a hot-fix by the Development team. Validation of the v1 Production bug is often done in a lower environment. The second scenario for having both versions running in an environment is when v1 and v2 both need to co-exist in Production. Organizations frequently support multiple API versions. Cutting over an entire API user-base to a new API version is often completed over a series of releases, and requires careful coordination with API consumers.
Testing All Versions
An essential role of integration testing should be to confirm that both versions of the Election service are functioning correctly, while simultaneously running in the same namespace. For example, we want to verify traffic is routed correctly, based on the HTTP request URL, to the correct version. Another common test scenario is database schema changes. Suppose we make what we believe are backward-compatible database changes to v2 of the Election service. We should be able to prove, through testing, that both the old and new versions function correctly against the latest version of the database schema.
There are different automation strategies that could be employed to test multiple versions of an application without creating separate Collections and Environments. A simple solution would be to templatize the Environments file, and then programmatically set the Postman Environment’s version variable from a pipeline parameter.
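For example, a pipeline step might rewrite a templated Environment file with a tool like jq before handing it to Newman; the file names and the API_VERSION parameter below are hypothetical:

# set the 'version' key from a pipeline parameter (sketch only)
jq --arg v "$API_VERSION" \
  '(.values[] | select(.key == "version")).value = $v' \
  test.postman_environment.template.json > test.postman_environment.json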
Once initial automated integration testing is complete, Test teams will typically execute additional forms of application testing if necessary, before signing off for UAT and Performance Testing to begin.
User-Acceptance Testing
With testing in the Test environments completed, we continue on to UAT. The term UAT suggests that a set of actual end-users (API consumers) of the Election service will perform their own testing. Frequently, UAT is only done for a short, fixed period of time, often with a specialized team of Testers. Issues experienced during UAT can be expensive and can impact the ability to release an application to Production on time if sign-off is delayed.
After deploying v2 of the Election service to UAT, and before opening it up to the UAT team, we would naturally want to repeat the same integration testing process we conducted in the previous Test environment. We must ensure that v2 is functioning as expected before our end-users begin their testing. This is where leveraging a tool like Jenkins makes automated integration testing more manageable and repeatable. One strategy would be to duplicate our existing Development and Test pipelines, and re-target the new pipeline to call v2 of the Election service in UAT.
Again, in a JUnit report format, we can examine individual results through the Jenkins Console.
We can also examine individual results from each test run using a specific build’s Console Output.
Testing and Instrumentation
To fully evaluate the integration test results, you must look beyond just the percentage of test cases executed successfully. It makes little sense to release a new version of an application if it passes all functional tests but significantly increases client response times, unnecessarily increases memory consumption, wastes other compute resources, or is grossly inefficient in the number of calls it makes to the database or third-party dependencies. Oftentimes, integration testing uncovers potential performance bottlenecks that are incorporated into performance test plans.
Critical intelligence about the performance of the application can only be obtained through the use of logging and metrics collection and instrumentation. Istio provides this telemetry out-of-the-box with Zipkin, Jaeger, Service Graph, Fluentd, Prometheus, and Grafana. In the included Grafana Istio Dashboard below, we see the performance of v1 of the Election service, under test, in the Test environment. We can compare request and response payload size and timing, as well as request and response times to external integration points, such as our Amazon RDS database. We are able to observe the impact of individual test requests on the application and all its integration points.
As part of integration testing, we should monitor the Amazon RDS CloudWatch metrics. CloudWatch allows us to evaluate critical database performance metrics, such as the number of concurrent database connections, CPU utilization, read and write IOPS, Memory consumption, and disk storage requirements.
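For example, a scheduled job or pipeline step could pull these metrics with the AWS CLI; the database instance identifier and time range below are hypothetical:

# sample the average number of concurrent database connections, per minute
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS --metric-name DatabaseConnections \
  --dimensions Name=DBInstanceIdentifier,Value=election-postgresql \
  --statistics Average --period 60 \
  --start-time 2018-04-17T00:00:00Z --end-time 2018-04-17T01:00:00Z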
A discussion of metrics starts moving us toward load and performance testing against Production service-level agreements (SLAs). Using a similar approach to integration testing, with load and performance testing, we should be able to accurately estimate the sizing requirements of our new application for Production. Load and Performance Testing help answer questions such as what type and size of compute resources are required for our GKE Production cluster and our Amazon RDS database, or how many compute nodes and instances (Pods) are necessary to support the expected user load.
All opinions expressed in this post are my own, and not necessarily the views of my current or past employers, or their clients.
Managing Applications Across Multiple Kubernetes Environments with Istio: Part 1
Posted by Gary A. Stafford in Build Automation, Cloud, DevOps, Enterprise Software Development, GCP on April 13, 2018
In the following two-part post, we will explore the creation of a GKE cluster, replete with the latest version of Istio, often referred to as IoK (Istio on Kubernetes). We will then deploy, perform integration testing, and promote an application across multiple environments within the cluster.
Application Environment Management
Container orchestration engines, such as Kubernetes, have revolutionized the deployment and management of microservice-based architectures. Combined with a Service Mesh, such as Istio, Kubernetes provides a secure, instrumented, enterprise-grade platform for modern, distributed applications.
One of many challenges with any platform, even one built on Kubernetes, is managing multiple application environments. Whether applications run on bare-metal, virtual machines, or within containers, deploying to and managing multiple application environments increases operational complexity.
As Agile software development practices continue to increase within organizations, the need for multiple, ephemeral, on-demand environments also grows. Traditional environments that were once only composed of Development, Test, and Production have expanded in enterprises to include a dozen or more environments, to support the many stages of the modern software development lifecycle. Current application environments often include Continuous Integration and Delivery (CI), Sandbox, Development, Integration Testing (QA), User Acceptance Testing (UAT), Staging, Performance, Production, Disaster Recovery (DR), and Hotfix. Each environment requires its own compute, security, networking, configuration, and corresponding dependencies, such as databases and message queues.
Environments and Kubernetes
There are various infrastructure architectural patterns employed by Operations and DevOps teams to provide Kubernetes-based application environments to Development teams. One pattern consists of separate physical Kubernetes clusters. Separate clusters provide a high level of isolation. Isolation offers many advantages, including increased performance and security, the ability to tune each cluster’s compute resources to meet differing SLAs, and ensuring a reduced blast radius when things go terribly wrong. Conversely, separate clusters often result in increased infrastructure costs and operational overhead, and complex deployment strategies. This pattern is often seen in heavily regulated, compliance-driven organizations, where security, auditability, and separation of duties are paramount.
Namespaces
An alternative to separate physical Kubernetes clusters is virtual clusters. Virtual clusters are created using Kubernetes Namespaces. According to Kubernetes documentation, ‘Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces’.
In most enterprises, Operations and DevOps teams deliver a combination of both virtual and physical Kubernetes clusters. For example, lower environments, such as those used for Development, Test, and UAT, often reside on the same physical cluster, each in a separate virtual cluster (namespace). At the same time, environments such as Performance, Staging, Production, and DR, often require the level of isolation only achievable with physical Kubernetes clusters.
In the Cloud, physical clusters may be further isolated and secured using separate cloud accounts. For example, with AWS you might have a Non-Production AWS account and a Production AWS account, both managed by an AWS Organization.
In a multi-environment scenario, a single physical cluster would contain multiple namespaces, into which separate versions of an application or applications are independently deployed, accessed, and tested. Below we see a simple example of a single Kubernetes non-prod cluster on the left, containing multiple versions of different microservices, deployed across three namespaces. You would likely see this type of deployment pattern as applications are deployed, tested, and promoted across lower environments, before being released to Production.
Example Application
To demonstrate the promotion and testing of an application across multiple environments, we will use a simple election-themed microservice, developed for a previous post, Developing Cloud-Native Data-Centric Spring Boot Applications for Pivotal Cloud Foundry. The Spring Boot-based application allows API consumers to create, read, update, and delete candidates, elections, and votes, through an exposed set of resources, accessed via RESTful endpoints.
Source Code
All source code for this post can be found on GitHub. The project’s README file contains a list of the Election microservice’s endpoints. To get started quickly, use one of the two following options (gist).
# clone the official v3.0.0 release for this post
git clone --depth 1 --branch v3.0.0 \
  https://github.com/garystafford/spring-postgresql-demo.git \
  && cd spring-postgresql-demo \
  && git checkout -b v3.0.0

# clone the latest version of code (newer than article)
git clone --depth 1 --branch master \
  https://github.com/garystafford/spring-postgresql-demo.git \
  && cd spring-postgresql-demo
Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.
This project includes a kubernetes sub-directory, containing all the Kubernetes resource files and scripts necessary to recreate the example shown in the post. The scripts are designed to be easily adapted to a CI/CD DevOps workflow. You will need to modify the script’s variables to match your own environment’s configuration.
Database
The post’s Spring Boot application relies on a PostgreSQL database. In the previous post, ElephantSQL was used to host the PostgreSQL instance. This time, I have used Amazon RDS for PostgreSQL. Amazon RDS for PostgreSQL and ElephantSQL are equivalent choices. For simplicity, you might also consider a containerized version of PostgreSQL, managed as part of your Kubernetes environment.
Ideally, each environment should have a separate database instance. Separate database instances provide better isolation, fine-grained RBAC, easier test data lifecycle management, and improved performance. For this post, however, a single, shared, minimally-sized RDS instance will suffice.
The PostgreSQL database’s sensitive connection information, including database URL, username, and password, are stored as Kubernetes Secrets, one secret for each namespace, and accessed by the Kubernetes Deployment controllers.
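As a sketch, creating such a secret for the dev namespace might look like the following; the secret name, key names, and placeholder values are hypothetical and must match what the Deployment resources expect:

# create a database-connection secret in the dev namespace (placeholders shown)
kubectl create secret generic election-postgresql-secret -n dev \
  --from-literal=database-url=jdbc:postgresql://<rds-endpoint>:5432/elections \
  --from-literal=database-username=<username> \
  --from-literal=database-password=<password>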
Istio
Although not required, Istio makes the task of managing multiple virtual and physical clusters significantly easier. Following Istio’s online installation instructions, download and install Istio 0.7.1.
To create a Google Kubernetes Engine (GKE) cluster with Istio, you could use the gcloud CLI’s container clusters create command, followed by installing Istio manually using Istio’s supplied Kubernetes resource files. This was the method used in the previous post, Deploying and Configuring Istio on Google Kubernetes Engine (GKE).
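For reference, a minimal sketch of that manual method might look like the following, using values that mirror this post’s configuration; Istio would then be installed separately:

# create the GKE cluster first; Istio is installed in a subsequent step
gcloud container clusters create election-nonprod-cluster \
  --cluster-version 1.9.6-gke.0 \
  --zone us-east1-b \
  --num-nodes 2 \
  --machine-type n1-standard-1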
Alternatively, you could use Istio’s Google Cloud Platform (GCP) Deployment Manager files, along with the gcloud CLI’s deployment-manager deployments create command, to create a Kubernetes cluster, replete with Istio, in a single step. Although arguably simpler, the deployment-manager method does not provide the same level of fine-grained control over cluster configuration as the container clusters create method. For this post, the deployment-manager method will suffice.
The latest version of the Google Kubernetes Engine, available at the time of this post, is 1.9.6-gke.0. However, installing this version of Kubernetes Engine using Istio’s supplied Deployment Manager Jinja template requires updating the hardcoded value in the istio-cluster.jinja file from 1.9.2-gke.1. This has been updated in the next release of Istio.
Another change concerns the version of Istio offered as an option in the istio-cluster-jinja.schema file. Specifically, the only choice for the installIstioRelease configuration variable is 0.6.0; the template does not include 0.7.1 as an option. Modify the istio-cluster-jinja.schema file to include the choice of 0.7.1. Optionally, I also set 0.7.1 as the default. This change should also be included in the next version of Istio.
There are a limited number of GKE and Istio configuration defaults defined in the istio-cluster.yaml file, all of which can be overridden from the command line.

To optimize the cluster and keep compute costs to a minimum, I have overridden several of the default configuration values using the --properties flag with the gcloud CLI’s deployment-manager deployments create command. The README file provided by Istio explains how to use this feature. Configuration changes include the name of the cluster, the version of Istio (0.7.1), the number of nodes (2), the GCP zone (us-east1-b), and the node instance type (n1-standard-1). I also disabled automatic sidecar injection and chose not to install the Istio sample book application onto the cluster (gist).
# change to match your environment
ISTIO_HOME="/Applications/istio-0.7.1"
GCP_DEPLOYMENT_MANAGER="$ISTIO_HOME/install/gcp/deployment_manager"
GCP_PROJECT="springdemo-199819"
GKE_CLUSTER="election-nonprod-cluster"
GCP_ZONE="us-east1-b"
ISTIO_VER="0.7.1"
NODE_COUNT="1"
INSTANCE_TYPE="n1-standard-1"

# deploy gke istio cluster
gcloud deployment-manager deployments create springdemo-istio-demo-deployment \
  --template=$GCP_DEPLOYMENT_MANAGER/istio-cluster.jinja \
  --properties "gkeClusterName:$GKE_CLUSTER,installIstioRelease:$ISTIO_VER,"\
"zone:$GCP_ZONE,initialNodeCount:$NODE_COUNT,instanceType:$INSTANCE_TYPE,"\
"enableAutomaticSidecarInjection:false,enableMutualTLS:true,enableBookInfoSample:false"

# get creds for cluster
gcloud container clusters get-credentials $GKE_CLUSTER \
  --zone $GCP_ZONE --project $GCP_PROJECT

# required dashboard access
kubectl apply -f ./roles/clusterrolebinding-dashboard.yaml

# use dashboard token to sign into dashboard:
kubectl -n kube-system describe secret kubernetes-dashboard-token
Cluster Provisioning
To provision the GKE cluster and deploy Istio, first modify the variables in the part1-create-gke-cluster.sh file (shown above), then execute the script. The script also retrieves your cluster’s credentials, to enable command-line interaction with the cluster using the kubectl CLI.
Once complete, validate the version of Istio by examining Istio’s Docker image versions, using the following command (gist).
kubectl get pods --all-namespaces -o jsonpath="{..image}" | \
  tr -s '[[:space:]]' '\n' | sort | uniq -c | \
  egrep -oE "\b(docker.io/istio).*\b"
The result should be a list of Istio 0.7.1 Docker images.
The new cluster should be running GKE version 1.9.6-gke.0. This can be confirmed using the following command (gist).
gcloud container clusters describe election-nonprod-cluster | \
  egrep currentMasterVersion
Or, from the GCP Cloud Console.
The new GKE cluster should be composed of (2) n1-standard-1 nodes, running in the us-east1-b zone.
As part of the deployment, all of the separate Istio components should be running within the istio-system namespace.
As part of the deployment, an external IP address and a load balancer were provisioned by GCP and associated with the Istio Ingress. GCP’s Deployment Manager should have also created the necessary firewall rules for cluster ingress and egress.
Building the Environments
Next, we will create three namespaces, dev, test, and uat, which represent three non-production environments. Each environment consists of a Kubernetes Namespace, Istio Ingress, and Secret. The three environments are deployed using the part2-create-environments.sh script.
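A rough sketch of what such a script might do for each environment is shown below; the resource file paths are hypothetical, and the actual script in the repository may differ:

# create a namespace, ingress, and secret for each environment (sketch only)
for ns in dev test uat; do
  kubectl create namespace $ns
  kubectl apply -n $ns -f ./ingress/ingress-$ns.yaml
  kubectl apply -n $ns -f ./secrets/secret-$ns.yaml
done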
Deploying Election v1
For this demonstration, we will assume v1 of the Election service has been previously promoted, tested, and released to Production. Hence, we would expect v1 to be deployed to each of the lower environments. Additionally, a new v2 of the Election service has been developed and tested locally using Minikube. It is ready for deployment to the three environments and will undergo integration testing (detailed in Part Two of the post).
If you recall from our GKE/Istio configuration, we chose manual sidecar injection of the Istio proxy. Therefore, all election deployment scripts perform a kube-inject command. To connect to our external Amazon RDS database, this kube-inject command requires the includeIPRanges flag, which contains two cluster configuration values, the cluster’s IPv4 CIDR (clusterIpv4Cidr) and the service’s IPv4 CIDR (servicesIpv4Cidr).
Before deployment, we export the includeIPRanges value as an environment variable, which will be used by the deployment scripts, using the following command: export IP_RANGES=$(sh ./get-cluster-ip-ranges.sh). The get-cluster-ip-ranges.sh script is shown below (gist).
# run this command line:
# export IP_RANGES=$(sh ./get-cluster-ip-ranges.sh)

# capture the clusterIpv4Cidr and servicesIpv4Cidr values
# required for manual sidecar injection with kube-inject

# change to match your environment
GCP_PROJECT="springdemo-199819"
GKE_CLUSTER="election-nonprod-cluster"
GCP_ZONE="us-east1-b"

CLUSTER_IPV4_CIDR=$(gcloud container clusters describe ${GKE_CLUSTER} \
  --zone ${GCP_ZONE} --project ${GCP_PROJECT} \
  | egrep clusterIpv4Cidr | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{2}\b")

SERVICES_IPV4_CIDR=$(gcloud container clusters describe ${GKE_CLUSTER} \
  --zone ${GCP_ZONE} --project ${GCP_PROJECT} \
  | egrep servicesIpv4Cidr | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{2}\b")

export IP_RANGES="$CLUSTER_IPV4_CIDR,$SERVICES_IPV4_CIDR"
echo $IP_RANGES
Using this method with manual sidecar injection is discussed in the previous post, Deploying and Configuring Istio on Google Kubernetes Engine (GKE).
To deploy v1 of the Election service to all three namespaces, execute the part3-deploy-v1-all-envs.sh script.
We should now have two instances of v1 of the Election service running in each of the dev, test, and uat namespaces, for a total of six election-v1 Kubernetes Pods.
HTTP Request Routing
Before deploying additional versions of the Election service in Part Two of this post, we should understand how external HTTP requests will be routed to different versions of the Election service, in multiple namespaces. In the post’s simple example, we have a matrix of three namespaces and two versions of the Election service. That means we need a method to route external traffic to up to six different election versions. There are multiple ways to solve this problem, each with its own pros and cons. For this post, I found a combination of DNS and HTTP request rewriting to be most effective.
DNS
First, to route external HTTP requests to the correct namespace, we will use subdomains. Using my current DNS management solution, Azure DNS, I create three new A records for my registered domain, voter-demo.com. There is one A record for each namespace: api.dev, api.test, and api.uat.
All three subdomains should resolve to the single external IP address assigned to the cluster’s load balancer.
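If you are also using Azure DNS, creating each record might look like the following sketch; the resource group name and IP address are hypothetical:

# add an A record for the dev namespace's subdomain (sketch only)
az network dns record-set a add-record \
  --resource-group voter-demo-rg \
  --zone-name voter-demo.com \
  --record-set-name api.dev \
  --ipv4-address 35.227.0.100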
As part of the environments creation, the script deployed an Istio Ingress to each environment. The Ingress accepts traffic based on a match to the Request URL (gist).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  labels:
    name: dev-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: api.dev.voter-demo.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: election
          servicePort: 8080
The istio-ingress service load balancer, running in the istio-system namespace, routes inbound external traffic, based on the Request URL, to the Istio Ingress in the appropriate namespace.
The Istio Ingress in the namespace then directs the traffic to one of the Kubernetes Pods, containing the Election service and the Istio sidecar proxy.
HTTP Rewrite
To direct the HTTP request to v1 or v2 of the Election service, an Istio Route Rule is used. As part of the environment creation, along with the Namespace and Ingress resources, we also deployed an Istio Route Rule to each environment. This particular Route Rule examines the HTTP request URL for a /v1/ or /v2/ sub-collection resource. If it finds the sub-collection resource, it performs an HTTPRewrite, removing the sub-collection resource from the HTTP request. The Route Rule then directs the HTTP request to the appropriate version of the Election service, v1 or v2 (gist).
# kubectl apply -f ./routerules/routerule-election-v1.yaml -n dev
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: election-v1
spec:
  destination:
    name: election
  match:
    request:
      headers:
        uri:
          prefix: /v1/
  rewrite:
    uri: /
  route:
  - labels:
      app: election
      version: v1

According to Istio, ‘if there are multiple registered instances with the specified tag(s), they will be routed to based on the load balancing policy (algorithm) configured for the service (round-robin by default).’ We are using the default load balancing algorithm to distribute requests across multiple copies of each Election service.
The final external HTTP request routing for the Election service in the Non-Production GKE cluster is shown on the left, in the diagram below. Every Election service Pod also contains an Istio sidecar proxy instance.
Below are some examples of HTTP GET requests that would be successfully routed to our Election service, using the above-described routing strategy (gist).
# details of an election, id 5, requested from v1 elections in dev
curl http://api.dev.voter-demo.com/v1/elections/5

# list of candidates, last name Obama, requested from v2 of elections in test
curl http://api.test.voter-demo.com/v2/candidates/search/findByLastName?lastName=Obama

# process start time metric, requested from v2 of elections in uat
curl http://api.uat.voter-demo.com/v2/actuator/metrics/process.start.time

# vote summary, requested from v1 of elections in production
curl http://api.voter-demo.com/v1/vote-totals/summary/2012%20Presidential%20Election
Part Two
In Part One of this post, we created the Kubernetes cluster on the Google Cloud Platform, installed Istio, provisioned a PostgreSQL database, and configured DNS for routing. Under the assumption that v1 of the Election microservice had already been released to Production, we deployed v1 to each of the three namespaces.
In Part Two of this post, we will learn how to utilize the sophisticated API testing capabilities of Postman and Newman to ensure v2 is ready for UAT and release to Production. We will deploy and perform integration testing of a new v2 of the Election microservice, locally, on Kubernetes Minikube. Once we are confident v2 is functioning as intended, we will promote and test v2 across the dev, test, and uat namespaces.
All opinions expressed in this post are my own, and not necessarily the views of my current or past employers, or their clients.
Using Weave to Network a Docker Multi-Container Java Application
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development, Java Development, Software Development on September 17, 2015
Use the latest version of Weaveworks’ Weave Net to network a multi-container, Dockerized Java Spring web application.
Introduction
The last post demonstrated how to build and deploy the Java Spring Music application to a VirtualBox, multi-container test environment. The environment contained (1) NGINX container, (2) load-balanced Tomcat containers, (1) MongoDB container, (1) ELK Stack container, and (1) Logspout container, all on one VM.
In that post, we used Docker’s links option. The links option, which modifies the container’s /etc/hosts file, allows two Docker containers to communicate with each other. For example, the NGINX container is linked to both Tomcat containers:
proxy:
  build: nginx/
  ports:
    - "80:80"
  links:
    - app01
    - app02
Although container linking works, links are not very practical beyond a small number of static containers or a single container host. With linking, you must explicitly define each service-to-container relationship you want Docker to configure. Linking is also not an option with Docker Swarm, for linking containers across multiple virtual machine container hosts. With Docker Networking in its early ‘experimental’ stages and the Swarm limitation, it’s hard to foresee the use of linking beyond limited development and test environments.
Weave Net
Weave Net, aka Weave, is one of a trio of products developed by Weaveworks. The other two members of the trio include Weave Run and Weave Scope. According to Weaveworks’ website, ‘Weave Net connects all your containers into a transparent, dynamic and resilient mesh. This is one of the easiest ways to set up clustered applications that run anywhere.‘ Weave allows us to eliminate the dependency on the links option to connect our containers. Weave does all the linking of containers for us, automatically.
Weave v1.1.0
If you worked with previous editions of Weave, you will appreciate that Weave versions v1.0.x and v1.1.0 are significant steps forward in the evolution of Weave. Weaveworks’ GitHub Weave Release page details the many improvements. I also suggest reading Weave ‘Gossip’ DNS, on Weaveworks’ blog, before continuing. The post details the improvements of Weave v1.1.0. Some of those key new features include:
- Completely redesigned weaveDNS, dubbed ‘Gossip DNS’
- Registrations are broadcast to all weaveDNS instances
- Registered entries are stored in-memory and handle lookups locally
- Weave router’s gossip implementation periodically synchronizes DNS mappings between peers
- Ability to recover from network partitions and other transient failures
- Each peer is aware of the hostnames and IP addresses of all containers in the Weave network
- weave launch now launches all weave components, including the router, weaveDNS, and the proxy, greatly simplifying setup
- weaveDNS is now embedded in the Weave router
Weave-based Network
In this post, we will reuse the Java Spring Music application from the last post. However, we will replace the project’s static dependencies on Docker links with Weave. This post will demonstrate the most basic features of Weave, using a single cluster. In a future post, we will demonstrate how easily Weave also integrates with multiple clusters.
All files for this post can be found in the swarm-weave branch of the GitHub Repository. Instructions to clone are below.
Configuration
If you recall from the previous post, the Docker Compose YAML file (docker-compose.yml) looked similar to this:
proxy:
  build: nginx/
  ports:
    - "80:80"
  links:
    - app01
    - app02
  hostname: "proxy"

app01:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8180:8080"
  links:
    - nosqldb
    - elk
  hostname: "app01"

app02:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8280:8080"
  links:
    - nosqldb
    - elk
  hostname: "app02"

nosqldb:
  build: mongo/
  hostname: "nosqldb"
  volumes:
    - "/opt/mongodb:/data/db"

elk:
  build: elk/
  ports:
    - "8081:80"
    - "8082:9200"
  expose:
    - "5000/udp"

logspout:
  build: logspout/
  volumes:
    - "/var/run/docker.sock:/tmp/docker.sock"
  links:
    - elk
  ports:
    - "8083:80"
  environment:
    - ROUTE_URIS=logstash://elk:5000
Implementing Weave simplifies the docker-compose.yml considerably. Below is the new Weave version of the docker-compose.yml. The links option has been removed from all containers. Additionally, the hostnames have been removed, as they serve no real purpose moving forward. The logspout service’s environment option has been modified to use the elk container’s full name, as opposed to the hostname.
The only addition is the volumes_from option to the proxy service. We must ensure that the two Tomcat containers start before the NGINX container. The links option indirectly provided this startup-ordering functionality, previously.
proxy:
  build: nginx/
  ports:
    - "80:80"
  volumes_from:
    - app01
    - app02

app01:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8180:8080"

app02:
  build: tomcat/
  expose:
    - "8080"
  ports:
    - "8280:8080"

nosqldb:
  build: mongo/
  volumes:
    - "/opt/mongodb:/data/db"

elk:
  build: elk/
  ports:
    - "8081:80"
    - "8082:9200"
  expose:
    - "5000/udp"

logspout:
  build: logspout/
  volumes:
    - "/var/run/docker.sock:/tmp/docker.sock"
  ports:
    - "8083:80"
  environment:
    - ROUTE_URIS=logstash://music_elk_1:5000
Next, we need to modify the NGINX configuration slightly. In the previous post, we referenced the Tomcat service names, as shown below.
upstream backend {
  server app01:8080;
  server app02:8080;
}
Weave will automatically add the two Tomcat container names to the NGINX container’s /etc/hosts file. We will add these Tomcat container names to NGINX’s configuration file.
upstream backend {
  server music_app01_1:8080;
  server music_app02_1:8080;
}
In an actual Production environment, we would use a template, along with a service discovery tool, such as Consul, to automatically populate the container names, as containers are dynamically created or destroyed.
Installing and Running Weave
After cloning this post’s GitHub repository, I recommend first installing and configuring Weave. Next, build the container host VM using Docker Machine. Lastly, build the containers using Docker Compose. The build_project.sh script below will take care of all the necessary steps.
#!/bin/sh

########################################################################
#
# title: Build Complete Project
# author: Gary A. Stafford (https://programmaticponderings.com)
# url: https://github.com/garystafford/spring-music-docker
# description: Clone and build complete Spring Music Docker project
#
# to run: sh ./build_project.sh
#
########################################################################

# install latest weave
curl -L git.io/weave -o /usr/local/bin/weave &&
chmod a+x /usr/local/bin/weave &&
weave version

# clone project
git clone -b swarm-weave \
  --single-branch --branch swarm-weave \
  https://github.com/garystafford/spring-music-docker.git &&
cd spring-music-docker

# build VM
docker-machine create --driver virtualbox springmusic --debug

# create directory to store mongo data on host
docker-machine ssh springmusic mkdir /opt/mongodb

# set new environment
docker-machine env springmusic &&
eval "$(docker-machine env springmusic)"

# launch weave and weaveproxy/weaveDNS containers
weave launch &&
tlsargs=$(docker-machine ssh springmusic \
  "cat /proc/\$(pgrep /usr/local/bin/docker)/cmdline | tr '\0' '\n' | grep ^--tls | tr '\n' ' '")
weave launch-proxy $tlsargs &&
eval "$(weave env)" &&

# test/confirm weave status
weave status &&
docker logs weaveproxy

# pull and build images and containers
# this step will take several minutes to pull images first time
docker-compose -f docker-compose.yml -p music up -d

# wait for container apps to fully start
sleep 15

# test weave (should list entries for all containers)
docker exec -it music_proxy_1 cat /etc/hosts

# run quick test of Spring Music application
for i in {1..10}
do
  curl -I --url $(docker-machine ip springmusic)
done
One last test, to ensure that MongoDB is using the host’s volume and not storing data in the MongoDB container’s /data/db directory, execute the following command: docker-machine ssh springmusic ls -Alh /opt/mongodb. You should see MongoDB-related content being stored here.
Testing Weave
Running the weave status command, we should observe that Weave returned a status similar to the example below:
gstafford@gstafford-X555LA:$ weave status

       Version: v1.1.0

       Service: router
      Protocol: weave 1..2
          Name: 6a:69:11:1b:b4:e3(springmusic)
    Encryption: disabled
 PeerDiscovery: enabled
       Targets: 0
   Connections: 0
         Peers: 1

       Service: ipam
     Consensus: achieved
         Range: [10.32.0.0-10.48.0.0)
 DefaultSubnet: 10.32.0.0/12

       Service: dns
        Domain: weave.local.
           TTL: 1
       Entries: 2

       Service: proxy
       Address: tcp://192.168.99.100:12375
Running the docker exec -it music_proxy_1 cat /etc/hosts command, we should observe that WeaveDNS has automatically added entries for all containers to the music_proxy_1 container’s /etc/hosts file. WeaveDNS will also remove the addresses of any containers that die. This offers a simple way to implement redundancy.
gstafford@gstafford-X555LA:$ docker exec -it music_proxy_1 cat /etc/hosts

# modified by weave
10.32.0.6       music_proxy_1
127.0.0.1       localhost

172.17.0.131    weave weave.bridge
172.17.0.133    music_elk_1 music_elk_1.bridge
172.17.0.134    music_nosqldb_1 music_nosqldb_1.bridge
172.17.0.138    music_app02_1 music_app02_1.bridge
172.17.0.139    music_logspout_1 music_logspout_1.bridge
172.17.0.140    music_app01_1 music_app01_1.bridge

::1             ip6-localhost ip6-loopback localhost
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
Weave resolves each container’s name to its eth0 IP address, created by Docker’s docker0 Ethernet bridge. Each container can now communicate with all other containers in the cluster.
Results
Resulting virtual machines, network, images, and containers:
gstafford@gstafford-X555LA:$ docker-machine ls
NAME          ACTIVE   DRIVER       STATE     URL                         SWARM
springmusic   *        virtualbox   Running   tcp://192.168.99.100:2376

gstafford@gstafford-X555LA:$ docker images
REPOSITORY             TAG      IMAGE ID       CREATED       VIRTUAL SIZE
music_app02            latest   632c782010ac   3 days ago    370.4 MB
music_app01            latest   632c782010ac   3 days ago    370.4 MB
music_proxy            latest   171624a31920   3 days ago    144.5 MB
music_nosqldb          latest   2b3b46af5ef3   3 days ago    260.8 MB
music_elk              latest   5c18dae84b26   3 days ago    1.05 GB
weaveworks/weaveexec   v1.1.0   69c6bfa7934f   5 days ago    58.18 MB
weaveworks/weave       v1.1.0   5dccf0533147   5 days ago    17.53 MB
music_logspout         latest   fe64597ab0c4   8 days ago    24.36 MB
gliderlabs/logspout    master   40a52d6ca462   9 days ago    14.75 MB
willdurand/elk         latest   04cd7334eb5d   2 weeks ago   1.05 GB
tomcat                 latest   6fe1972e6b08   2 weeks ago   347.7 MB
mongo                  latest   5c9464760d54   2 weeks ago   260.8 MB
nginx                  latest   cd3cf76a61ee   2 weeks ago   132.9 MB

gstafford@gstafford-X555LA:$ weave ps
weave:expose 6a:69:11:1b:b4:e3
2bce66e3b33b fa:07:7e:85:37:1b 10.32.0.5/12
604dbbc4473f 6a:73:8d:54:cc:fe 10.32.0.4/12
ea64b42cf5a1 c2:69:73:84:67:69 10.32.0.3/12
85b1e8a9b8d0 aa:f7:12:cd:b7:13 10.32.0.6/12
81041fc97d1f 2e:1e:82:67:89:5d 10.32.0.2/12
e80c04bdbfaf 1e:95:a5:b2:9d:30 10.32.0.1/12
18c22e7f1c33 7e:43:54:db:8d:b8

gstafford@gstafford-X555LA:$ docker ps -a
CONTAINER ID   IMAGE                         COMMAND                  CREATED      STATUS      PORTS                                                                                            NAMES
2bce66e3b33b   music_app01                   "/w/w catalina.sh run"   3 days ago   Up 3 days   0.0.0.0:8180->8080/tcp                                                                           music_app01_1
604dbbc4473f   music_logspout                "/w/w /bin/logspout"     3 days ago   Up 3 days   8000/tcp, 0.0.0.0:8083->80/tcp                                                                   music_logspout_1
ea64b42cf5a1   music_app02                   "/w/w catalina.sh run"   3 days ago   Up 3 days   0.0.0.0:8280->8080/tcp                                                                           music_app02_1
85b1e8a9b8d0   music_proxy                   "/w/w nginx -g 'daemo"   3 days ago   Up 3 days   0.0.0.0:80->80/tcp, 443/tcp                                                                      music_proxy_1
81041fc97d1f   music_nosqldb                 "/w/w /entrypoint.sh "   3 days ago   Up 3 days   27017/tcp                                                                                        music_nosqldb_1
e80c04bdbfaf   music_elk                     "/w/w /usr/bin/superv"   3 days ago   Up 3 days   5000/0, 0.0.0.0:8081->80/tcp, 0.0.0.0:8082->9200/tcp                                             music_elk_1
8eafc6225fc1   weaveworks/weaveexec:v1.1.0   "/home/weave/weavepro"   3 days ago   Up 3 days                                                                                                    weaveproxy
18c22e7f1c33   weaveworks/weave:v1.1.0       "/home/weave/weaver -"   3 days ago   Up 3 days   172.17.42.1:53->53/udp, 0.0.0.0:6783->6783/tcp, 0.0.0.0:6783->6783/udp, 172.17.42.1:53->53/tcp   weave
Spring Music Application Links
Assuming the springmusic VM is running at 192.168.99.100, these are the accessible URLs for each of the environment’s major components:
- Spring Music: 192.168.99.100
- NGINX: 192.168.99.100/nginx_status
- Tomcat Node 1*: 192.168.99.100:8180/manager
- Tomcat Node 2*: 192.168.99.100:8280/manager
- Kibana: 192.168.99.100:8081
- Elasticsearch: 192.168.99.100:8082
- Elasticsearch: 192.168.99.100:8082/_status?pretty
- Logspout: 192.168.99.100:8083/logs
* The Tomcat user name is admin and the password is t0mcat53rv3r.
Continuous Integration and Delivery of Microservices using Jenkins CI, Docker Machine, and Docker Compose
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development, Java Development on June 27, 2015
Continuously integrate, deploy, and test a RestExpress microservices-based, multi-container, Java EE application to a virtual test environment, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, and VirtualBox.
Introduction
In the last post, we learned how to use Jenkins CI, Maven, and Docker Compose to take a set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated Docker containers. We built the microservices, Docker images, and Docker containers. We deployed the containers directly onto the Jenkins CI Server machine. Finally, we performed integration tests to ensure the services were functioning as expected, within the containers.
In a more mature continuous delivery model, we would have deployed the running containers to a fresh ‘production-like’ environment to be more accurately tested, not the Jenkins CI Server host machine. In this post, we will learn how to use the recently released Docker Machine to create a fresh test environment in which to build and host our project’s ten Docker containers. We will couple Docker Machine with Oracle’s VirtualBox, Jenkins CI, and Docker Compose to automatically build and test the services within their containers, within the virtual ‘test’ environment.
Update: All code for this post is available on GitHub, release version v2.1.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v2.1.0’ command).
Docker Machine
If you recall in the last post, after compiling and packaging the microservices, Jenkins was used to deploy the build artifacts to the Virtual-Vehicles Docker GitHub project, as shown below.
We then used Jenkins, with the Docker CLI and the Docker Compose CLI, to automatically build and test the images and containers. This step will not change; however, we will first use Docker Machine to automatically build a test environment, in which we will build the Docker images and containers.
I’ve copied and modified the second Jenkins job we used in the last post, as shown below. The new job is titled, ‘Virtual-Vehicles_Docker_Machine’. This will replace the previous job, ‘Virtual-Vehicles_Docker_Compose’.
The first step in the new Jenkins job is to clone the Virtual-Vehicles Docker GitHub repository.
Next, Jenkins runs a bash script to automatically build the test VM with Docker Machine, build the Docker images and containers with Docker Compose within the new VM, and finally test the services.
The bash script executed by Jenkins contains the following commands:
# optional: record current versions of docker apps with each build
docker -v && docker-compose -v && docker-machine -v

# set-up: clean up any previous machine failures
docker-machine stop test || echo "nothing to stop" && \
docker-machine rm test   || echo "nothing to remove"

# use docker-machine to create and configure 'test' environment
# add a -D (debug) if having issues
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"

# use docker-compose to pull and build new images and containers
docker-compose -p jenkins up -d

# optional: list machines, images, and containers
docker-machine ls && docker images && docker ps -a

# wait for containers to fully start before tests fire up
sleep 30

# test the services
sh tests.sh $(docker-machine ip test)

# tear down: stop and remove 'test' environment
docker-machine stop test && docker-machine rm test
As the above script shows, first Jenkins uses the Docker Machine CLI to build and activate the ‘test’ virtual machine, using the VirtualBox driver. As of docker-machine version 0.3.0, the VirtualBox driver requires at least VirtualBox 4.3.28 to be installed.
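Before running the job, it may be worth sanity-checking the tool versions on the Jenkins host; a quick sketch:

docker-machine -v      # expect 0.3.0 or newer
VBoxManage --version   # expect 4.3.28 or newer for the virtualbox driver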
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"
Once this step is complete, you will have the following VirtualBox VM created, running, and active.
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
Next, Jenkins uses the Docker Compose CLI to execute the project’s Docker Compose YAML file.
docker-compose -p jenkins up -d
The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.
########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Pulls (5) images, builds (5) images, and builds (11) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p <your_project_name_here> up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
    - "8500:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  links:
    - graphite
    - mongoAuthentication
    - "ambassador:nginx"
  expose:
    - "8587"

valet:
  build: valet/
  links:
    - graphite
    - mongoValet
    - "ambassador:nginx"
  expose:
    - "8585"

maintenance:
  build: maintenance/
  links:
    - graphite
    - mongoMaintenance
    - "ambassador:nginx"
  expose:
    - "8583"

vehicle:
  build: vehicle/
  links:
    - graphite
    - mongoVehicle
    - "ambassador:nginx"
  expose:
    - "8581"

nginx:
  build: nginx/
  ports:
    - "80:80"
  links:
    - "ambassador:vehicle"
    - "ambassador:valet"
    - "ambassador:authentication"
    - "ambassador:maintenance"

ambassador:
  image: cpuguy83/docker-grand-ambassador
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name jenkins_nginx_1 -name jenkins_authentication_1 -name jenkins_maintenance_1 -name jenkins_valet_1 -name jenkins_vehicle_1"
Running the docker-compose.yaml file pulls these (5) Docker Hub images:
REPOSITORY                         TAG        IMAGE ID
==========                         ===        ========
java                               8u45-jdk   1f80eb0f8128
nginx                              latest     319d2015d149
mongo                              latest     66b43e3cae49
hopsoft/graphite-statsd            latest     b03e373279e8
cpuguy83/docker-grand-ambassador   latest     c635b1699f78
And, build these (5) Docker images from Dockerfiles:
REPOSITORY               TAG      IMAGE ID
==========               ===      ========
jenkins_nginx            latest   0b53a9adb296
jenkins_vehicle          latest   d80f79e605f4
jenkins_valet            latest   cbe8bdf909b8
jenkins_maintenance      latest   15b8a94c00f4
jenkins_authentication   latest   ef0345369079
And, build these (11) Docker containers from the corresponding images:
CONTAINER ID   IMAGE                              NAME
============   =====                              ====
17992acc6542   jenkins_nginx                      jenkins_nginx_1
bcbb2a4b1a7d   jenkins_vehicle                    jenkins_vehicle_1
4ac1ac69f230   mongo:latest                       jenkins_mongoVehicle_1
bcc8b9454103   jenkins_valet                      jenkins_valet_1
7c1794ca7b8c   jenkins_maintenance                jenkins_maintenance_1
2d0e117fa5fb   jenkins_authentication             jenkins_authentication_1
d9146a1b1d89   hopsoft/graphite-statsd:latest     jenkins_graphite_1
56b34cee9cf3   cpuguy83/docker-grand-ambassador   jenkins_ambassador_1
a72199d51851   mongo:latest                       jenkins_mongoAuthentication_1
307cb2c01cc4   mongo:latest                       jenkins_mongoMaintenance_1
4e0807431479   mongo:latest                       jenkins_mongoValet_1
Since we are connected to the brand new Docker Machine ‘test’ VM, there are no locally cached Docker images. All images required to build the containers must be pulled from Docker Hub. The build time will be 3-4x as long as the last post’s build, which used the cached Docker images on the Jenkins CI machine.
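If build time becomes a problem, one option is to reuse the 'test' VM across builds instead of recreating it, so the image cache survives. A sketch only, trading some isolation for speed; it is not part of the job above:

# reuse an existing 'test' VM if present; otherwise create it
docker-machine start test 2>/dev/null || \
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"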
Integration Testing
As in the last post, once the containers are built and configured, we run a series of expanded integration tests to confirm the containers and services are working. One difference: this time, we will pass a parameter to the test bash script:
sh tests.sh $(docker-machine ip test)
The parameter is the hostname used in the test's RESTful service calls. The parameter, $(docker-machine ip test), is translated to the IP address of the 'test' VM; in our example, 192.168.99.100. If a parameter is not provided, the test script's hostname variable will use the default value of localhost ('hostname=${1-'localhost'}').
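The ${1-'localhost'} syntax is standard shell default-value expansion; a standalone demonstration, saved as a hypothetical demo.sh:

#!/bin/sh
# use the first positional parameter if supplied, else default to localhost
hostname=${1-'localhost'}
echo "testing against: ${hostname}"
# sh demo.sh                  -> testing against: localhost
# sh demo.sh 192.168.99.100   -> testing against: 192.168.99.100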
Another change since the last post: the project now uses Nginx, the free, open-source, high-performance HTTP server and reverse proxy, as a pseudo-API gateway. Instead of calling each microservice directly, using its individual port (e.g., port 8581 for the Vehicle microservice), all traffic is sent through Nginx on the default HTTP port 80, for example:
http://192.168.99.100/vehicles/utils/ping.json
http://192.168.99.100/jwts?apiKey=Z1nXG8JGKwvGlzQgPLwQdndW&secret=ODc4OGNiNjE5ZmI
http://192.168.99.100/vehicles/558f3042e4b0e562c03329ad
Internal traffic between the microservices and MongoDB, and between the microservices and Graphite, is still direct, using Docker container linking. Traffic between the microservices and Nginx, in both directions, is handled by an ambassador container, a common Docker pattern. Nginx acts as a reverse proxy for the microservices. Using Nginx brings us closer to a true production-like environment for testing the services.
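Since only Nginx publishes a host port in the Compose file above, you can confirm the gateway is the sole entry point; a sketch:

host=$(docker-machine ip test)
# via the Nginx gateway on port 80: succeeds
curl -s "http://${host}/vehicles/utils/ping.json"
# directly against the service port: fails, since 8581 is only 'expose'd
# inside the Docker network and never published to the host
curl -s --max-time 5 "http://${host}:8581/vehicles/utils/ping.json" \
  || echo "direct access refused, as expected"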
#!/bin/sh

########################################################################
#
# title:          Virtual-Vehicles Project Integration Tests
# author:         Gary A. Stafford (https://programmaticponderings.com)
# url:            https://github.com/garystafford/virtual-vehicles-docker
# description:    Performs integration tests on the Virtual-Vehicles
#                 microservices
# to run:         sh tests.sh
# docker-machine: sh tests.sh $(docker-machine ip test)
#
########################################################################

echo --- Integration Tests ---
echo

### VARIABLES ###
hostname=${1-'localhost'} # use input param or default to localhost
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized
make="Test"
model="Foo"

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}
echo make: ${make}
echo model: ${model}
echo

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
--url "${url}" \
| grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
| sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}
echo

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"year\": 2015,
    \"make\": \"${make}\",
    \"model\": \"${model}\",
    \"color\": \"White\",
    \"type\": \"Sedan\",
    \"mileage\": 250
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}/vehicles?filter=make::${make}|model::${model}&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| tail -1 \
| sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}
echo

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new maintenance record in the response body with an 'id'"
url="http://${hostname}/maintenances"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"vehicleId\": \"${id}\",
    \"serviceDateTime\": \"2015-06-27T15:00:00.400Z\",
    \"mileage\": 1000,
    \"type\": \"Test Maintenance\",
    \"notes\": \"This is a test note.\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new valet transaction in the response body with an 'id'"
url="http://${hostname}/valets"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"vehicleId\": \"${id}\",
    \"dateTimeIn\": \"2015-06-27T15:00:00.400Z\",
    \"parkingLot\": \"Test Parking Ramp\",
    \"parkingSpot\": 10,
    \"notes\": \"This is a test note.\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo
Tear Down
In true continuous integration fashion, once the integration tests have completed, we tear down the project by removing the VirtualBox 'test' VM. This also removes all images and containers.
docker-machine stop test && \
docker-machine rm test
Jenkins CI Console Output
Below is an abridged sample of what the Jenkins CI console output will look like from a successful ‘build’.
Started by user anonymous
Building in workspace /var/lib/jenkins/jobs/Virtual-Vehicles_Docker_Machine/workspace
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/garystafford/virtual-vehicles-docker.git # timeout=10
Fetching upstream changes from https://github.com/garystafford/virtual-vehicles-docker.git
 > git --version # timeout=10
using GIT_SSH to set credentials
using .gitcredentials to set credentials
 > git config --local credential.helper store --file=/tmp/git7588068314920923143.credentials # timeout=10
 > git -c core.askpass=true fetch --tags --progress https://github.com/garystafford/virtual-vehicles-docker.git +refs/heads/*:refs/remotes/origin/*
 > git config --local --remove-section credential # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision f473249f0f70290b75cb320909af1f57cdaf2aa5 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f473249f0f70290b75cb320909af1f57cdaf2aa5
 > git rev-list f473249f0f70290b75cb320909af1f57cdaf2aa5 # timeout=10
[workspace] $ /bin/sh -xe /tmp/hudson8587699987350884629.sh
+ docker -v
Docker version 1.7.0, build 0baf609
+ docker-compose -v
docker-compose version: 1.3.1
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
+ docker-machine -v
docker-machine version 0.3.0 (0a251fe)
+ docker-machine stop test
+ docker-machine rm test
Successfully removed test
+ docker-machine create --driver virtualbox test
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env test
+ docker-machine env test
+ eval export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/var/lib/jenkins/.docker/machine/machines/test"
export DOCKER_MACHINE_NAME="test"
# Run this command to configure your shell:
# eval "$(docker-machine env test)"
+ export DOCKER_TLS_VERIFY=1
+ export DOCKER_HOST=tcp://192.168.99.100:2376
+ export DOCKER_CERT_PATH=/var/lib/jenkins/.docker/machine/machines/test
+ export DOCKER_MACHINE_NAME=test
+ docker-compose -p jenkins up -d
Pulling mongoValet (mongo:latest)...
latest: Pulling from mongo

...Abridged output...

+ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
+ docker images
REPOSITORY                         TAG        IMAGE ID       CREATED          VIRTUAL SIZE
jenkins_vehicle                    latest     fdd7f9d02ff7   2 seconds ago    837.1 MB
jenkins_valet                      latest     8a592e0fe69a   4 seconds ago    837.1 MB
jenkins_maintenance                latest     5a4a44e136e5   5 seconds ago    837.1 MB
jenkins_authentication             latest     e521e067a701   7 seconds ago    838.7 MB
jenkins_nginx                      latest     085d183df8b4   25 minutes ago   132.8 MB
java                               8u45-jdk   1f80eb0f8128   12 days ago      816.4 MB
nginx                              latest     319d2015d149   12 days ago      132.8 MB
mongo                              latest     66b43e3cae49   12 days ago      260.8 MB
hopsoft/graphite-statsd            latest     b03e373279e8   4 weeks ago      740 MB
cpuguy83/docker-grand-ambassador   latest     c635b1699f78   5 months ago     525.7 MB
+ docker ps -a
CONTAINER ID   IMAGE                              COMMAND                CREATED          STATUS          PORTS                                      NAMES
4ea39fa187bf   jenkins_vehicle                    "java -classpath .:c   2 seconds ago    Up 1 seconds    8581/tcp                                   jenkins_vehicle_1
b248a836546b   mongo:latest                       "/entrypoint.sh mong   3 seconds ago    Up 3 seconds    27017/tcp                                  jenkins_mongoVehicle_1
0c94e6409afc   jenkins_valet                      "java -classpath .:c   4 seconds ago    Up 3 seconds    8585/tcp                                   jenkins_valet_1
657f8432004b   jenkins_maintenance                "java -classpath .:c   5 seconds ago    Up 5 seconds    8583/tcp                                   jenkins_maintenance_1
8ff6de1208e3   jenkins_authentication             "java -classpath .:c   7 seconds ago    Up 6 seconds    8587/tcp                                   jenkins_authentication_1
c799d5f34a1c   hopsoft/graphite-statsd:latest     "/sbin/my_init"        12 minutes ago   Up 12 minutes   2003/tcp, 8125/udp, 0.0.0.0:8500->80/tcp   jenkins_graphite_1
040872881b25   jenkins_nginx                      "nginx -g 'daemon of   25 minutes ago   Up 25 minutes   0.0.0.0:80->80/tcp, 443/tcp                jenkins_nginx_1
c6a2dc726abc   mongo:latest                       "/entrypoint.sh mong   26 minutes ago   Up 26 minutes   27017/tcp                                  jenkins_mongoAuthentication_1
db22a44239f4   mongo:latest                       "/entrypoint.sh mong   26 minutes ago   Up 26 minutes   27017/tcp                                  jenkins_mongoMaintenance_1
d5fd655474ba   cpuguy83/docker-grand-ambassador   "/usr/bin/grand-amba   26 minutes ago   Up 26 minutes                                              jenkins_ambassador_1
2b46bd6f8cfb   mongo:latest                       "/entrypoint.sh mong   31 minutes ago   Up 31 minutes   27017/tcp                                  jenkins_mongoValet_1
+ sleep 30
+ docker-machine ip test
+ sh tests.sh 192.168.99.100

--- Integration Tests ---

hostname: 192.168.99.100
application: Test API Client 1435585062
secret: NGM5OTI5ODAxMTZ
make: Test
model: Foo

TEST: GET request should return 'true' in the response body
http://192.168.99.100/vehicles/utils/ping.json
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
RESULT: pass

TEST: POST request should return a new client in the response body with an 'id'
http://192.168.99.100/clients
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84    847    225 --:--:-- --:--:-- --:--:--   849
RESULT: pass

SETUP: Get the new client's apiKey for next test
http://192.168.99.100/clients
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84  20482   5461 --:--:-- --:--:-- --:--:-- 21000
apiKey: sv1CA9NdhmXh72NrGKBN3Abb

TEST: GET request should return a new jwt in the response body
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0    686      0 --:--:-- --:--:-- --:--:--   687
RESULT: pass

SETUP: Get a new jwt using the new client for the next test
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0  16843      0 --:--:-- --:--:-- --:--:-- 17076
jwt: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhcGkudmlydHVhbC12ZWhpY2xlcy5jb20iLCJhcGlLZXkiOiJzdjFDQTlOZGhtWGg3Mk5yR0tCTjNBYmIiLCJleHAiOjE0MzU2MjEwNjMsImFpdCI6MTQzNTU4NTA2M30.WVlhIhUcTz6bt3iMVr6MWCPIDd6P0aDZHl_iUd6AgrM

TEST: POST request should return a new vehicle in the response body with an 'id'
http://192.168.99.100/vehicles
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   123    0     0  100   123      0    612 --:--:-- --:--:-- --:--:--   611
100   419    0   296  100   123    649    270 --:--:-- --:--:-- --:--:--   649
RESULT: pass

SETUP: Get id from new vehicle for the next test
http://192.168.99.100/vehicles?filter=make::Test|model::Foo&limit=1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   377    0   377    0     0   5564      0 --:--:-- --:--:-- --:--:--  5626
vehicle id: 55914a28e4b04658471dc03a

TEST: GET request should return a vehicle in the response body with the requested 'id'
http://192.168.99.100/vehicles/55914a28e4b04658471dc03a
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   296    0   296    0     0   7051      0 --:--:-- --:--:-- --:--:--  7219
RESULT: pass

TEST: POST request should return a new maintenance record in the response body with an 'id'
http://192.168.99.100/maintenances
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
RESULT: pass

TEST: POST request should return a new valet transaction in the response body with an 'id'
http://192.168.99.100/valets
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   561    0   368  100   193    514    269 --:--:-- --:--:-- --:--:--   514
RESULT: pass
+ docker-machine stop test
+ docker-machine rm test
Successfully removed test
Finished: SUCCESS
Graphite and StatsD
If you've chosen to build the Virtual-Vehicles Docker project outside of Jenkins CI, then in addition to running the test script and using applications like Postman to test the Virtual-Vehicles RESTful API, you may also use Graphite and StatsD. RestExpress comes fully configured out of the box with Graphite integration, through its Metrics plugin. The Virtual-Vehicles RESTful API example is configured to use port 8500 to access the Graphite UI, and uses the hopsoft/graphite-statsd Docker image to build the Graphite/StatsD Docker container.
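A quick way to confirm the Graphite container is up; a sketch, assuming the 'test' machine is still running (pushing a test metric from the host would additionally require publishing StatsD's 8125/udp port, which the Compose file above does not do):

host=$(docker-machine ip test)
# expect HTTP 200 from the Graphite UI on the published port
curl -s -o /dev/null -w '%{http_code}\n' "http://${host}:8500"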
The Complete Process
The diagram below shows the entire Virtual-Vehicles continuous integration and delivery process, start to finish, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, RestExpress, and VirtualBox.
Continuous Integration and Delivery of Microservices using Jenkins CI, Maven, and Docker Compose
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development on June 22, 2015
Continuously build, test, package and deploy a microservices-based, multi-container, Java EE application using Jenkins CI, Maven, Docker, and Docker Compose
Previous Posts
In the previous 3-part series, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB, we developed a set of Java EE-based microservices, which formed the Virtual-Vehicles REST API. In Part One of this series, we introduced the concepts of a RESTful API and microservices, using the vehicle-themed Virtual-Vehicles REST API example. In Part Two, we gained a basic understanding of how RestExpress works to build microservices, and discovered how to get the microservices example up and running. Lastly, in Part Three, we explored how to use tools such as Postman, along with the API documentation, to test our microservices.
Introduction
In this post, we will demonstrate how to use Jenkins CI, Maven, and Docker Compose to take our set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated and orchestrated Docker containers. We will build and test the microservices, Docker images, and Docker containers. We will deploy the containers and perform integration tests to ensure the services are functioning as expected, within the containers. The milestones in our process will be:
- Continuous Integration: Using Jenkins CI and Maven, automatically compile, test, and package the individual microservices
- Deployment: Using Jenkins, automatically deploy the build artifacts to the new Virtual-Vehicles Docker project
- Containerization: Using Jenkins and Docker Compose, automatically build the Docker images and containers from the build artifacts and a set of Dockerfiles
- Integration Testing: Using Jenkins, perform automated integration tests on the containerized services
- Tear Down: Using Jenkins, automatically stop and remove the containers and images
For brevity, we will deploy the containers directly to the Jenkins CI Server, where they were built. In an upcoming post, I will demonstrate how to use the recently released Docker Machine to host the containers within an isolated VM.
Note: All code for this post is available on GitHub, release version v1.0.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v1.0.0’ command).
Build the Microservices
In order to host the Virtual-Vehicles microservices, we must first compile the source code and produce build artifacts. In the case of the Virtual-Vehicles example, the build artifacts are a JAR file and at least one environment-specific properties file. In Part Two of our previous series, we compiled and produced JAR files for our microservices from the command line using Maven.
To automatically build our Maven-based microservices project in this post, we will use Jenkins CI and the Jenkins Maven Project Plugin. The Virtual-Vehicles microservices are bundled together into what Maven considers a multi-module project, which is defined by a parent POM referring to one or more sub-modules. Using the concept of project inheritance, Jenkins will compile each of the four microservices from the project's single parent POM file. Note the four modules at the end of the pom.xml below, corresponding to each microservice.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <name>Virtual-Vehicles API</name>
    <description>Virtual-Vehicles API
        https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Example_3
    </description>
    <url>https://github.com/garystafford/virtual-vehicle-demo</url>
    <groupId>com.example</groupId>
    <artifactId>Virtual-Vehicles-API</artifactId>
    <version>1</version>
    <packaging>pom</packaging>

    <modules>
        <module>Maintenance</module>
        <module>Valet</module>
        <module>Vehicle</module>
        <module>Authentication</module>
    </modules>
</project>
Below is the view of the four individual Maven modules, within the single Jenkins Maven job.
Each microservice module contains a Maven POM file. The POM files use the Apache Maven Compiler Plugin to compile code, and the Apache Maven Shade Plugin to create 'uber-jars' from the compiled code. The Shade plugin packages the artifact in an uber-jar, including its dependencies. This allows us to independently host each service in its own container, without external dependencies. Lastly, using the Apache Maven Resources Plugin, Maven copies the environment properties files from the source directory to the 'target' directory, which contains the JAR file. To accomplish these Maven tasks, all Jenkins needs to do is run a series of Maven life-cycle goals: 'clean install package validate'.
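Outside of Jenkins, the equivalent command-line build is simply Maven run against the parent POM; a sketch, assuming the repository's root directory and module layout:

cd virtual-vehicle-demo
mvn clean install package validate
# the Shade plugin leaves an uber-jar in each module's target directory,
# e.g. (path is an assumption):
ls Vehicle/target/*.jar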
Once the code is compiled and packaged into uber-jars, Jenkins uses the Artifact Deployer Plugin to deploy the build artifacts from Jenkins’ workspace to a remote location. In our example, we will copy the artifacts to a second GitHub project, from which we will containerize our microservices.
Shown below are the two Jenkins jobs. The first one compiles, packages, and deploys the build artifacts. The second job containerizes the services, databases, and monitoring application.
Shown below are two screen grabs showing how we clone the Virtual-Vehicles GitHub repository and build the project using the main parent pom.xml file. Building the parent POM, in turn, builds all the microservice modules, using their POM files.
Deploy Build Artifacts
Once we have successfully compiled, tested (if we had unit tests with RestExpress), and packaged the build artifacts as uber-jars, we deploy each set of build artifacts to a subfolder within the Virtual-Vehicles Docker GitHub project, using Jenkins' Artifact Deployer Plugin. Shown below is the deployment configuration for just the Vehicles microservice. This deployment pattern is repeated for each service, within the Jenkins job configuration.
Jenkins' Artifact Deployer Plugin also provides the convenient ability to view and redeploy the artifacts. Below, you see a list of the microservice artifacts deployed to the Docker project by Jenkins.
Build and Compose the Containers
The second Jenkins job clones the Virtual-Vehicles Docker GitHub repository.
The second Jenkins job executes commands from the shell prompt. The first set of commands uses the Docker CLI to remove any existing images and containers that might have been left over from previous job failures. The second set of commands uses the Docker Compose CLI to execute the project's Docker Compose YAML file. The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.
# remove all images and containers from this build
docker ps -a --no-trunc  | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc  | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
# set DOCKER_HOST environment variable
export DOCKER_HOST=tcp://localhost:4243

# record installed version of Docker and Maven with each build
mvn --version && \
docker --version && \
docker-compose --version

# use docker-compose to build new images and containers
docker-compose -p jenkins up -d

# list virtual-vehicles related images
docker images | grep 'jenkins' | awk '{print $0}'

# list all containers
docker ps -a | grep 'jenkins\|mongo_\|graphite' | awk '{print $0}'
########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Builds (4) images, pulls (2) images, and builds (9) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p virtualvehicles up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
    - "8481:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  ports:
    - "8587:8587"
  links:
    - graphite
    - mongoAuthentication

valet:
  build: valet/
  ports:
    - "8585:8585"
  links:
    - graphite
    - mongoValet
    - authentication

maintenance:
  build: maintenance/
  ports:
    - "8583:8583"
  links:
    - graphite
    - mongoMaintenance
    - authentication

vehicle:
  build: vehicle/
  ports:
    - "8581:8581"
  links:
    - graphite
    - mongoVehicle
    - authentication
Running the docker-compose.yaml file produces the following images:
REPOSITORY               TAG      IMAGE ID
==========               ===      ========
jenkins_vehicle          latest   a6ea4dfe7cf5
jenkins_valet            latest   162d3102d43c
jenkins_maintenance      latest   0b6f530cc968
jenkins_authentication   latest   45b50487155e
And, these containers:
CONTAINER ID   IMAGE                            NAME
============   =====                            ====
2b4d5a918f1f   jenkins_vehicle                  jenkins_vehicle_1
492fbd88d267   mongo:latest                     jenkins_mongoVehicle_1
01f410bb1133   jenkins_valet                    jenkins_valet_1
6a63a664c335   jenkins_maintenance              jenkins_maintenance_1
00babf484cf7   jenkins_authentication           jenkins_authentication_1
548a31034c1e   hopsoft/graphite-statsd:latest   jenkins_graphite_1
cdc18bbb51b4   mongo:latest                     jenkins_mongoAuthentication_1
6be5c0558e92   mongo:latest                     jenkins_mongoMaintenance_1
8b71d50a4b4d   mongo:latest                     jenkins_mongoValet_1
Integration Testing
Once the containers have been successfully built and configured, we run a series of integration tests to confirm the services are up and running. We refer to these tests as integration tests because they test the interaction of multiple components. Integration tests were covered in the last post, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 3.
Note the short pause I have inserted before running the tests. Docker Compose does an excellent job of accounting for the required start-up order of the containers to avoid race conditions (see my previous post). However, depending on the speed of the host box, there is still a start-up period before the containers' processes are up, running, and ready to receive traffic. Apache Log4j 2 and MongoDB start-up, in particular, take extra time. I've seen the containers take as long as 1-2 minutes on a slow box to fully start. Without the pause, the tests fail with various errors, since the containers' processes are not all running.
sleep 15
sh tests.sh -v
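A fixed sleep is simple, but on a slow box you may prefer polling; a sketch that waits on the Vehicle service's ping endpoint before firing the tests:

# poll until the service answers (8581 is the Vehicle service's published port)
until curl -s --max-time 2 "http://localhost:8581/vehicles/utils/ping.json" \
    | grep -q true; do
  echo "waiting for services to start..."
  sleep 3
done
sh tests.sh -v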
The bash-based tests below just scratch the surface as a complete set of integration tests. However, they demonstrate an effective multi-stage testing pattern for handling the complex nature of RESTful service request requirements. The tests build upon each other. After setting up some variables, the tests register a new API client. Then, they use the new client's API key to obtain a JWT. The tests then use the JWT to authenticate themselves, and create a new vehicle. Finally, they use the new vehicle's id and the JWT to verify the existence of the new vehicle.
Although some may consider testing with bash somewhat primitive, the script demonstrates the effectiveness of bash's curl, grep, sed, and awk, along with regular expressions, in testing our RESTful services.
#!/bin/sh

########################################################################
#
# title:       Virtual-Vehicles Project Integration Tests
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Performs integration tests on the Virtual-Vehicles
#              microservices
# to run:      sh tests.sh -v
#
########################################################################

echo --- Integration Tests ---

### VARIABLES ###
hostname="localhost"
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}:8581/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
--url "${url}" \
| grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}:8587/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}:8587/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
| sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}:8581/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d '{
    "year": 2015,
    "make": "Test",
    "model": "Foo",
    "color": "White",
    "type": "Sedan",
    "mileage": 250
}' --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}:8581/vehicles?filter=make::Test|model::Foo&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| tail -1 \
| sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}:8581/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
Since our tests are just a bash script, they can also be run separately from the command line, as in the screen grab below. The output, except for the colored text, is identical to what appears in the Jenkins console output.
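For example, from a terminal on the Jenkins host:

sh tests.sh -v
# or, with shell tracing enabled, to watch each command execute
sh -x tests.sh -v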
Tear Down
Once the integration tests have completed, we 'tear down' the project by removing the Virtual-Vehicles images and containers. We simply repeat the first commands we ran at the start of the Jenkins build phase. You could choose to remove the tear-down step, and use this job as a way to simply build and start your multi-container application.
# remove all images and containers from this build
docker ps -a --no-trunc  | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc  | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
The Complete Process
The diagram below shows the entire process, start to finish.
Preventing Race Conditions Between Containers in ‘Dockerized’ MEAN Applications
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Client-Side Development, DevOps, Enterprise Software Development, Software Development on November 30, 2014
Introduction
The MEAN stack has gained enormous popularity as a reliable and scalable full-stack JavaScript solution. MEAN web applications have four main components: MongoDB, Express, AngularJS, and Node.js. MEAN web applications often include other components, such as Mongoose, Passport, Twitter Bootstrap, Yeoman, Grunt or Gulp, and Bower. The two most popular ready-made MEAN application templates are MEAN.io from Linnovate, and MEAN.JS. Both offer a ready-made application framework for building MEAN applications.
Docker has also gained enormous popularity. According to Docker, it is an open platform that enables developers and sysadmins to quickly assemble applications from components. 'Dockerized' apps are completely portable and can run anywhere.
Docker is an ideal solution for MEAN applications. Being a full-stack JavaScript solution, MEAN applications are based on a multi-tier architecture. The MEAN application’s data tier contains the MongoDB noSQL database. The application tier (logic tier) contains Node.js and Express. The application tier can also contain other components, such as Mongoose, a Node.js Object Document Mapper (ODM) for MongoDB, and Passport, an authentication middleware for Node.js. Lastly, the presentation tier (front end) has client-side tools, such as AngularJS and Twitter Bootstrap.
Using Docker, we can 'Dockerize', or containerize, each tier of a MEAN application, mirroring the physical architecture to which we would deploy a MEAN application in a Production environment. Just as we would always run a separate database server or servers for MongoDB, we can isolate MongoDB into a Docker container. Likewise, we can isolate the Node.js web server, along with the rest of the components (Mongoose, Express, Passport) on the application and presentation tiers, into a Docker container. We can easily add more containers for more functionality, such as load-balancing and reverse proxies (Nginx), and caching (Redis and Memcached).
The MEAN.JS project has been very progressive in implementing Docker, offering a more realistic environment for development and testing. To automate the creation of multiple, linked Docker containers, the MEAN.JS project has also adopted Fig, a tool that provides quick, automated creation of multiple, linked Docker containers.
Using Docker and Fig, a developer can pull down ready-made base containers from Docker Hub, configure the containers as part of a multi-tier application environment, deploy their MEAN application components to the containers, and start the applications, all with a short list of commands.
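In practice, that 'short list of commands' amounts to little more than the following; a sketch, consolidated from the steps detailed later in this post:

git clone https://github.com/meanjs/mean.git && cd mean
npm install && bower install
fig build && fig up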
Note, I said development and test, not production. To extend Docker and Fig to production, you can use tools such as Flocker. Flocker, by ClusterHQ, can scale the single-host Fig environment to multiple containers on multiple machines (hosts).
Race Conditions
Docker containers have a very fast start-up time, compared to other technologies, such as VMs (virtual machines). However, based on their contents, containers take varying amounts of time to fully start-up. In most multi-tier applications, there is a required start-up sequence for components (tiers, servers, applications). For example, in a database-driven application, like a MEAN application, you should make sure the MongoDB database server is up and running, before starting the application. Although this is obvious, it becomes harder to guarantee the order in which components will start-up, when you leverage an asynchronous, automated, continuous delivery solution like Docker with Fig.
When component dependencies are not met because another container has not fully started, we can refer to this as a race condition. I have found that with most multi-container MEAN applications, the slower-starting MongoDB data container prevents the quicker-starting Node.js web-application container from properly starting the MEAN application. In other words, the application crashes.
Fixing Race Conditions with MEAN.JS Applications
In order to eliminate race conditions, we need to script our start-up sequence to guarantee the order in which components will start, ensuring the overall application starts correctly. Specifically, in this post, we will eliminate the potential race condition between the MongoDB data container (db_1) and the Node.js web-application container (web_1). At the same time, we will fix a small error in the existing MEAN.JS project that prevents proper start-up of the 'dockerized' MEAN.JS application.
Download and Build MEAN.JS App
Clone the meanjs/mean repository, and install npm and bower packages.
git clone https://github.com/meanjs/mean.git
cd mean
npm install
bower install
Modify MEAN.JS App
- Add the wait_mongo_start.sh start-up script to the root of the mean project.
- Modify the Dockerfile: replace CMD ["grunt"] with CMD /bin/sh /home/mean/wait_mongo_start.sh
- Optionally, add the fig_start.sh clean-up/start-up script to the root of the mean project.
Fix Existing Issue with MEAN.JS App When Using Docker and Fig
The existing MEAN.JS application references localhost in the development configuration (config/env/development.js), which is the configuration the MEAN.JS application uses at start-up. The MongoDB data container (db_1) is not running on localhost; it is running at an IP address assigned by Docker. To discover that IP address, we must reference an environment variable (DB_1_PORT_27017_TCP_ADDR), created by Docker within the Node.js web-application container (web_1).
- Modify the config/env/development.js file: add var DB_HOST = process.env.DB_1_PORT_27017_TCP_ADDR || 'localhost';
- Modify the config/env/development.js file: change db: 'mongodb://localhost/mean-dev', to db: 'mongodb://' + DB_HOST + '/mean-dev',
Start the Application
Start the application using Fig commands or the clean-up/start-up script (sh fig_start.sh):
- Run fig build && fig up
- Alternatively, run sh fig_start.sh
The Details…
The CMD instruction is the last step in the Dockerfile. It sets the wait_mongo_start.sh script to execute in the Node.js web-application container (web_1) when the container starts. This script prevents the grunt command from running until nc (netcat) succeeds at connecting to the IP address and port of mongod, the primary daemon process for the MongoDB system, on the MongoDB data container (db_1). The script uses a 3-second polling interval, which can be modified if necessary.
#!/bin/sh

polling_interval=3

# optional, view db_1 container-related env vars
#env | grep DB_1 | sort

echo "wait for mongo to start first..."

# wait until mongo is running in db_1 container
until nc -z $DB_1_PORT_27017_TCP_ADDR $DB_1_PORT_27017_TCP_PORT
do
  echo "waiting for $polling_interval seconds..."
  sleep $polling_interval
done

# start node app
grunt
The environment variables referenced in the script are created automatically by Docker in the Node.js web-application container (web_1). They are shown in the screen grab below. You can discover these variables yourself by uncommenting the env | grep DB_1 | sort line, above.
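You can also inspect the variables directly from the running container; a sketch, assuming Fig's default naming gives the web container the name mean_web_1:

# list the Docker-created link variables from inside web_1
docker exec mean_web_1 env | grep DB_1 | sort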
The Dockerfile modification is highlighted below.
FROM dockerfile/nodejs

MAINTAINER Matthias Luebken, matthias@catalyst-zero.com

WORKDIR /home/mean

# Install Mean.JS Prerequisites
RUN npm install -g grunt-cli
RUN npm install -g bower

# Install Mean.JS packages
ADD package.json /home/mean/package.json
RUN npm install

# Manually trigger bower. Why doesn't this work via npm install?
ADD .bowerrc /home/mean/.bowerrc
ADD bower.json /home/mean/bower.json
RUN bower install --config.interactive=false --allow-root

# Make everything available for start
ADD . /home/mean

# Currently only works for development
ENV NODE_ENV development

# Port 3000 for server
# Port 35729 for livereload
EXPOSE 3000 35729

CMD /bin/sh /home/mean/wait_mongo_start.sh
The config/env/development.js modifications are highlighted below (abridged code).
'use strict';

// used when building application using fig and Docker
var DB_HOST = process.env.DB_1_PORT_27017_TCP_ADDR || 'localhost';

module.exports = {
    db: 'mongodb://' + DB_HOST + '/mean-dev',
    log: {
        // Can specify one of 'combined', 'common', 'dev', 'short', 'tiny'
        format: 'dev',
        // Stream defaults to process.stdout
        // Uncomment to enable logging to a log on the file system
        options: {
            //stream: 'access.log'
        }
    },
    ...
The fig_start.sh file is optional and not part of the solution for the race condition. Instead of repeating multiple commands, I prefer running a single script, which executes the commands consistently. Note, the commands in this script remove ALL 'Exited' containers and untagged (<none>) images.
#!/bin/sh

# remove all exited containers
echo "Removing all 'Exited' containers..."
docker rm -f $(docker ps -a -q --filter 'status=Exited') > /dev/null 2>&1

# remove all untagged (<none>) images
echo "Removing all untagged images..."
docker rmi $(docker images | grep "<none>" | awk '{print $3}') > /dev/null 2>&1

# build and start containers with fig
fig build && fig up
MEAN Application Start-Up Screen Grabs
Below are screen grabs showing the MEAN.JS application starting up, both before and after the changes were implemented.
Install Latest Node.js and npm in a Docker Container
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Client-Side Development, DevOps, Enterprise Software Development on November 17, 2014