Posts Tagged Java

Spring Music Revisited: Java-Spring-MongoDB Web App with Docker 1.12

Build, test, deploy, and monitor a multi-container, MongoDB-backed, Java Spring web application, using the new Docker 1.12.

Spring Music Infrastructure

Introduction

**This post and associated project code were updated 9/3/2016 to use Tomcat 8.5.4 with OpenJDK 8.**

This post and the post’s example project represent an update to a previous post, Build and Deploy a Java-Spring-MongoDB Application using Docker. This new post incorporates many improvements made in Docker 1.12, including the use of the new Docker Compose v2 YAML format. The post’s project was also updated to use Filebeat with ELK, as opposed to Logspout, which was used previously.

In this post, we will demonstrate how to build, test, deploy, and manage a Java Spring web application, hosted on Apache Tomcat, load-balanced by NGINX, monitored by ELK with Filebeat, and all containerized with Docker.

We will use a sample Java Spring application, Spring Music, available on GitHub from Cloud Foundry. The Spring Music sample record album collection application was originally designed to demonstrate the use of database services on Cloud Foundry, using the Spring Framework. Instead of Cloud Foundry, we will host the Spring Music application locally, using Docker on VirtualBox, and optionally on AWS.

All files necessary to build this project are stored on the docker_v2 branch of the garystafford/spring-music-docker repository on GitHub. The Spring Music source code is stored on the springmusic_v2 branch of the garystafford/spring-music repository, also on GitHub.

Spring Music Application

Application Architecture

The Java Spring Music application stack contains the following technologies: Java, Spring Framework, AngularJS, Bootstrap, jQuery, NGINX, Apache Tomcat, MongoDB, the ELK Stack, and Filebeat. Testing frameworks include the Spring MVC Test Framework, Mockito, Hamcrest, and JUnit.

A few changes were made to the original Spring Music application to make it work for this demonstration, including:

  • Move from Java 1.7 to 1.8 (including newer Tomcat version)
  • Add unit tests for Continuous Integration demonstration purposes
  • Modify MongoDB configuration class to work with non-local, containerized MongoDB instances (see the sketch following this list)
  • Add Gradle warNoStatic task to build WAR without static assets
  • Add Gradle zipStatic task to ZIP up the application’s static assets for deployment to NGINX
  • Add Gradle zipGetVersion task with a versioning scheme for build artifacts
  • Add context.xml file and MANIFEST.MF file to the WAR file
  • Add Log4j RollingFileAppender appender to send log entries to Filebeat
  • Update versions of several dependencies, including Gradle, Spring, and Tomcat
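
The MongoDB configuration change noted in the list above might look something like the following sketch. This is not the Spring Music project’s actual code; the property names, the default host name (assumed to be the MongoDB container’s name), and the database name are illustrative assumptions.

import com.mongodb.MongoClient;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;

// Hypothetical configuration class: connects to a containerized MongoDB
// instance whose host name is supplied by a property or environment variable,
// instead of assuming MongoDB is running on localhost.
@Configuration
public class MongoConfig {

    @Value("${mongo.host:mongodb}") // e.g. the MongoDB service/container name
    private String mongoHost;

    @Value("${mongo.port:27017}")
    private int mongoPort;

    @Bean
    public MongoClient mongoClient() {
        return new MongoClient(mongoHost, mongoPort);
    }

    @Bean
    public MongoTemplate mongoTemplate(MongoClient mongoClient) {
        return new MongoTemplate(mongoClient, "music"); // database name is an assumption
    }
}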

We will use the following technologies to build, publish, deploy, and host the Java Spring Music application: Gradle, git, GitHub, Travis CI, Oracle VirtualBox, Docker, Docker Compose, Docker Machine, Docker Hub, and optionally, Amazon Web Services (AWS).

NGINX
To increase performance, the Spring Music web application’s static content will be hosted by NGINX. The application’s WAR file will be hosted by Apache Tomcat 8.5.4. Requests for non-static content will be proxied through NGINX on the front-end, to a set of three load-balanced Tomcat instances on the back-end. To further increase application performance, NGINX will also be configured for browser caching of the static content. In many enterprise environments, the use of a Java EE application server, like Tomcat, is still common.

Reverse proxying and caching are configured through NGINX’s default.conf file, in the server configuration section:

The three Tomcat instances will be manually configured for load-balancing using NGINX’s default round-robin load-balancing algorithm. This is configured through the default.conf file, in the upstream configuration section:

Client requests are received through port 80 on the NGINX server. NGINX proxies requests for non-static content to one of the three Tomcat instances on port 8080.

MongoDB
The Spring Music application was designed to work with a number of data stores, including MySQL, Postgres, Oracle, MongoDB, Redis, and H2, an in-memory Java SQL database. Given the choice of both SQL and NoSQL databases, we will select MongoDB.

The Spring Music application, hosted by Tomcat, will store and modify record album data in a single instance of MongoDB. MongoDB will be populated with a collection of album data from a JSON file, when the Spring Music application first creates the MongoDB database instance.

ELK
Lastly, the ELK Stack, with Filebeat, will aggregate NGINX, Tomcat, and Java Log4j log entries, providing debugging and analytics for our demonstration. A similar method for aggregating logs, using Logspout instead of Filebeat, can be found in this previous post.

Kibana 4 Web Console

Continuous Integration

In this post’s example, two build artifacts, a WAR file for the application and ZIP file for the static web content, are built automatically by Travis CI, whenever source code changes are pushed to the springmusic_v2 branch of the garystafford/spring-music repository on GitHub.

Travis CI Output

Following a successful build and a small number of unit tests, Travis CI pushes the build artifacts to the build-artifacts branch on the same GitHub project. The build-artifacts branch acts as a pseudo binary repository for the project, much like JFrog’s Artifactory. These artifacts are used later by Docker to build the project’s immutable Docker images and containers.

Build Artifact Repository

Build Notifications
Travis CI pushes build notifications to a Slack channel, which eliminates the need to actively monitor Travis CI.

Travis CI Slack Notifications

Automation Scripting
The .travis.yaml file, custom gradle.build Gradle tasks, and the deploy_travisci.sh script handle the Travis CI automation described above.

Travis CI .travis.yaml file:

Custom gradle.build tasks:

The deploy.sh file:

You can easily replicate the project’s continuous integration automation using your choice of toolchains. GitHub or Bitbucket are good choices for distributed version control. For continuous integration and deployment, I recommend Travis CI, Semaphore, Codeship, or Jenkins. Couple those with a good persistent chat application, such as Slack or Atlassian’s HipChat.

Building the Docker Environment

Make sure VirtualBox, Docker, Docker Compose, and Docker Machine, are installed and running. At the time of this post, I have the following versions of software installed on my Mac:

  • Mac OS X 10.11.6
  • VirtualBox 5.0.26
  • Docker 1.12.1
  • Docker Compose 1.8.0
  • Docker Machine 0.8.1

To build the project’s VirtualBox VM, Docker images, and Docker containers, execute the build script, using the following command: sh ./build_project.sh. A build script is useful when working with CI/CD automation tools, such as Jenkins CI or ThoughtWorks Go. However, to understand the build process, I suggest first running the individual commands locally.

Deploying to AWS
By simply changing the Docker Machine driver from VirtualBox to AWS EC2, and providing your AWS credentials, the springmusic environment may also be built on AWS.

Build Process
Docker Machine provisions a single VirtualBox springmusic VM on which to host the project’s containers. VirtualBox provides a quick and easy solution that can be run locally for initial development and testing of the application.

Next, the script creates a Docker data volume and project-specific Docker bridge network.

Next, using the project’s individual Dockerfiles, Docker Compose pulls base Docker images from Docker Hub for NGINX, Tomcat, ELK, and MongoDB. Project-specific immutable Docker images are then built for NGINX, Tomcat, and MongoDB. While constructing the project-specific Docker images for NGINX and Tomcat, the latest Spring Music build artifacts are pulled and installed into the corresponding Docker images.

Docker Compose builds and deploys (6) containers onto the VirtualBox VM: (1) NGINX, (3) Tomcat, (1) MongoDB, and (1) ELK.

The NGINX Dockerfile:

The Tomcat Dockerfile:

Docker Compose v2 YAML
This post was recently updated for Docker 1.12, and to use Docker Compose v2 YAML file format. The post’s docker-compose.yml takes advantage of improvements in Docker 1.12 and Docker Compose v2 YAML. Improvements to the YAML file include eliminating the need to link containers and expose ports, and the addition of named networks and volumes.

The Results

Spring Music Infrastructure

Below are the results of building the project.

Testing the Application

Below are partial results of the curl test, hitting the NGINX endpoint. Note the different IP addresses in the Upstream-Address field between requests. This test proves NGINX’s round-robin load-balancing is working across the three Tomcat application instances: music_app_1, music_app_2, and music_app_3.

Also, note the sharp decrease in the Request-Time between the first three requests and subsequent three requests. The Upstream-Response-Time to the Tomcat instances doesn’t change, yet the total Request-Time is much shorter, due to caching of the application’s static assets by NGINX.

Spring Music Application Links

Assuming the springmusic VM is running at 192.168.99.100, the following links can be used to access various project endpoints. Note the (3) Tomcat instances each map to randomly exposed host ports. These ports are not used by NGINX, which communicates with each Tomcat instance on port 8080 within the Docker network. The exposed ports are only required if you want access to the Tomcat Web Console. The port shown below, 32771, is merely used as an example.

* The Tomcat user name is admin and the password is t0mcat53rv3r.

Helpful Links

TODOs

  • Automate the Docker image build and publish processes
  • Automate the Docker container build and deploy processes
  • Automate post-deployment verification testing of project infrastructure
  • Add Docker Swarm multi-host capabilities with overlay networking
  • Update Spring Music with latest CF project revisions
  • Include scripting example to stand-up project on AWS
  • Add Consul and Consul Template for NGINX configuration


Diving Deeper into ‘Getting Started with Spring Cloud’

Spring_Cloud_Config_2

Explore the integration of Spring Cloud and Spring Cloud Netflix tooling, through a deep dive into Pivotal’s ‘Getting Started with Spring Cloud’ presentation.

Introduction

Keeping current with software development and DevOps trends can often make us feel we are, as the overused analogy describes, drinking from a firehose, often several hoses at once. Recently joining a large client engagement, I found it necessary to supplement my knowledge of cloud-native solutions, built with the support of Spring Cloud and Spring Cloud Netflix technologies. One of my favorite sources of information on these subjects is presentations by people like Josh Long, Dr. Dave Syer, and Cornelia Davis of Pivotal Labs, and Jon Schneider and Taylor Wicksell of Netflix.

One presentation, in particular, Getting Started with Spring Cloud, by Long and Syer, provides an excellent end-to-end technical overview of the latest Spring and Netflix technologies. Josh Long’s fast-paced, eighty-minute presentation, available on YouTube, was given at SpringOne2GX 2015 with co-presenter, Dr. Dave Syer, founder of Spring Cloud, Spring Boot, and Spring Batch.

As the presenters of Getting Started with Spring Cloud admit, the purpose of the presentation was to get people excited about Spring Cloud and Netflix technologies, not to provide a deep dive into each technology. However, I believe the presentation’s Reservation Service example provides an excellent learning opportunity. In the following post, we will examine the technologies, components, code, and configuration presented in Getting Started with Spring Cloud. The goal of the post is to provide a greater understanding of the Spring Cloud and Spring Cloud Netflix technologies.

System Overview

Technologies

The presentation’s example introduces a dizzying array of technologies, which include:

Spring Boot
Stand-alone, production-grade Spring-based applications

Spring Data REST / Spring HATEOAS
Spring-based applications following HATEOAS principles

Spring Cloud Config
Centralized external configuration management, backed by Git

Netflix Eureka
REST-based service discovery and registration for failover and load-balancing

Netflix Ribbon
IPC library with built-in client-side software load-balancers

Netflix Zuul
Dynamic routing, monitoring, resiliency, security, and more

Netflix Hystrix
Latency and fault tolerance for distributed systems

Netflix Hystrix Dashboard
Web-based UI for monitoring Hystrix

Spring Cloud Stream
Messaging microservices, backed by Redis

Spring Data Redis
Configuration and access to Redis from a Spring app, using Jedis

Spring Cloud Sleuth
Distributed tracing solution for Spring Cloud, sends traces via Thrift to the Zipkin collector service

Twitter Zipkin
Distributed tracing system, backed by Apache Cassandra

H2
In-memory Java SQL database, embedded and server modes

Docker
Package applications with dependencies into standardized Linux containers

System Components

Several components and component sub-systems comprise the presentation’s overall Reservation Service example. Each component implements a combination of the technologies mentioned above. Below is a high-level architectural diagram of the presentation’s example. It includes a few additional features, added as part of this post.

Overall Reservation System Diagram

Individual system components include:

Spring Cloud Config Server
Stand-alone Spring Boot application provides centralized external configuration to multiple Reservation system components

Spring Cloud Config Git Repo
Git repository containing the configuration files for multiple Reservation system components, served by the Spring Cloud Config Server

H2 Java SQL Database Server (New)
This post substitutes the original example’s use of H2’s embedded version with a TCP Server instance, shared by Reservation Service instances

Reservation Service
Multiple load-balanced instances of a stand-alone Spring Boot application, backed by the H2 database

Reservation Client
Stand-alone Spring Boot application (aka edge service or client-side proxy), forwards client-side load-balanced requests to the Reservation Service, using Eureka, Zuul, and Ribbon

Reservation Data Seeder (New)
Stand-alone Spring Boot application, seeds H2 with initial data, instead of the Reservation Service

Eureka Service
Stand-alone Spring Boot application provides service discovery and registration for failover and load-balancing

Hystrix Dashboard
Stand-alone Spring Boot application provides web-based Hystrix UI for monitoring system performance and Hystrix circuit-breakers

Zipkin
Zipkin Collector, Query, and Web, and Cassandra database, receives, correlates, and displays traces from Spring Cloud Sleuth

Redis
In-memory data structure store, acting as message broker/transport for Spring Cloud Stream

GitHub

All the code for this post is available on GitHub, split between two repositories. The first repository, spring-cloud-demo, contains the source code for all of the components listed above, except the Spring Cloud Config Git Repo. To function correctly, the configuration files consumed by the Spring Cloud Config Server need to be placed into a separate repository, spring-cloud-demo-config-repo.

The first repository contains a git submodule, docker-zipkin, a dockerized version of Twitter’s OpenZipkin. If you are not familiar with submodules, you may want to take a moment to read the git documentation. To clone the two repositories, use the following commands. The --recursive option is required to include the docker-zipkin submodule in the project.

Configuration

To try out the post’s Reservation system example, you need to configure at least one property. The Spring Cloud Config Server needs to know the location of the Spring Cloud Config Repository, which is the second GitHub repository you cloned, spring-cloud-demo-config-repo. From the root of the spring-cloud-demo repo, edit the Spring Cloud Config Server application.properties file, located in config-server/src/main/resources/application.properties. Change the following property’s value to your local path to the spring-cloud-demo-config-repo repository:

Startup

There are a few ways you could run the multiple components that make up the post’s example. I suggest running one component per terminal window, in the foreground. In this way, you can monitor the output from the bootstrap and startup processes of the system’s components. Furthermore, you can continue to monitor the system’s components once they are up and running, and receiving traffic. Yes, that is twelve terminal windows…

ReservationServices.png

There is a required startup order for the components. For example, the Spring Cloud Config Server needs to start before the other components that rely on it for configuration. Netflix’s Eureka needs to start before the Reservation Client and Reservation Services, so they can register with Eureka on startup. Similarly, Zipkin needs to be started in its Docker container before the Reservation Client and Services, so Spring Cloud Sleuth can start sending traces. Redis needs to be started in its Docker container before Spring Cloud Stream tries to create the message queue. All instances of the Reservation Service need to start before the Reservation Client. Once every component is started, the Reservation Data Seeder needs to be run once to create the initial data in H2. For best results, follow the instructions below. Let each component start completely, before starting the next component.

Docker

Both Zipkin and Redis run in Docker containers. Redis runs in a single container. Zipkin’s four separate components run in four separate containers. Be advised, Zipkin seems to have trouble successfully starting all four of its components on a consistent basis. I believe it’s a race condition caused by Docker Compose simultaneously starting the four Docker containers, ignoring a proper startup order. More than half of the time, I have to stop Zipkin and rerun the docker command to get Zipkin to start without any errors.

If you’ve followed the instructions above, you should see the following Docker images and Docker containers installed and running in your local environment.

Components

Spring Cloud Config Server

At the center of the Reservation system is Spring Cloud Config. Configuration, typically found in the application.properties file, for the Reservation Services, Reservation Client, Reservation Data Seeder, Eureka Service, and Hystrix Dashboard, has been externalized with Spring Cloud Config.

Spring_Cloud_Config_2

Each component has a bootstrap.properties file, which modifies its startup behavior during the bootstrap phase of an application context. Each bootstrap.properties file contains the component’s name and the address of the Spring Cloud Config Server. Components retrieve their configuration from the Spring Cloud Config Server at runtime. Below, is an example of the Reservation Client’s bootstrap.properties file.

Spring Cloud Config Git Repo

In the presentation, as in this post, the Spring Cloud Config Server is backed by a locally cloned Git repository, the Spring Cloud Config Git Repo. The Spring Cloud Config Server’s application.properties file contains the address of the Git repository. Each properties file within the Git repository corresponds to a system component. Below, is an example of the reservation-client.properties file, from the Spring Cloud Config Git Repo.

As shown in the original presentation, the configuration files can be viewed using HTTP endpoints of the Spring Cloud Config Server. To view the Reservation Service’s configuration stored in the Spring Cloud Config Git Repo, issue an HTTP GET request to http://localhost:8888/reservation-service/master. The master URI refers to the Git repo branch in which the configuration resides. This will return the configuration, in the response body, as JSON:

SpringCloudConfig

In a real Production environment, the Spring Cloud Config Server would be backed by a highly-available Git Server or GitHub repository.

Reservation Service

The Reservation Service is the core component in the presentation’s example. The Reservation Service is a stand-alone Spring Boot application. By implementing Spring Data REST and Spring HATEOAS, Spring automatically creates REST representations from the Reservation JPA Entity class of the Reservation Service. There is no need to write a Spring Rest Controller and explicitly code each endpoint.
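
A minimal sketch of what such an entity might look like is shown below; the field names are assumptions based on the endpoints described later, not the presentation’s exact source.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Hypothetical JPA entity: with Spring Data REST and Spring HATEOAS on the
// classpath, exposing this entity through an exported repository interface is
// enough to generate the /reservations REST endpoints automatically.
@Entity
public class Reservation {

    @Id
    @GeneratedValue
    private Long id;

    private String reservationName;

    protected Reservation() {
        // default constructor required by JPA
    }

    public Reservation(String reservationName) {
        this.reservationName = reservationName;
    }

    public Long getId() {
        return id;
    }

    public String getReservationName() {
        return reservationName;
    }
}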

HATEOAS

Spring HATEOAS allows us to interact with the Reservation Entity, using HTTP methods, such as GET and POST. These endpoints, along with all addressable endpoints, are displayed in the terminal output when a Spring Boot application starts. For example, we can use an HTTP GET request to call the reservations/{id} endpoint, such as:

The Reservation Service also makes use of the Spring @RepositoryRestResource annotation. By annotating the RepositoryReservation Interface, which extends JpaRepository, we can customize export mapping and relative paths of the Reservation JPA Entity class. As shown below, the RepositoryReservation Interface contains the findByReservationName method signature, mapped to the /by-name endpoint, which accepts the rn input parameter.
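
A sketch consistent with that description might look like the following; the exact annotations and types in the presentation’s source may differ.

import java.util.Collection;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import org.springframework.data.rest.core.annotation.RestResource;

// Hypothetical repository interface: Spring Data REST exports it automatically,
// and @RestResource re-maps the findByReservationName query method to the
// /reservations/search/by-name endpoint, which accepts the rn request parameter.
@RepositoryRestResource
public interface RepositoryReservation extends JpaRepository<Reservation, Long> {

    @RestResource(path = "by-name")
    Collection<Reservation> findByReservationName(@Param("rn") String reservationName);
}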

Calling the findByReservationName method, we can search for a particular reservation by using an HTTP GET request to call the reservations/search/by-name?rn={reservationName} endpoint.

Spring Screengrab 04

Reservation Client

Querying the Reservation Service directly is possible; however, it is not recommended. Instead, the presentation suggests using the Reservation Client as a proxy to the Reservation Service. The presentation offers three examples of using the Reservation Client as a proxy.

The first demonstration of the Reservation Client uses the /message endpoint on the Reservation Client to return a string from the Reservation Service. The message example has been modified to include two new endpoints on the Reservation Client. The first endpoint, /reservations/client-message, returns a message directly from the Reservation Client. The second endpoint, /reservations/service-message, returns a message indirectly from the Reservation Service. To retrieve the message from the Reservation Service, the Reservation Client sends a request to the Reservation Service’s /message endpoint.
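
A sketch of those two endpoints is shown below. The class name is an assumption, and the RestTemplate is assumed to be load-balanced by Ribbon, so the logical service name reservation-service resolves through Eureka (covered later in the post).

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// Hypothetical controller on the Reservation Client: /client-message answers
// directly, while /service-message proxies the call to the Reservation
// Service's /message endpoint.
@RestController
@RequestMapping("/reservations")
public class MessageController {

    @Autowired
    private RestTemplate restTemplate;

    @RequestMapping("/client-message")
    public String clientMessage() {
        return "Hello from the Reservation Client";
    }

    @RequestMapping("/service-message")
    public String serviceMessage() {
        return restTemplate.getForObject("http://reservation-service/message", String.class);
    }
}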

To retrieve both messages, send separate HTTP GET requests to each endpoint:

Spring Screengrab 02

The second demonstration of the Reservation Client uses a Data Transfer Object (DTO). Calling the Reservation Client’s reservations/names endpoint invokes the getReservationNames method. This method, in turn, calls the Reservation Service’s /reservations endpoint. The response object returned from the Reservation Service, a JSON array of reservation records, is deserialized and mapped to the Reservation Client’s Reservation DTO. Finally, the method returns a collection of strings, representing just the names from the reservations.
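
On the client side, the Reservation DTO can be as simple as the sketch below; the field names are assumptions that mirror the entity described earlier.

// Hypothetical client-side DTO: the JSON reservation records returned by the
// Reservation Service are deserialized into instances of this class.
public class Reservation {

    private Long id;
    private String reservationName;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getReservationName() {
        return reservationName;
    }

    public void setReservationName(String reservationName) {
        this.reservationName = reservationName;
    }
}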

To retrieve the collection of reservation names, an HTTP GET request is sent to the /reservations/names endpoint:

Spring Screengrab 05

Spring Cloud Stream

One of the more interesting technologies in the presentation is Spring Cloud Stream. The Spring website describes Spring Cloud Stream as a project that allows users to develop and run messaging microservices using Spring Integration. In other words, it provides native Spring messaging capabilities, backed by a choice of message buses, including Redis, RabbitMQ, and Apache Kafka, to Spring Boot applications.

A detailed explanation of Spring Cloud Stream would take an entire post. The best technical demonstration I have found is the presentation, Message Driven Microservices in the Cloud, by speakers Dr. David Syer and Dr. Mark Pollack, given in January 2016, also at SpringOne2GX 2015.

Diagram_03

In the presentation, a new reservation is submitted via an HTTP POST to the acceptNewReservations method of the Reservation Client. The method, in turn, builds (aka produces) a message, containing the new reservation, and publishes that message to the queue.reservation queue.
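
A sketch of that producer side might look like the following. The class name is an assumption, and the binding of the output channel to the queue.reservation destination is assumed to be done in configuration (for example, with the spring.cloud.stream.bindings.output.destination property).

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical producer on the Reservation Client: an HTTP POST publishes the
// new reservation name as a message on the Source's output channel, which
// Spring Cloud Stream binds to the Redis-backed queue.reservation destination.
@EnableBinding(Source.class)
@RestController
public class ReservationWriter {

    @Autowired
    private Source source;

    @RequestMapping(method = RequestMethod.POST, value = "/reservations")
    public void acceptNewReservations(@RequestBody Reservation reservation) {
        source.output().send(
                MessageBuilder.withPayload(reservation.getReservationName()).build());
    }
}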

The queue.reservation queue is located in Redis, which is running inside a Docker container. To view the messages being published to the queue in real-time, use the redis-cli, with the monitor command, from within the Redis Docker container. Below is an example of test messages pushed (LPUSH) to the queue from the Reservation Client.

The published messages are consumed by subscribers to the reservation queue. In this example, the consumer is the Reservation Service. The Reservation Service’s acceptNewReservation method processes the message and saves the new reservation to the H2 database. In Spring Cloud Stream terms, the Reservation Service is the Sink, and the Reservation Client is the Source.
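
The consumer side could be sketched as follows, again with assumed names; the input channel is bound to the same queue.reservation destination through configuration.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.stereotype.Component;

// Hypothetical consumer on the Reservation Service: each message received on
// the Sink's input channel is saved as a new reservation in the H2 database.
@Component
@EnableBinding(Sink.class)
public class ReservationProcessor {

    @Autowired
    private RepositoryReservation repository;

    @ServiceActivator(inputChannel = Sink.INPUT)
    public void acceptNewReservation(String reservationName) {
        repository.save(new Reservation(reservationName));
    }
}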

Netflix Eureka

Netflix’s Eureka, in combination with Netflix’s Zuul and Ribbon, provides the ability to scale the Reservation Service horizontally and to load balance those instances. By using the @EnableEurekaClient annotation on the Reservation Client and Reservation Services, each instance will automatically register with Eureka on startup, as shown in the Eureka Web UI, below.
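
A minimal sketch of that annotation in use (the class name is an assumption; the Eureka server address is supplied through configuration):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

// Hypothetical application class: with @EnableEurekaClient, the Spring Boot
// application registers itself with the Eureka server on startup.
@SpringBootApplication
@EnableEurekaClient
public class ReservationServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(ReservationServiceApplication.class, args);
    }
}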

Diagram9

The names of the registered instances are in three parts: the address of the host on which the instance is running, followed by the value of the spring.application.name property of the instance’s bootstrap.properties file, and finally, the port number the instance is running on. Eureka displays each instance’s status, along with additional AWS information, if you are running on AWS, as Netflix does.

Diagram_07

According to Spring in their informative post, Spring Cloud, service discovery is one of the key tenets of a microservice based architecture. Trying to hand-configure each client, or to rely on convention over configuration, can be difficult to do and is brittle. Eureka is the Netflix Service Discovery Server and Client. A client (Spring Boot application), registers with Eureka, providing metadata about itself. Eureka then receives heartbeat messages from each instance. If the heartbeat fails over a configurable timetable, the instance is normally removed from the registry.

The Reservation Client application is also annotated with @EnableZuulProxy. Adding this annotation pulls in Spring Cloud’s embedded Zuul proxy. Again, according to Spring, the proxy is used by front-end applications to proxy calls to one or more back-end services, avoiding the need to manage CORS and authentication concerns independently for all the backends. In the presentation and this post, the front end is the Reservation Client and the back end is the Reservation Service.

In the code snippet below from the ReservationApiGatewayRestController, note the URL of the endpoint requested in the getReservationNames method. Instead of directly calling http://localhost:8000/reservations, the method calls http://reservation-service/reservations. The reservation-service segment of the URL is the registered name of the service in Eureka and contained in the Reservation Service’s bootstrap.properties file.
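
The snippet below is a sketch of that pattern, not the post’s exact source: the RestTemplate is assumed to be @LoadBalanced, so Ribbon resolves the reservation-service name through Eureka and picks an instance, and the HAL response is unwrapped with Spring HATEOAS’s Resources type before being mapped to a list of names.

import java.util.Collection;
import java.util.stream.Collectors;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.hateoas.Resources;
import org.springframework.http.HttpMethod;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// Hypothetical version of the edge controller described above.
@RestController
public class ReservationApiGatewayRestController {

    @Autowired
    private RestTemplate restTemplate; // assumed to be declared @LoadBalanced elsewhere

    @RequestMapping("/reservations/names")
    public Collection<String> getReservationNames() {
        ParameterizedTypeReference<Resources<Reservation>> responseType =
                new ParameterizedTypeReference<Resources<Reservation>>() {};

        // call the logical service name registered in Eureka, not a fixed host
        return restTemplate
                .exchange("http://reservation-service/reservations",
                        HttpMethod.GET, null, responseType)
                .getBody()
                .getContent()
                .stream()
                .map(Reservation::getReservationName)
                .collect(Collectors.toList());
    }
}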

In the following abridged output from the Reservation Client, you can clearly see the interaction of Zuul, Ribbon, Eureka, and Spring Cloud Config. Note the Client application successfully registering itself with Eureka, along with the Reservation Client’s status. Also, note Zuul mapping the Reservation Service’s URL path.

Load Balancing

One shortcoming of the original presentation was the lack of true load balancing. With only a single instance of the Reservation Service in the original presentation, there is nothing to load balance; it’s more of a reverse proxy example. To demonstrate load balancing, we need to spin up additional instances of the Reservation Service. Following the post’s component start-up instructions, we should have three instances of the Reservation Service running, on ports 8000, 8001, and 8002, each in a separate terminal window.

ReservationServices.png

To confirm the three instances of the Reservation Service were successfully registered with Eureka, review the output from the Eureka Server terminal window. The output should show three instances of the Reservation Service registering on startup, in addition to the Reservation Client.

Viewing Eureka’s web console, we should observe three members in the pool of Reservation Services.

Diagram9b

Lastly, looking at the terminal output of the Reservation Client, we should see three instances of the Reservation Service being returned by Ribbon (aka the DynamicServerListLoadBalancer).

Requesting http://localhost:8050/reservations/names, Ribbon forwards the request to one of the three Reservation Service instances registered with Eureka. By default, Ribbon uses a round-robin load-balancing strategy to select an instance from the pool of available Reservation Services.

H2 Server

The original presentation’s Reservation Service used an embedded instance of H2. To scale out the Reservation Service, we need a common database for multiple instances to share. Otherwise, queries would return different results, specific to the particular instance of Reservation Service chosen by the load-balancer. To solve this, the original presentation’s embedded version of H2 has been replaced with the TCP Server client/server version of H2.
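
As a stand-alone illustration (not the post’s actual implementation), H2’s TCP Server mode can be started programmatically with the org.h2.tools.Server API; the default TCP port is used, and the web console port is chosen here to match the 6889 URL referenced later.

import java.sql.SQLException;

import org.h2.tools.Server;

// Hypothetical stand-alone H2 server: a TCP endpoint shared by all Reservation
// Service instances, plus the web console used later to inspect the data.
public class H2TcpServer {

    public static void main(String[] args) throws SQLException {
        Server tcpServer = Server.createTcpServer("-tcpAllowOthers").start();
        Server webConsole = Server.createWebServer("-webPort", "6889", "-webAllowOthers").start();

        System.out.println("H2 TCP server:  " + tcpServer.getURL());
        System.out.println("H2 web console: " + webConsole.getURL());
    }
}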

Reservation Service Instances

Thanks to more Spring magic, the only change we need to make to the original presentation’s code is a few additional properties added to the Reservation Service’s reservation-service.properties file. This changes H2 from the embedded version to the TCP Server version.

Reservation Data Seeder

In the original presentation, the Reservation Service created several sample reservation records in its embedded H2 database on startup. Since we now have multiple instances of the Reservation Service running, the sample data creation task has been moved from the Reservation Service to the new Reservation Data Seeder. The Reservation Service only now validates the H2 database schema on startup. The Reservation Data Seeder now updates the schema based on its entities. This also means the seed data will be persisted across restarts of the Reservation Service, unlike in the original configuration.

Running the Reservation Data Seeder once will create several reservation records in the H2 database. To confirm the H2 Server is running and the initial reservation records were created by the Reservation Data Seeder, point your web browser to the H2 login page at http://192.168.99.1:6889 and log in using the credentials in the reservation-service.properties file.

H2_grab1

The H2 Console should contain the RESERVATION table, which holds the reservation sample records.

H2_grab2

Spring Cloud Sleuth and Twitter’s Zipkin

According to the project description, “Spring Cloud Sleuth implements a distributed tracing solution for Spring Cloud. All your interactions with external systems should be instrumented automatically. You can capture data simply in logs, or by sending it to a remote collector service.” In our case, that remote collector service is Zipkin.

Zipkin describes itself as, “a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures. It manages both the collection and lookup of this data through a Collector and a Query service.” Zipkin provides critical insights into how microservices perform in a distributed system.

Zipkin_Diagram

In the presentation, as in this post, the Reservation Client’s main ReservationClientApplication class contains the alwaysSampler bean, which returns a new instance of org.springframework.cloud.sleuth.sampler.AlwaysSampler. As long as Spring Cloud Sleuth is on the classpath and the alwaysSampler bean has been added, the Reservation Client will automatically generate trace data.
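
The bean itself is a one-liner; a sketch of how it might be declared (the configuration class name is an assumption):

import org.springframework.cloud.sleuth.sampler.AlwaysSampler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical configuration: AlwaysSampler tells Sleuth to sample every trace,
// which is convenient for a demonstration but usually too verbose for production.
@Configuration
public class SleuthConfig {

    @Bean
    public AlwaysSampler alwaysSampler() {
        return new AlwaysSampler();
    }
}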

Sending a request to the Reservation Client’s service/message endpoint (http://localhost:8050/reservations/service-message) will generate a trace, composed of spans. In this case, the spans are individual segments of the HTTP request/response lifecycle. Traces are sent by Sleuth to Zipkin, to be collected. According to Spring, if spring-cloud-sleuth-zipkin is available, then the application will generate and collect Zipkin-compatible traces using Brave. By default, it sends them via Apache Thrift to a Zipkin collector service on port 9410.

Zipkin’s web-browser interface, running on port 8080, allows us to view traces and drill down into individual spans.

Zipkin_UI

Zipkin contains fine-grain details about each span within a trace, as shown below.

Zipkin_UI_Popup

Correlation IDs

Note the x-trace-id and x-span-id in the request header, shown below. Sleuth injects the trace and span IDs into the SLF4J MDC (Simple Logging Facade for Java – Mapped Diagnostic Context). According to Spring, the IDs provide the ability to extract all the logs from a given trace or span in a log aggregator. The use of correlation IDs and log aggregation is essential for monitoring and supporting a microservice architecture.

Zipkin_UI_Popup2

Hystrix and Hystrix Dashboard

The last major technology highlighted in the presentation is Netflix’s Hystrix. According to Netflix, "Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services, and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable." Hystrix is essential; it protects applications from cascading dependency failures, an issue common to complex distributed architectures with multiple dependency chains. According to Netflix, Hystrix uses multiple isolation techniques, such as bulkhead, swimlane, and circuit breaker patterns, to limit the impact of any one dependency on the entire system.

The presentation demonstrates one of the simpler capabilities of Hystrix, the fallback. The getReservationNames method is decorated with the @HystrixCommand annotation, which specifies the fallbackMethod. According to Netflix, graceful degradation of a method is provided by adding a fallback method, which Hystrix will call to obtain a default value or values in case the main command fails.

In the presentation’s example, the Reservation Service, a direct dependency of the Reservation Client, has failed. The failure prevents the Reservation Client’s getReservationNames method from returning a collection of reservation names. Hystrix redirects the application to the getReservationNameFallback method, which returns an empty collection to the client, rather than an error message.
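
A self-contained sketch of the fallback pattern is shown below; it assumes @EnableCircuitBreaker (or @EnableHystrix) on the application class, and the failure is simulated so the fallback path is easy to exercise.

import java.util.Collection;
import java.util.Collections;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;

// Hypothetical service: if getReservationNames throws or exceeds Hystrix's
// timeout, Hystrix invokes getReservationNameFallback, and the caller receives
// an empty collection instead of an error.
@Service
public class ReservationNamesService {

    @HystrixCommand(fallbackMethod = "getReservationNameFallback")
    public Collection<String> getReservationNames() {
        // simulate the Reservation Service being unreachable
        throw new IllegalStateException("reservation-service unavailable");
    }

    public Collection<String> getReservationNameFallback() {
        // graceful degradation: return an empty collection instead of failing
        return Collections.emptyList();
    }
}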

A more relevant example involves Netflix’s movie recommendation service. In the event the recommendation service fails to return a personalized list of movie recommendations to a customer, Hystrix falls back to a method that returns a generic list of the most popular movies. Netflix has determined that, in the event of a failure of their recommendation service, falling back to a generic list of movies is better than returning no movies at all.

The Hystrix Dashboard is a tool, available with Hystrix, to visualize the current state of Hystrix instrumented methods. Although visually simplistic, the dashboard effectively presents the health of calls to external systems, which are wrapped in a HystrixCommand or HystrixObservableCommand.

Hystrix_Stream_Diagram

The Hystrix dashboard is a visual representation of the Hystrix Stream. This stream is a live feed of data sent by the Hystrix instrumented application, in this case, the Reservation Client. For a single Hystrix application, such as the Reservation Client, the feed requested from the application’s hystrix.stream endpoint is http://localhost:8050/hystrix.stream. The dashboard consumes the stream resource’s response and visualizes it in the browser using JavaScript, jQuery, and d3.

In the post, as in the presentation, hitting the Reservation Client with a volume of requests, we observe normal activity in Hystrix Dashboard. All three instances of the Reservation Service are running and returning the collection of reservations from H2, to the Reservation Client.

Hystrix_success

If all three instances of the Reservation Service fail or the maximum latency is exceeded, the Reservation Client falls back to returning an empty collection in the response body. In the example below, 15 requests, representing 100% of the current traffic, to the getReservationNames method failed and subsequently fell back to return an empty collection. Hystrix succeeded in helping the application gracefully fall back to an alternate response.

Hystrix_failures

Conclusion

It’s easy to see how Spring Cloud and Netflix’s technologies can be combined to create a performant, horizontally scalable, and reliable system. With the addition of a few missing components, such as metrics monitoring and log aggregation, this example could easily be scaled up to support a production-grade, microservices-based, enterprise software platform.


Scaffolding Modern Web Applications

Scaffold Three Full-Stack Modern Web Application Examples using Yeoman with npm, Bower, and Grunt

Introduction

The capabilities of modern web applications have quickly matched and surpassed those of most traditional desktop applications. However, with the increase in capabilities comes an increase in the architectural complexity of web applications. To help deal with the increase in complexity, developers have benefitted from a plethora of popular support libraries, frameworks, APIs, and similar tooling. Examples of these include AngularJS, React.js, Play!, Node.js, Express, npm, Yeoman, Bower, Grunt, and Gulp.

With so many choices, selecting an optimal toolset to construct a modern web application can be overwhelming. In this post, we will examine three distinct modern web application architectures, predominantly JavaScript-based. The post will discuss how to select and install the required tools, and how to use those tools to scaffold the applications. By the end of the post, we will have three working web applications, ready for further development.

Generators

The post’s examples use Yeoman generators. What is a generator? According to Wikipedia, Yeoman’s generator concept was inspired by Ruby on Rails. Using a generator’s blueprint, Yeoman scaffolds an application’s project directory and file structure, and installs required vendor libraries and dependencies. Yeoman generators usually run interactively, guiding the developer through a series of configuration questions. The configuration choices determine the project’s physical structure and components installed by Yeoman.

Generators are inherently opinionated. They dictate a particular application architecture they feel represents current best practices. However, good generators also allow developers to select from a range of architectural choices to meet the requirements of a developer’s environment. For example, a generator might allow Grunt or Gulp for task automation, or allow either Less or Sass for styling the UI. Similar to npm, RubyGems, Bower, Docker Hub, and Puppet Forge, Yeoman provides a searchable public repository. This allows developers to choose from a variety of generators to meet their specific needs.

Preparing the Development Environment

The examples in this post were built on Mac OS X. However, all the tools discussed in the post are available on the three major platforms, Linux, Mac, and Windows. Installation and configuration will vary.

An IDE is not required to scaffold the post’s application examples. However, for further development of the applications, I strongly recommend JetBrains’ WebStorm. According to Slant, WebStorm is a popular, highly rated IDE for building modern web applications. WebStorm is available on all three major platforms. A paid license is required, but well worth the reasonable investment based on the IDE’s rich feature set. WebStorm is integrated with many popular JavaScript frameworks. Additionally, there are hundreds of plug-ins available to extend WebStorm’s functionality.

Each of the post’s examples varies, architecturally. However, each also shares several common components, which we will install. They include:

npm
We will use npm (aka Node Package Manager), a leading server-side package manager, to manage the application’s server-side JavaScript dependencies.

Node.js
We will use Node.js, a JavaScript runtime, to power the JavaScript-based web application examples.

Bower
Similar to npm, Bower is a popular client-side package manager. We will use Bower to manage the application’s client-side JavaScript dependencies.

Yeoman
We will use Yeoman, a leading web application scaffolding tool, to quickly build the frameworks for the example applications based on best practices and tooling.

Grunt
We will use Grunt, a leading JavaScript task runner, to automate common tasks such as minification, compilation, unit-testing, linting, and packaging of applications for deployment. At least two of the three examples also offer Gulp as an alternative.

Express
We will use Express, a Node.js web application framework that provides a robust set of features for web application development, to support our example applications.

MongoDB
We will use MongoDB, a leading open-source NoSQL document database, for all three examples. For two of the examples, you can easily substitute alternate databases, such as MySQL, when configuring the application with Yeoman. The choice of database is of secondary importance in this post.

First install Node.js, which comes packaged with npm. Then, use npm to install Bower, Yeoman, and Grunt. Make sure you run the command to fix the permissions for npm. If permissions are set correctly, you should not have to use sudo with your npm commands.

The global mode option (-g) installs packages globally. Packages are usually installed globally, only if they are used as a command line tool, such as with Bower, Yeoman, and Grunt.

The easiest way to install Node.js and npm on OS X, is with Homebrew:

Alternatively, install Node and npm by downloading the package installer for Mac OS X (x64) from nodejs.org. If you are not currently a Homebrew user, I suggest this route. At the time of this post, you have the choice of Node.js version v4.2.3 LTS or v5.1.1 Stable.

Fix the potential permission problem with npm, so sudo is not required:

Then, install Yeoman, Bower, and Grunt, globally:

Lastly, install MongoDB. Similar to Node, we can use Homebrew, or download and install MongoDB from the MongoDB website. Review this page for detailed installation and configuration instructions. To install MongoDB with Homebrew, we would issue the following commands:

Example #1: Server-Centric Express Application

For our first example, we will scaffold a server-centric JavaScript web application using Pete Cooper’s Express Generator (v2.9.2). According to the project’s GitHub site, the Express Generator is ‘An Expressjs generator for Yeoman, based on the express command line tool.’ I suggest reading the project’s documentation before continuing; it describes the generator’s functionality in greater detail.

To install, download the Express Generator with npm, and install with Yeoman, as follows:

As part of the Express Generator’s configuration process, Yeoman will ask a series of configuration questions. The Express generator offers several choices for scaffolding the application. For this example, we will choose the following options: MVC, Marko, Sass, MongoDB, and Grunt.

Using Express-Generator with Yeoman

For those not as familiar with developing full-stack JavaScript applications, some of the generator’s choices may be unfamiliar, such as view engines, css preprocessors, and build tools. For this example, we will select Marko, a highly regarded JavaScript templating engine (aka view engine), for the first application. You can compare different engines on Slant. For CSS preprocessors, you can also refer to Slant for a comparison of leading candidates. We will choose Sass.

Lastly, for a build tool (aka task runner) we will choose Grunt. Grunt and Gulp are the two most popular choices. Either is a proven tool for automation tasks such as minification, compilation, unit-testing, linting, and packaging applications for deployment.

As shown below, Yeoman creates a series of files and directories and installs JavaScript libraries with npm and Bower. Choices are based on best practices, as prescribed by the generator.

Default Express-Generator File Structure

npm

Yeoman uses npm and Bower to install the generator’s required packages. Based on our five configuration choices for the Express Generator, npm installed over 225 packages in the project’s local node_modules directory. This includes primary and secondary npm package dependencies. For example, Marko, one of our choices, has 24 dependencies of its own. In turn, each of those packages may have more dependencies. You quickly see why npm, and other similar package managers, are invaluable to building and managing a modern web application. The npm dependencies are declared in the package.json file, in the project’s root directory.

Partial List of npm Packages Installed

We will still need to install a few more items. We chose Sass as an Express generator option. Sass requires Ruby, which comes preinstalled on Mac OS X. If you wish, you can upgrade your pre-installed version of Ruby with Homebrew, but it is not required. Sass is installed with RubyGems, a package manager for Ruby. To automate the Sass-related tasks with Grunt, we also need to install the Grunt plug-in for Sass, grunt-contrib-sass, using npm:

The Express Generator’s tests are written in Mocha. Mocha is an asynchronous JavaScript test framework running on Node.js. The website suggests installing Mocha globally with npm. Mocha can be run from the command line:

Up and Running

Simply running the grunt command will start the default Generator-Express MVC application. While in development, I prefer to run Grunt with the debug option (grunt -d) and/or with the verbose option (grunt -v or grunt -dv). These options offer enhanced feedback, especially on which tasks are run by Grunt. Review the terminal output to make sure the application started properly.

Starting the Application with Grunt

To confirm the Express application started correctly, in a second terminal window, curl the application using curl -I localhost:3000. Easier yet, point your web browser to localhost:3000. You should see the following default web page.

Default Express-Generator Application

Example #2: Full-Stack MEAN Application

In our second example, we will scaffold a true full-stack JavaScript web application using Tyler Henkel’s popular AngularJS Full-Stack Generator for Yeoman (v3.0.2). According to the project’s GitHub site, the generator is a ‘Yeoman generator for creating MEAN stack applications, using MongoDB, Express, AngularJS, and Node – lets you quickly set up a project following best practices.’ As with the earlier example, I suggest you read the project’s documentation before continuing.

To install the AngularJS Full-Stack Generator, download it with npm and install it with Yeoman:

Similar to the Express example, Yeoman will ask a series of configuration questions. We will choose the following options: Jade, Less, ngRoute, Bootstrap, UI Bootstrap, and MongoDB with Mongoose. AngularJS, Express, and Grunt are installed by the generator, automatically. For the sake of brevity, we will not include other available options, including Babel for ES6, OAuth authentication, or socket.io.

AngularJS Full-Stack Generator Config Options

PhantomJS

After generating the AngularJS Full-Stack project, I received errors regarding PhantomJS. According to several sources, this is not uncommon. The AngularJS Full-Stack project uses PhantomJS as the default browser for Karma, the popular test runner, designed by the AngularJS team. Although npm installed PhantomJS locally, as part of the project, Karma complained about missing the path to the PhantomJS binary. To eliminate the issue, I installed PhantomJS globally with npm. I then manually added the PhantomJS binary path to the $PATH environment variable:

To test Karma, with PhantomJS, run the grunt test command. This should result in error-free output, similar to the following.

Running "karma:unit" (karma) task

Client/Server Architecture

Similar to the previous example, Yeoman creates a series of files and directories, and installs JavaScript packages on the server and client sides with npm and Bower.

AngularJS Full-Stack Generator File Structure

Both the Express and AngularJS examples share several common files and directories. However, one major difference between the two is the client/server oriented directory structure of the AngularJS generator. Unlike the Express example, the AngularJS example has both a client and a server directory. The server-side of the application (aka back-end) is driven primarily by Express and Node. Mongoose serves as an interface between our application’s domain model and MongoDB, on the server-side. Also, on the server-side, Jade is used for HTML templating. The client-side of the application (aka front-end) is driven primarily by AngularJS. Twitter’s Bootstrap and Bootstrap UI offer a responsive web interface for our example application.

Full-Stack Server/Client File Structure

Up and Running

Running the grunt serve command will eventually start the default AngularJS Full-Stack application, after running a series of pre-defined Grunt tasks.

AngularJS Full-Stack App Starting with Grunt

Review the terminal output to make sure the application started properly. You may see some warnings, suggesting the installation of several dependencies globally. You may also see warnings about dependency versions being outdated. Outdated versions are one of the challenges with generators that are not constantly kept refreshed and tested with the latest package dependencies. Warnings shouldn’t prevent the application from starting; only errors will.

AngularJS Full-Stack App Started with Grunt

To confirm the application started, in a second terminal window, curl the application using curl -I localhost:9000. Easier yet, point your web browser at localhost:9000. The default web page for the AngularJS Full-Stack web application is much more elaborate than the previous Express example. This is thanks to the Bootstrap and AngularJS client-side components.

AngularJS Full-Stack App Running in Browser

Additional Generator Features

The AngularJS Full-Stack generator is capable of generating more than just the default application project. The AngularJS Full-Stack generator contains a set of sub-generators. Beyond generating the basic application framework, you may use the generator to create boilerplate code for AngularJS and Node.js components for endpoints, services, routes, models, controllers, directives, and filters. The generators also provide the ability to prepare your application for deployment to OpenShift and Heroku.

The best place to review available options for the generators is on the GitHub sites. You can display a high-level list of the generator’s features using the yo --help command. Below are the three generators used in this post.

Below, is an example of generating additional application components using the AngularJS Full-Stack generator. First scaffold a server-side Express RESTful API endpoint, called ‘user’. The single command generates a server-side directory structure and several boilerplate files, including an Express model, controller, and router, and Mocha tests.

List of Installed Yeoman Generators

Next, generate a client-side AngularJS service, which connects to the server-side, Express RESTful ‘user’ endpoint above. The command creates a boilerplate AngularJS service and Mocha test. Lastly, create an AngularJS route. This generator command creates a boilerplate AngularJS route and controller, Mocha test, Jade view template, and less file.

AngularJS Full-Stack Component Generators

Example #3: Java Hipster Application

In the third and last example, we will scaffold another full-stack web application. However, this time, we will use a generator that relies on Java EE as the primary development platform on the server-side, as opposed to JavaScript. JavaScript will be relegated largely to the client-side.

Again, we will use a Yeoman generator, JHipster, built by Julien Dubois and team, to scaffold the application (v2.25.0). According to the project’s GitHub site, JHipster uses a robust server-side Java EE stack with Spring Boot and Maven. JHipster’s mobile-first front-end is enabled with AngularJS and Bootstrap. Being the most complex of the three examples, it’s important to review the project documentation.

JHipster offers three ways to install the application: 1) locally, 2) in a Docker container, or 3) in a Vagrant VM. We will install the application framework locally, so as not to introduce additional complexity. To install the generator, download the JHipster generator with npm, and install with Yeoman:

Again, Yeoman will ask a series of configuration questions on behalf of JHipster. For this example, we will choose the following options: token-based authentication, MongoDB, Maven, Grunt, LibSass (Sass), and Gatling for testing. AngularJS and Bootstrap are installed automatically. We have chosen not to include other configuration options in this example, such as Angular Translate, WebSockets, and clustered HTTP sessions.

Default JHipster Generator Options

Once Yeoman finishes scaffolding the application, you should see the following output.

JHipster Generator Install Complete

Maven Project Structure

The file and directory structure of JHipster is very different from the previous two examples. The first two examples’ project structures are typical of JavaScript projects. In contrast, the JHipster example’s structure is more typical of a Maven-based Java project. In the JHipster project, the client-side JavaScript files are in the /src/main/webapp/ directory. The presence of the webapp directory reflects, in part, the project’s reliance on the Spring MVC web framework. Additionally, npm has loaded required server-side JavaScript packages into the node_modules directory, in the project’s root directory.

Default JHipster File Structure

Up and Running

Running the mvn command will start the default JHipster application. The URL for the JHipster application is included in the terminal output.

JHipster Application Running with Maven

To confirm the application has started, curl the application in a second terminal window, using curl -I localhost:8080. Easier yet, point your web browser to localhost:8080. Again, thanks to Bootstrap and AngularJS, the application presents a rich client UI.

Default JHipster Application Running

Conclusion

The post’s examples represent a narrow sampling of available modern web application stacks, which can be easily scaffolded with generators. The JavaScript space continues to evolve rapidly. Even within the realm of JavaScript-based solutions, we didn’t examine several other popular frameworks, such as Meteor, Facebook’s React.js, Ember, Backbone, and Polymer. They are all worth exploring, along with the hundreds of popular supporting frameworks, libraries, and APIs.

Useful Links

  • Post: JavaScript Frameworks: The Best 10 for Modern Web Apps (link)
  • Post: Best Web Frameworks (link)
  • Post: State Of Web Development 2014 (link)
  • YouTube: Modern Front-end Engineering (link)
  • YouTube: WebStorm – Things You Probably Didn’t Know (link)
  • Website: MEAN Stack (link)
  • Website: Full Stack Python (link)
  • Post: Best Full Stack Web Framework (link)



Build and Deploy a Java-Spring-MongoDB Application using Docker

Build a multi-container, MongoDB-backed, Java Spring web application, and deploy to a test environment using Docker.

Spring Music Diagram

Introduction
Application Architecture
Spring Music Environment
Building the Environment
Spring Music Application Links
Helpful Links

Introduction

In this post, we will demonstrate how to build, deploy, and host a multi-tier Java application using Docker. For the demonstration, we will use a sample Java Spring application, available on GitHub from Cloud Foundry. Cloud Foundry’s Spring Music sample record album collection application was originally designed to demonstrate the use of database services on Cloud Foundry, using the Spring Framework. Instead of Cloud Foundry, we will host the Spring Music application using Docker with VirtualBox and, optionally, AWS.

All files required to build this post’s demonstration are located in the master branch of this GitHub repository. Instructions to clone the repository are below. The Java Spring Music application’s source code, used in this post’s demonstration, is located in the master branch of this GitHub repository.

Spring Music

A few changes were necessary to the original Spring Music application to make it work for this demonstration. At a high level, the changes included:

  • Modify MongoDB configuration class to work with non-local MongoDB instances
  • Add Gradle warNoStatic task to build the WAR file without the static assets, which will be hosted separately by NGINX
  • Create new Gradle task, zipStatic, to ZIP up the application’s static assets for deployment to NGINX
  • Add versioning scheme for build artifacts
  • Add context.xml file and MANIFEST.MF file to the WAR file
  • Add log4j syslog appender to send log entries to Logstash
  • Update versions of several dependencies, including Gradle to 2.6

Application Architecture

The Java Spring Music application stack contains the following technologies: Java, the Spring Framework, NGINX, Apache Tomcat, MongoDB, the ELK Stack, and Logspout.

The Spring Music web application’s static content will be hosted by NGINX for increased performance. The application’s WAR file will be hosted by Apache Tomcat. Requests for non-static content will be proxied through a single instance of NGINX on the front-end, to one of two load-balanced Tomcat instances on the back-end. NGINX will also be configured to allow browser caching of the static content, to further increase application performance. Reverse proxying and caching are configured through NGINX’s default.conf file’s server configuration section:

server {
  listen        80;
  server_name   localhost;

  location ~* \/assets\/(css|images|js|template)\/* {
    root          /usr/share/nginx/;
    expires       max;
    add_header    Pragma public;
    add_header    Cache-Control "public, must-revalidate, proxy-revalidate";
    add_header    Vary Accept-Encoding;
    access_log    off;
  }

The two Tomcat instances will be configured on NGINX, in a load-balancing pool, using NGINX’s default round-robin load-balancing algorithm. This is configured through NGINX’s default.conf file’s upstream configuration section:

upstream backend {
  server app01:8080;
  server app02:8080;
}
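
Once the environment is up (see Building the Environment below), a quick way to observe the round-robin behavior is to issue several HEAD requests against NGINX and watch the custom Upstream-Address response header alternate between the two Tomcat instances, as shown in the curl results later in this post. A rough sketch:

# print only the upstream Tomcat address for several consecutive requests
for i in {1..4}
do
  curl -sI --url $(docker-machine ip springmusic) | grep 'Upstream-Address'
done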

The Spring Music application can be run with MySQL, Postgres, Oracle, MongoDB, Redis, or H2, an in-memory Java SQL database. Given the choice of both SQL and NoSQL databases available for use with the Spring Music application, we will select MongoDB.

The Spring Music application, hosted by Tomcat, will store and modify record album data in a single instance of MongoDB. MongoDB will be populated with a collection of album data when the Spring Music application first creates the MongoDB database instance.

Lastly, the ELK Stack, with Logspout, will aggregate both Docker and Java Log4j log entries, providing debugging and analytics for our demonstration. I’ve used the same method for Docker and Java Log4j log entries, as detailed in this previous post.

Kibana Spring Music

Spring Music Environment

To build, deploy, and host the Java Spring Music application, we will use the following technologies: Gradle, git, GitHub, Travis CI, Oracle VirtualBox, Docker, Docker Compose, Docker Machine, Docker Hub, and optionally, Amazon Web Services (AWS).

All files necessary to build this project are stored in the garystafford/spring-music-docker repository on GitHub. The Spring Music source code and build artifacts are stored in a separate garystafford/spring-music repository, also on GitHub.

Build artifacts are automatically built by Travis CI when changes are checked into the garystafford/spring-music repository on GitHub. Travis CI then pushes the build artifacts back to a build-artifacts branch of that same project, overwriting the previous contents. The build-artifacts branch acts as a pseudo binary repository for the project. The .travis.yaml file, gradle.build file, and deploy.sh script handle these functions.

.travis.yaml file:

language: java
jdk: oraclejdk7
before_install:
- chmod +x gradlew
before_deploy:
- chmod ugo+x deploy.sh
script:
- bash ./gradlew clean warNoStatic warCopy zipGetVersion zipStatic
- bash ./deploy.sh
env:
  global:
  - GH_REF: github.com/garystafford/spring-music.git
  - secure: <secure hash here>

gradle.build file snippet:

// new Gradle build tasks

task warNoStatic(type: War) {
  // omit the version from the war file name
  version = ''
  exclude '**/assets/**'
  manifest {
    attributes 'Manifest-Version': '1.0',
      'Created-By': currentJvm,
      'Gradle-Version': GradleVersion.current().getVersion(),
      'Implementation-Title': archivesBaseName + '.war',
      'Implementation-Version': artifact_version,
      'Implementation-Vendor': 'Gary A. Stafford'
  }
}

task warCopy(type: Copy) {
  from 'build/libs'
  into 'build/distributions'
  include '**/*.war'
}

task zipGetVersion (type: Task) {
  ext.versionfile = 
    new File("${projectDir}/src/main/webapp/assets/buildinfo.properties")
  versionfile.text = 'build.version=' + artifact_version
}

task zipStatic(type: Zip) {
  from 'src/main/webapp/assets'
  appendix = 'static'
  version = ''
}

deploy.sh file:

#!/bin/bash

# reference: https://gist.github.com/domenic/ec8b0fc8ab45f39403dd

set -e # exit with nonzero exit code if anything fails

# go to the distributions directory and create a *new* Git repo
cd build/distributions && git init

# inside this git repo we'll pretend to be a new user
git config user.name "travis-ci"
git config user.email "auto-deploy@travis-ci.com"

# The first and only commit to this new Git repo contains all the
# files present with the commit message.
git add .
git commit -m "Deploy Travis CI build #${TRAVIS_BUILD_NUMBER} artifacts to GitHub"

# Force push from the current repo's master branch to the remote
# repo's build-artifacts branch. (All previous history on the build-artifacts
# branch will be lost, since we are overwriting it.) We redirect any output to
# /dev/null to hide any sensitive credential data that might otherwise be
# exposed. The environment variables are pre-configured on Travis CI.
git push --force --quiet "https://${GH_TOKEN}@${GH_REF}" master:build-artifacts > /dev/null 2>&1

Base Docker images, such as NGINX, Tomcat, and MongoDB, used to build the project’s images and subsequently the containers, are all pulled from Docker Hub.

The NGINX and Tomcat Dockerfiles pull the latest build artifacts down to build the project-specific versions of the NGINX and Tomcat Docker images used for this project. For example, the NGINX Dockerfile looks like:

# NGINX image with build artifact

FROM nginx:latest

MAINTAINER Gary A. Stafford <garystafford@rochester.rr.com>

ENV REFRESHED_AT 2015-09-20
ENV GITHUB_REPO https://github.com/garystafford/spring-music/raw/build-artifacts
ENV STATIC_FILE spring-music-static.zip

RUN apt-get update -y && \
  apt-get install wget unzip nano -y && \
  wget -O /tmp/${STATIC_FILE} ${GITHUB_REPO}/${STATIC_FILE} && \
  unzip /tmp/${STATIC_FILE} -d /usr/share/nginx/assets/

COPY default.conf /etc/nginx/conf.d/default.conf

Docker Machine builds a single VirtualBox VM. After building the VM, Docker Compose then builds and deploys (1) NGINX container, (2) load-balanced Tomcat containers, (1) MongoDB container, (1) ELK container, and (1) Logspout container, onto the VM. Docker Machine’s VirtualBox driver provides a basic solution that can be run locally for testing and development. The docker-compose.yml for the project is as follows:

proxy:
  build: nginx/
  ports:
   - "80:80"
  links:
   - app01
   - app02
  hostname: "proxy"

app01:
  build: tomcat/
  expose:
   - "8080"
  ports:
   - "8180:8080"
  links:
   - nosqldb
   - elk
  hostname: "app01"

app02:
  build: tomcat/
  expose:
   - "8080"
  ports:
   - "8280:8080"
  links:
   - nosqldb
   - elk
  hostname: "app02"

nosqldb:
  build: mongo/
  hostname: "nosqldb"
  volumes:
   - "/opt/mongodb:/data/db"

elk:
  build: elk/
  ports:
   - "8081:80"
   - "8082:9200"
  expose:
   - "5000/udp"

logspout:
  build: logspout/
  volumes:
   - "/var/run/docker.sock:/tmp/docker.sock"
  links:
   - elk
  ports:
   - "8083:80"
  environment:
   - ROUTE_URIS=logstash://elk:5000

Building the Environment

Before continuing, ensure you have nothing running on ports 80, 8080, 8081, 8082, and 8083. Also, make sure VirtualBox, Docker, Docker Compose, Docker Machine, cURL, and git are all pre-installed and running.

docker --version && 
docker-compose --version && 
docker-machine --version && 
echo "VirtualBox $(vboxmanage --version)" && 
curl --version && git --version
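
Also confirm that the host ports listed above are free; a rough sketch of one way to check (any output means something is already listening on that port):

for port in 80 8080 8081 8082 8083
do
  lsof -n -i :${port} | grep LISTEN
done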

All of the below commands may be executed with the following single command (sh ./build_project.sh). This is useful for working with Jenkins CI, ThoughtWorks Go, or similar CI tools. However, I suggest building the project step-by-step, as shown below, to better understand the process.

# clone project
git clone -b master \
  --single-branch https://github.com/garystafford/spring-music-docker.git && 
cd spring-music-docker

# build VM
docker-machine create --driver virtualbox springmusic --debug

# create directory to store mongo data on host
docker-machine ssh springmusic mkdir /opt/mongodb

# set new environment
docker-machine env springmusic && 
eval "$(docker-machine env springmusic)"

# build images and containers
docker-compose -f docker-compose.yml -p music up -d

# wait for container apps to start
sleep 15

# run quick test of project
for i in {1..10}
do
  curl -I --url $(docker-machine ip springmusic)
done

By simply changing the driver to AWS EC2 and providing your AWS credentials, the same environment can be built on AWS within a single EC2 instance. The ‘springmusic’ environment has been fully tested both locally with VirtualBox and on AWS.
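
For reference, a minimal sketch of the equivalent Docker Machine command using the amazonec2 driver (the values in angle brackets are placeholders; additional driver options, such as region and VPC settings, depend on your AWS account and the Docker Machine version):

# build the same environment on a single EC2 instance
docker-machine create --driver amazonec2 \
  --amazonec2-access-key <AWS_ACCESS_KEY_ID> \
  --amazonec2-secret-key <AWS_SECRET_ACCESS_KEY> \
  --amazonec2-vpc-id <VPC_ID> \
  springmusic-aws

eval "$(docker-machine env springmusic-aws)"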

Results
Resulting Docker images and containers:

gstafford@gstafford-X555LA:$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
music_proxy           latest              46af4c1ffee0        52 seconds ago       144.5 MB
music_logspout        latest              fe64597ab0c4        About a minute ago   24.36 MB
music_app02           latest              d935211139f6        2 minutes ago        370.1 MB
music_app01           latest              d935211139f6        2 minutes ago        370.1 MB
music_elk             latest              b03731595114        2 minutes ago        1.05 GB
gliderlabs/logspout   master              40a52d6ca462        14 hours ago         14.75 MB
willdurand/elk        latest              04cd7334eb5d        9 days ago           1.05 GB
tomcat                latest              6fe1972e6b08        10 days ago          347.7 MB
mongo                 latest              5c9464760d54        10 days ago          260.8 MB
nginx                 latest              cd3cf76a61ee        10 days ago          132.9 MB

gstafford@gstafford-X555LA:$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                                                  NAMES
facb6eddfb96        music_proxy         "nginx -g 'daemon off"   46 seconds ago       Up 46 seconds       0.0.0.0:80->80/tcp, 443/tcp                            music_proxy_1
abf9bb0821e8        music_app01         "catalina.sh run"        About a minute ago   Up About a minute   0.0.0.0:8180->8080/tcp                                 music_app01_1
e4c43ed84bed        music_logspout      "/bin/logspout"          About a minute ago   Up About a minute   8000/tcp, 0.0.0.0:8083->80/tcp                         music_logspout_1
eca9a3cec52f        music_app02         "catalina.sh run"        2 minutes ago        Up 2 minutes        0.0.0.0:8280->8080/tcp                                 music_app02_1
b7a7fd54575f        mongo:latest        "/entrypoint.sh mongo"   2 minutes ago        Up 2 minutes        27017/tcp                                              music_nosqldb_1
cbfe43800f3e        music_elk           "/usr/bin/supervisord"   2 minutes ago        Up 2 minutes        5000/0, 0.0.0.0:8081->80/tcp, 0.0.0.0:8082->9200/tcp   music_elk_1

Partial result of the curl test, calling NGINX. Note the two different upstream addresses for Tomcat. Also, note the sharp decrease in request times, due to caching.

HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Mon, 07 Sep 2015 17:56:11 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2090
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2090-1441648256000"
Last-Modified: Mon, 07 Sep 2015 17:50:56 GMT
Content-Language: en
Request-Time: 0.521
Upstream-Address: 172.17.0.121:8080
Upstream-Response-Time: 1441648570.774

HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Mon, 07 Sep 2015 17:56:11 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2090
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2090-1441648256000"
Last-Modified: Mon, 07 Sep 2015 17:50:56 GMT
Content-Language: en
Request-Time: 0.326
Upstream-Address: 172.17.0.123:8080
Upstream-Response-Time: 1441648571.506

HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Mon, 07 Sep 2015 17:56:12 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2090
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2090-1441648256000"
Last-Modified: Mon, 07 Sep 2015 17:50:56 GMT
Content-Language: en
Request-Time: 0.006
Upstream-Address: 172.17.0.121:8080
Upstream-Response-Time: 1441648572.050

HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Mon, 07 Sep 2015 17:56:12 GMT
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 2090
Connection: keep-alive
Accept-Ranges: bytes
ETag: W/"2090-1441648256000"
Last-Modified: Mon, 07 Sep 2015 17:50:56 GMT
Content-Language: en
Request-Time: 0.006
Upstream-Address: 172.17.0.123:8080
Upstream-Response-Time: 1441648572.266

Spring Music Application Links

Assuming the springmusic VM is running at 192.168.99.100, the environment exposes the following endpoints, based on the port mappings in the docker-compose.yml above:

  • Spring Music application: http://192.168.99.100
  • Tomcat app01: http://192.168.99.100:8180 and app02: http://192.168.99.100:8280
  • Kibana (ELK): http://192.168.99.100:8081
  • Elasticsearch: http://192.168.99.100:8082
  • Logspout: http://192.168.99.100:8083

* The Tomcat user name is admin and the password is t0mcat53rv3r.

Helpful Links



Continuous Integration and Delivery of Microservices using Jenkins CI, Docker Machine, and Docker Compose

Continuously integrate, deploy, and test a RestExpress microservices-based, multi-container, Java EE application in a virtual test environment, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, and VirtualBox.

Docker Machine with Ambassador

Introduction

In the last post, we learned how to use Jenkins CI, Maven, and Docker Compose to take a set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated Docker containers. We built the microservices, Docker images, and Docker containers. We deployed the containers directly onto the Jenkins CI Server machine. Finally, we performed integration tests to ensure the services were functioning as expected, within the containers.

In a more mature continuous delivery model, we would have deployed the running containers to a fresh ‘production-like’ environment to be more accurately tested, not to the Jenkins CI Server host machine. In this post, we will learn how to use the recently released Docker Machine to create a fresh test environment in which to build and host our project’s ten Docker containers. We will couple Docker Machine with Oracle’s VirtualBox, Jenkins CI, and Docker Compose to automatically build and test the services within their containers, within the virtual ‘test’ environment.

Update: All code for this post is available on GitHub, release version v2.1.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v2.1.0’ command).

Docker Machine

If you recall in the last post, after compiling and packaging the microservices, Jenkins was used to deploy the build artifacts to the Virtual-Vehicles Docker GitHub project, as shown below.

Build and Deploy Results

We then used Jenkins, with the Docker CLI and the Docker Compose CLI, to automatically build and test the images and containers. This step will not change; however, we will first use Docker Machine to automatically build a test environment, in which we will then build the Docker images and containers.

Docker Machine with Ambassador

I’ve copied and modified the second Jenkins job we used in the last post, as shown below. The new job is titled, ‘Virtual-Vehicles_Docker_Machine’. This will replace the previous job, ‘Virtual-Vehicles_Docker_Compose’.

Jenkins CI Jobs Machine

The first step in the new Jenkins job is to clone the Virtual-Vehicles Docker GitHub repository.

Jenkins CI Machine Config 1

Next, Jenkins runs a bash script to automatically build the test VM with Docker Machine, build the Docker images and containers with Docker Compose within the new VM, and finally test the services.

Jenkins CI Machine Config 2

The bash script executed by Jenkins contains the following commands:

# optional: record current versions of docker apps with each build
docker -v && docker-compose -v && docker-machine -v

# set-up: clean up any previous machine failures
docker-machine stop test || echo "nothing to stop" && \
docker-machine rm test   || echo "nothing to remove"

# use docker-machine to create and configure 'test' environment
# add a -D (debug) if having issues
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"

# use docker-compose to pull and build new images and containers
docker-compose -p jenkins up -d

# optional: list machines, images, and containers
docker-machine ls && docker images && docker ps -a

# wait for containers to fully start before tests fire up
sleep 30

# test the services
sh tests.sh $(docker-machine ip test)

# tear down: stop and remove 'test' environment
docker-machine stop test && docker-machine rm test

As the above script shows, first Jenkins uses the Docker Machine CLI to build and activate the ‘test’ virtual machine, using the VirtualBox driver. As of docker-machine version 0.3.0, the VirtualBox driver requires at least VirtualBox 4.3.28 to be installed.

docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"

Once this step is complete you will have the following VirtualBox VM created, running, and active.

NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376

Next, Jenkins uses the Docker Compose CLI to execute the project’s Docker Compose YAML file.

docker-compose -p jenkins up -d

The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.

########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker  
# description: Pulls (5) images, builds (5) images, and builds (11) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p <your_project_name_here> up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
   - "8500:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  links:
   - graphite
   - mongoAuthentication
   - "ambassador:nginx"
  expose:
   - "8587"

valet:
  build: valet/
  links:
   - graphite
   - mongoValet
   - "ambassador:nginx"
  expose:
   - "8585"

maintenance:
  build: maintenance/
  links:
   - graphite
   - mongoMaintenance
   - "ambassador:nginx"
  expose:
   - "8583"

vehicle:
  build: vehicle/
  links:
   - graphite
   - mongoVehicle
   - "ambassador:nginx"
  expose:
   - "8581"

nginx:
  build: nginx/
  ports:
   - "80:80"
  links:
   - "ambassador:vehicle"
   - "ambassador:valet"
   - "ambassador:authentication"
   - "ambassador:maintenance"

ambassador:
  image: cpuguy83/docker-grand-ambassador
  volumes:
   - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name jenkins_nginx_1 -name jenkins_authentication_1 -name jenkins_maintenance_1 -name jenkins_valet_1 -name jenkins_vehicle_1"

Running the docker-compose.yaml file will pull these (5) Docker Hub images:

REPOSITORY                           TAG          IMAGE ID
==========                           ===          ========
java                                 8u45-jdk     1f80eb0f8128
nginx                                latest       319d2015d149
mongo                                latest       66b43e3cae49
hopsoft/graphite-statsd              latest       b03e373279e8
cpuguy83/docker-grand-ambassador     latest       c635b1699f78

And, build these (5) Docker images from Dockerfiles:

REPOSITORY                  TAG          IMAGE ID
==========                  ===          ========
jenkins_nginx               latest       0b53a9adb296
jenkins_vehicle             latest       d80f79e605f4
jenkins_valet               latest       cbe8bdf909b8
jenkins_maintenance         latest       15b8a94c00f4
jenkins_authentication      latest       ef0345369079

And, build these (11) Docker containers from the corresponding images:

CONTAINER ID     IMAGE                                NAME
============     =====                                ====
17992acc6542     jenkins_nginx                        jenkins_nginx_1
bcbb2a4b1a7d     jenkins_vehicle                      jenkins_vehicle_1
4ac1ac69f230     mongo:latest                         jenkins_mongoVehicle_1
bcc8b9454103     jenkins_valet                        jenkins_valet_1
7c1794ca7b8c     jenkins_maintenance                  jenkins_maintenance_1
2d0e117fa5fb     jenkins_authentication               jenkins_authentication_1
d9146a1b1d89     hopsoft/graphite-statsd:latest       jenkins_graphite_1
56b34cee9cf3     cpuguy83/docker-grand-ambassador     jenkins_ambassador_1
a72199d51851     mongo:latest                         jenkins_mongoAuthentication_1
307cb2c01cc4     mongo:latest                         jenkins_mongoMaintenance_1
4e0807431479     mongo:latest                         jenkins_mongoValet_1

Since we are connected to the brand new Docker Machine ‘test’ VM, there are no locally cached Docker images. All images required to build the containers must be pulled from Docker Hub. The build time will be 3-4x as long as the last post’s build, which used the cached Docker images on the Jenkins CI machine.

Integration Testing

As in the last post, once the containers are built and configured, we run a series of expanded integration tests to confirm the containers and services are working. One difference: this time, we will pass a parameter to the test bash script:

sh tests.sh $(docker-machine ip test)

The parameter is the hostname used in the test’s RESTful service calls. The parameter, $(docker-machine ip test), is translated to the IP address of the ‘test’ VM, in our example, 192.168.99.100. If a parameter is not provided, the test script’s hostname variable will use the default value of localhost, ‘hostname=${1-'localhost'}‘.
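
The ${1-'localhost'} expression is standard shell default-value parameter expansion; a minimal illustration:

# use the first script argument if provided, otherwise fall back to 'localhost'
hostname=${1-'localhost'}
echo "testing against host: ${hostname}"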

Another change since the last post: the project now uses the open-source version of Nginx, the free, high-performance HTTP server and reverse proxy, as a pseudo-API gateway. Instead of calling each microservice directly on its individual port (e.g., port 8581 for the Vehicle microservice), all traffic is sent through Nginx on the default HTTP port 80, for example:

http://192.168.99.100/vehicles/utils/ping.json
http://192.168.99.100/jwts?apiKey=Z1nXG8JGKwvGlzQgPLwQdndW&secret=ODc4OGNiNjE5ZmI
http://192.168.99.100/vehicles/558f3042e4b0e562c03329ad

Internal traffic between the microservices and MongoDB, and between the microservices and Graphite, is still direct, using Docker container linking. Traffic between the microservices and Nginx, in both directions, is handled by an ambassador container, a common pattern. Nginx acts as a reverse proxy for the microservices. Using Nginx brings us closer to a more production-like environment for testing the services.

#!/bin/sh

########################################################################
#
# title:          Virtual-Vehicles Project Integration Tests
# author:         Gary A. Stafford (https://programmaticponderings.com)
# url:            https://github.com/garystafford/virtual-vehicles-docker  
# description:    Performs integration tests on the Virtual-Vehicles
#                 microservices
# to run:         sh tests.sh
# docker-machine: sh tests.sh $(docker-machine ip test)
#
########################################################################

echo --- Integration Tests ---
echo

### VARIABLES ###
hostname=${1-'localhost'} # use input param or default to localhost
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized
make="Test"
model="Foo"

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}
echo make: ${make}
echo model: ${model}
echo


### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
--url "${url}" \
| grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| sed -e 's/^"//'  -e 's/"$//')
echo apiKey: ${apiKey}
echo


echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
| sed -e 's/^"//'  -e 's/"$//')
echo jwt: ${jwt}
echo


echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"year\": 2015,
    \"make\": \"${make}\",
    \"model\": \"${model}\",
    \"color\": \"White\",
    \"type\": \"Sedan\",
    \"mileage\": 250
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}/vehicles?filter=make::${make}|model::${model}&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| tail -1 \
| sed -e 's/^"//'  -e 's/"$//')
echo vehicle id: ${id}
echo


echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "TEST: POST request should return a new maintenance record in the response body with an 'id'"
url="http://${hostname}/maintenances"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"vehicleId\": \"${id}\",
    \"serviceDateTime\": \"2015-27-00T15:00:00.400Z\",
    \"mileage\": 1000,
    \"type\": \"Test Maintenance\",
    \"notes\": \"This is a test notes.\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "TEST: POST request should return a new valet transaction in the response body with an 'id'"
url="http://${hostname}/valets"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"vehicleId\": \"${id}\",
    \"dateTimeIn\": \"2015-27-00T15:00:00.400Z\",
    \"parkingLot\": \"Test Parking Ramp\",
    \"parkingSpot\": 10,
    \"notes\": \"This is a test notes.\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

Tear Down

In true continuous integration fashion, once the integration tests have completed, we tear down the project by removing the VirtualBox ‘test’ VM. This also removes all images and containers.

docker-machine stop test && \
docker-machine rm test

Jenkins CI Console Output

Below is an abridged sample of what the Jenkins CI console output will look like from a successful ‘build’.

Started by user anonymous
Building in workspace /var/lib/jenkins/jobs/Virtual-Vehicles_Docker_Machine/workspace
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/garystafford/virtual-vehicles-docker.git # timeout=10
Fetching upstream changes from https://github.com/garystafford/virtual-vehicles-docker.git
> git --version # timeout=10
using GIT_SSH to set credentials
using .gitcredentials to set credentials
> git config --local credential.helper store --file=/tmp/git7588068314920923143.credentials # timeout=10
> git -c core.askpass=true fetch --tags --progress https://github.com/garystafford/virtual-vehicles-docker.git +refs/heads/*:refs/remotes/origin/*
> git config --local --remove-section credential # timeout=10
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision f473249f0f70290b75cb320909af1f57cdaf2aa5 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f f473249f0f70290b75cb320909af1f57cdaf2aa5
> git rev-list f473249f0f70290b75cb320909af1f57cdaf2aa5 # timeout=10
[workspace] $ /bin/sh -xe /tmp/hudson8587699987350884629.sh

+ docker -v
Docker version 1.7.0, build 0baf609
+ docker-compose -v
docker-compose version: 1.3.1
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
+ docker-machine -v
docker-machine version 0.3.0 (0a251fe)

+ docker-machine stop test
+ docker-machine rm test
Successfully removed test

+ docker-machine create --driver virtualbox test
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env test
+ docker-machine env test
+ eval export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/var/lib/jenkins/.docker/machine/machines/test"
export DOCKER_MACHINE_NAME="test"
# Run this command to configure your shell:
# eval "$(docker-machine env test)"
+ export DOCKER_TLS_VERIFY=1
+ export DOCKER_HOST=tcp://192.168.99.100:2376
+ export DOCKER_CERT_PATH=/var/lib/jenkins/.docker/machine/machines/test
+ export DOCKER_MACHINE_NAME=test
+ docker-compose -p jenkins up -d
Pulling mongoValet (mongo:latest)...
latest: Pulling from mongo

...Abridged output...

+ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
+ docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
jenkins_vehicle                    latest              fdd7f9d02ff7        2 seconds ago       837.1 MB
jenkins_valet                      latest              8a592e0fe69a        4 seconds ago       837.1 MB
jenkins_maintenance                latest              5a4a44e136e5        5 seconds ago       837.1 MB
jenkins_authentication             latest              e521e067a701        7 seconds ago       838.7 MB
jenkins_nginx                      latest              085d183df8b4        25 minutes ago      132.8 MB
java                               8u45-jdk            1f80eb0f8128        12 days ago         816.4 MB
nginx                              latest              319d2015d149        12 days ago         132.8 MB
mongo                              latest              66b43e3cae49        12 days ago         260.8 MB
hopsoft/graphite-statsd            latest              b03e373279e8        4 weeks ago         740 MB
cpuguy83/docker-grand-ambassador   latest              c635b1699f78        5 months ago        525.7 MB

+ docker ps -a
CONTAINER ID        IMAGE                              COMMAND                CREATED             STATUS              PORTS                                      NAMES
4ea39fa187bf        jenkins_vehicle                    "java -classpath .:c   2 seconds ago       Up 1 seconds        8581/tcp                                   jenkins_vehicle_1
b248a836546b        mongo:latest                       "/entrypoint.sh mong   3 seconds ago       Up 3 seconds        27017/tcp                                  jenkins_mongoVehicle_1
0c94e6409afc        jenkins_valet                      "java -classpath .:c   4 seconds ago       Up 3 seconds        8585/tcp                                   jenkins_valet_1
657f8432004b        jenkins_maintenance                "java -classpath .:c   5 seconds ago       Up 5 seconds        8583/tcp                                   jenkins_maintenance_1
8ff6de1208e3        jenkins_authentication             "java -classpath .:c   7 seconds ago       Up 6 seconds        8587/tcp                                   jenkins_authentication_1
c799d5f34a1c        hopsoft/graphite-statsd:latest     "/sbin/my_init"        12 minutes ago      Up 12 minutes       2003/tcp, 8125/udp, 0.0.0.0:8500->80/tcp   jenkins_graphite_1
040872881b25        jenkins_nginx                      "nginx -g 'daemon of   25 minutes ago      Up 25 minutes       0.0.0.0:80->80/tcp, 443/tcp                jenkins_nginx_1
c6a2dc726abc        mongo:latest                       "/entrypoint.sh mong   26 minutes ago      Up 26 minutes       27017/tcp                                  jenkins_mongoAuthentication_1
db22a44239f4        mongo:latest                       "/entrypoint.sh mong   26 minutes ago      Up 26 minutes       27017/tcp                                  jenkins_mongoMaintenance_1
d5fd655474ba        cpuguy83/docker-grand-ambassador   "/usr/bin/grand-amba   26 minutes ago      Up 26 minutes                                                  jenkins_ambassador_1
2b46bd6f8cfb        mongo:latest                       "/entrypoint.sh mong   31 minutes ago      Up 31 minutes       27017/tcp                                  jenkins_mongoValet_1

+ sleep 30

+ docker-machine ip test
+ sh tests.sh 192.168.99.100

--- Integration Tests ---

hostname: 192.168.99.100
application: Test API Client 1435585062
secret: NGM5OTI5ODAxMTZ
make: Test
model: Foo

TEST: GET request should return 'true' in the response body
http://192.168.99.100/vehicles/utils/ping.json
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
RESULT: pass

TEST: POST request should return a new client in the response body with an 'id'
http://192.168.99.100/clients
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84    847    225 --:--:-- --:--:-- --:--:--   849
RESULT: pass

SETUP: Get the new client's apiKey for next test
http://192.168.99.100/clients
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84  20482   5461 --:--:-- --:--:-- --:--:-- 21000
apiKey: sv1CA9NdhmXh72NrGKBN3Abb

TEST: GET request should return a new jwt in the response body
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0    686      0 --:--:-- --:--:-- --:--:--   687
RESULT: pass

SETUP: Get a new jwt using the new client for the next test
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0  16843      0 --:--:-- --:--:-- --:--:-- 17076
jwt: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhcGkudmlydHVhbC12ZWhpY2xlcy5jb20iLCJhcGlLZXkiOiJzdjFDQTlOZGhtWGg3Mk5yR0tCTjNBYmIiLCJleHAiOjE0MzU2MjEwNjMsImFpdCI6MTQzNTU4NTA2M30.WVlhIhUcTz6bt3iMVr6MWCPIDd6P0aDZHl_iUd6AgrM

TEST: POST request should return a new vehicle in the response body with an 'id'
http://192.168.99.100/vehicles
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   123    0     0  100   123      0    612 --:--:-- --:--:-- --:--:--   611
100   419    0   296  100   123    649    270 --:--:-- --:--:-- --:--:--   649
RESULT: pass

SETUP: Get id from new vehicle for the next test
http://192.168.99.100/vehicles?filter=make::Test|model::Foo&limit=1
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   377    0   377    0     0   5564      0 --:--:-- --:--:-- --:--:--  5626
vehicle id: 55914a28e4b04658471dc03a

TEST: GET request should return a vehicle in the response body with the requested 'id'
http://192.168.99.100/vehicles/55914a28e4b04658471dc03a
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   296    0   296    0     0   7051      0 --:--:-- --:--:-- --:--:--  7219
RESULT: pass

TEST: POST request should return a new maintenance record in the response body with an 'id'
http://192.168.99.100/maintenances
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
RESULT: pass

TEST: POST request should return a new valet transaction in the response body with an 'id'
http://192.168.99.100/valets
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   561    0   368  100   193    514    269 --:--:-- --:--:-- --:--:--   514
RESULT: pass

+ docker-machine stop test
+ docker-machine rm test
Successfully removed test

Finished: SUCCESS

Graphite and Statsd

If you’ve chosen to build the Virtual-Vehicles Docker project outside of Jenkins CI, then in addition to running the test script and using applications like Postman to test the Virtual-Vehicles RESTful API, you may also use Graphite and StatsD. RestExpress comes fully configured out of the box with Graphite integration, through the Metrics plugin. The Virtual-Vehicles RESTful API example is configured to use port 8500 to access the Graphite UI. The Virtual-Vehicles RESTful API example uses the hopsoft/graphite-statsd Docker image to build the Graphite/StatsD Docker container.

Graphite Dashboard

The Complete Process

The below diagram shows the entire Virtual-Vehicles continuous integration and delivery process, start to finish, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, RestExpress, and VirtualBox.

Docker Machine Full Process



Continuous Integration and Delivery of Microservices using Jenkins CI, Maven, and Docker Compose

Continuously build, test, package, and deploy a microservices-based, multi-container, Java EE application using Jenkins CI, Maven, Docker, and Docker Compose.

IntroDockerCompose

Previous Posts

In the previous 3-part series, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB, we developed a set of Java EE-based microservices, which formed the Virtual-Vehicles REST API. In Part One of this series, we introduced the concepts of a RESTful API and microservices, using the vehicle-themed Virtual-Vehicles REST API example. In Part Two, we gained a basic understanding of how RestExpress works to build microservices, and discovered how to get the microservices example up and running. Lastly, in Part Three, we explored how to use tools such as Postman, along with the API documentation, to test our microservices.

Introduction

In this post, we will demonstrate how to use Jenkins CI, Maven, and Docker Compose to take our set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated and orchestrated Docker containers. We will build and test the microservices, Docker images, and Docker containers. We will deploy the containers and perform integration tests to ensure the services are functioning as expected, within the containers. The milestones in our process will be:

  1. Continuous Integration: Using Jenkins CI and Maven, automatically compile, test, and package the individual microservices
  2. Deployment: Using Jenkins, automatically deploy the build artifacts to the new Virtual-Vehicles Docker project
  3. Containerization: Using Jenkins and Docker Compose, automatically build the Docker images and containers from the build artifacts and a set of Dockerfiles
  4. Integration Testing: Using Jenkins, perform automated integration tests on the containerized services
  5. Tear Down: Using Jenkins, automatically stop and remove the containers and images

For brevity, we will deploy the containers directly to the Jenkins CI Server, where they were built. In an upcoming post, I will demonstrate how to use the recently released Docker Machine to host the containers within an isolated VM.

Note: All code for this post is available on GitHub, release version v1.0.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v1.0.0’ command).

Build the Microservices

In order to host the Virtual-Vehicles microservices, we must first compile the source code and produce build artifacts. In the case of the Virtual-Vehicles example, the build artifacts are a JAR file and at least one environment-specific properties file. In Part Two of our previous series, we compiled and produced JAR files for our microservices from the command line using Maven.

Build and Deploy

To automatically build our Maven-based microservices project in this post, we will use Jenkins CI and the Jenkins Maven Project Plugin. The Virtual-Vehicles microservices are bundled together into what Maven considers a multi-module project, which is defined by a parent POM referring to one or more sub-modules. Using the concept of project inheritance, Jenkins will compile each of the four microservices from the project’s single parent POM file. Note the four modules at the end of the pom.xml below, corresponding to each microservice.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <name>Virtual-Vehicles API</name>
    <description>Virtual-Vehicles API
        https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Example_3
    </description>
    <url>https://github.com/garystafford/virtual-vehicle-demo</url>
    <groupId>com.example</groupId>
    <artifactId>Virtual-Vehicles-API</artifactId>
    <version>1</version>
    <packaging>pom</packaging>

    <modules>
        <module>Maintenance</module>
        <module>Valet</module>
        <module>Vehicle</module>
        <module>Authentication</module>
    </modules>
</project>

Below is the view of the four individual Maven modules, within the single Jenkins Maven job.

Maven Modules In Jenkins

Each microservice module contains a Maven POM file. The POM files use the Apache Maven Compiler Plugin to compile code, and the Apache Maven Shade Plugin to create ‘uber-jars’ from the compiled code. The Shade plugin provides the capability to package the artifact in an uber-jar, including its dependencies. This allows us to independently host each service in its own container, without external dependencies. Lastly, using the Apache Maven Resources Plugin, Maven copies the environment properties files from the source directory to the ‘target’ directory, which contains the JAR file. To accomplish these Maven tasks, all Jenkins needs to do is execute a series of Maven life-cycle goals: ‘clean install package validate‘.
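
Run from a workstation rather than from Jenkins, the equivalent invocation against the parent POM would look something like this (a sketch mirroring the goal list above; Jenkins drives the same goals through the Maven Project Plugin):

# build all four microservice modules from the single parent POM
mvn clean install package validate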

Once the code is compiled and packaged into uber-jars, Jenkins uses the Artifact Deployer Plugin to deploy the build artifacts from Jenkins’ workspace to a remote location. In our example, we will copy the artifacts to a second GitHub project, from which we will containerize our microservices.

Shown below are the two Jenkins jobs. The first one compiles, packages, and deploys the build artifacts. The second job containerizes the services, databases, and monitoring application.

Jenkins CI Main Page

Shown below are two screen grabs showing how we clone the Virtual-Vehicles GitHub repository and build the project using the main parent pom.xml file. Building the parent POM, in-turn builds all the microservice modules, using their POM files.

Build and Deploy Config 1

Build and Deploy Config 2

Deploy Build Artifacts

Once we have successfully compiled, tested (if we had unit tests with RestExpress), and packaged the build artifacts as uber-jars, we deploy each set of build artifacts to a subfolder within the Virtual-Vehicles Docker GitHub project, using Jenkins’ Artifact Deployer Plugin. Shown below is the deployment configuration for just the Vehicles microservice. This deployment pattern is repeated for each service, within the Jenkins job configuration.

Build and Deploy Config 3

The Jenkins’ Artifact Deployer Plugin also provides the convenient ability to view and to redeploy the artifacts. Below, you see a list of the microservice artifacts deployed to the Docker project by Jenkins.

Build and Deploy Results

Build and Compose the Containers

IntroDockerCompose

The second Jenkins job clones the Virtual-Vehicles Docker GitHub repository.

Docker Compose Config 1

The second Jenkins job executes commands from the shell prompt. The first set of commands uses the Docker CLI to remove any existing images and containers, which might have been left over from previous job failures. The second set of commands uses the Docker Compose CLI to execute the project’s Docker Compose YAML file. The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.

Docker Compose Config 2

# remove all images and containers from this build
docker ps -a --no-trunc  | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc  | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
# set DOCKER_HOST environment variable
export DOCKER_HOST=tcp://localhost:4243

# record installed version of Docker and Maven with each build
mvn --version && \
docker --version && \
docker-compose --version

# use docker-compose to build new images and containers
docker-compose -p jenkins up -d

# list virtual-vehicles related images
docker images | grep 'jenkins' | awk '{print $0}'

# list all containers
docker ps -a | grep 'jenkins\|mongo_\|graphite' | awk '{print $0}'

########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker  
# description: Builds (4) images, pulls (2) images, and builds (9) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p virtualvehicles up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
   - "8481:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  ports:
   - "8587:8587"
  links:
   - graphite
   - mongoAuthentication

valet:
  build: valet/
  ports:
   - "8585:8585"
  links:
   - graphite
   - mongoValet
   - authentication

maintenance:
  build: maintenance/
  ports:
   - "8583:8583"
  links:
   - graphite
   - mongoMaintenance
   - authentication

vehicle:
  build: vehicle/
  ports:
   - "8581:8581"
  links:
   - graphite
   - mongoVehicle
   - authentication

Running the docker-compose.yaml file produces the following images:

REPOSITORY                TAG        IMAGE ID
==========                ===        ========
jenkins_vehicle           latest     a6ea4dfe7cf5
jenkins_valet             latest     162d3102d43c
jenkins_maintenance       latest     0b6f530cc968
jenkins_authentication    latest     45b50487155e

And, these containers:

CONTAINER ID     IMAGE                              NAME
============     =====                              ====
2b4d5a918f1f     jenkins_vehicle                    jenkins_vehicle_1
492fbd88d267     mongo:latest                       jenkins_mongoVehicle_1
01f410bb1133     jenkins_valet                      jenkins_valet_1
6a63a664c335     jenkins_maintenance                jenkins_maintenance_1
00babf484cf7     jenkins_authentication             jenkins_authentication_1
548a31034c1e     hopsoft/graphite-statsd:latest     jenkins_graphite_1
cdc18bbb51b4     mongo:latest                       jenkins_mongoAuthentication_1
6be5c0558e92     mongo:latest                       jenkins_mongoMaintenance_1
8b71d50a4b4d     mongo:latest                       jenkins_mongoValet_1

Integration Testing

Once the containers have been successfully built and configured, we run a series of integration tests to confirm the services are up and running. We refer to these tests as integration tests because they test the interaction of multiple components. Integration tests were covered in the last post, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 3.

Note the short pause I have inserted before running the tests. Docker Compose does an excellent job of accounting for the required start-up order of the containers to avoid race conditions (see my previous post). However, depending on the speed of the host box, there is still a start-up period for the container’s processes to be up, running, and ready to receive traffic. Apache Log4j 2 and MongoDB startup, in particular, take extra time. I’ve seen the containers take as long as 1-2 minutes on a slow box to fully start. Without the pause, the tests fail with various errors, since the container’s processes are not all running.

Docker Compose Config 3

sleep 15
sh tests.sh -v
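
If the fixed pause proves brittle on a slower host, one alternative (a sketch, not part of the Jenkins job as configured) is to poll one of the services’ ping endpoints until it responds before starting the tests:

# wait up to roughly two minutes for the Vehicle service to respond, then run the tests
for i in $(seq 1 24)
do
  curl -sf http://localhost:8581/vehicles/utils/ping.json > /dev/null && break
  sleep 5
done
sh tests.sh -v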

The bash-based tests below just scratch the surface as a complete set of integration tests. However, they demonstrate an effective multi-stage testing pattern for handling the complex nature of RESTful service request requirements. The tests build upon each other. After setting up some variables, the tests register a new API client. Then, they use the new client’s API key to obtain a JWT. The tests then use the JWT to authenticate themselves and create a new vehicle. Finally, they use the new vehicle’s id and the JWT to verify the existence of the new vehicle.

Although some may consider testing with bash somewhat primitive, the script demonstrates how effectively curl, grep, sed, and awk, along with regular expressions, can test our RESTful services.

#!/bin/sh

########################################################################
#
# title:       Virtual-Vehicles Project Integration Tests
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker  
# description: Performs integration tests on the Virtual-Vehicles
#              microservices
# to run:      sh tests.sh -v
#
########################################################################

echo --- Integration Tests ---

### VARIABLES ###
hostname="localhost"
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}


### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}:8581/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
--url "${url}" \
| grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"


echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}:8587/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"


echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}:8587/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| sed -e 's/^"//'  -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"


echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
| sed -e 's/^"//'  -e 's/"$//')
echo jwt: ${jwt}


echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}:8581/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d '{
    "year": 2015,
    "make": "Test",
    "model": "Foo",
    "color": "White",
    "type": "Sedan",
    "mileage": 250
}' --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"


echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}:8581/vehicles?filter=make::Test|model::Foo&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| tail -1 \
| sed -e 's/^"//'  -e 's/"$//')
echo vehicle id: ${id}


echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}:8581/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

Since our tests are just a bash script, they can also be run separately from the command line, as in the screen grab below. The output, except for the colored text, is identical to what appears in the Jenkins console output.

Running Integration Tests

Tear Down

Once the integration tests have completed, we ‘tear down’ the project by removing the Virtual-Vehicles images and containers. We simply repeat the first commands we ran at the start of the Jenkins build phase. You could choose to remove the tear-down step, and use this job as a way to simply build and start your multi-container application.

# remove all images and containers from this build
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs --no-run-if-empty docker stop && \
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs --no-run-if-empty docker rmi

The Complete Process

The diagram below shows the entire process, start to finish.

Full Process


Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 3

Develop a well-architected and well-documented REST API, built on a tightly integrated collection of Java EE-based microservices.

Virtual-Vehicles Architecture

Note: All code available on GitHub. For the version of the code that matches the details in this blog post, check out the master branch, v1.0.0 tag (after running git clone …, run a git checkout tags/v1.0.0 command).

Previous Posts

In Part One of this series, we introduced the microservices-based Virtual-Vehicles REST API example. The vehicle-themed Virtual-Vehicles microservices offer a comprehensive set of functionality, through a REST API, to application developers. In Part Two, we installed a copy of the Virtual-Vehicles project from GitHub, gained a basic understanding of how RestExpress works, and discovered how to get the Virtual-Vehicles microservices up and running.

Part Three

In part three of this series, we will take the Virtual-Vehicles for a test drive (get it? maybe it was funnier the first time…). There are several tools we can use to test the Virtual-Vehicles API. One of my favorite tools is Postman. We will explore how to use Postman, along with the Virtual-Vehicles API documentation, to test the Virtual-Vehicles microservices’ endpoints, which compose the Virtual-Vehicles API.

Testing the API

There are three categories of tools available to test RESTful APIs: GUI-based applications, command line tools, and testing frameworks. Postman, Advanced REST Client, REST Console, and SmartBear’s SoapUI and SoapUI NG Pro are examples of GUI-based applications designed specifically to test RESTful APIs. cURL and GNU Wget are two examples of command line tools which, among other capabilities, can test APIs. Lastly, JUnit is an example of a testing framework that can be used to test a RESTful API; surprisingly to some, JUnit is not limited to unit testing. Each category of testing tool has its pros and cons, depending on your testing needs. We will explore all of these categories in this post as we test the Virtual-Vehicles REST API.

JUnit

JUnit is probably the best known of all Java unit testing frameworks. JUnit’s website describes JUnit as ‘a simple, open source framework to write and run repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks.’ Most Java developers turn to JUnit for unit testing. However, JUnit is capable of other forms of testing, including integration testing. In his post, ‘Unit Testing with JUnit – Tutorial’, Lars Vogel states ‘an integration test has the target to test the behavior of a component or the integration between a set of components. The term functional test is sometimes used as a synonym for integration test. This kind of tests allow you to translate your user stories into a test suite, i.e., the test would resemble an expected user interaction with the application.’

Testing the Virtual-Vehicles RESTful API’s operations with JUnit would be considered integration (functional) testing. At a minimum, to complete a request, we call one microservice, which in turn authenticates the JWT by calling another microservice. If authenticated, the first microservice makes a request to its MongoDB database. As Vogel states, whereas a unit test targets a small unit of code, such as a method, the request/response operation is an integration between a set of components; testing a single API call requires several dependencies.

The simplest example of testing the Virtual-Vehicles API with JUnit would be to test an HTTP GET request that returns a single instance of a vehicle. Such a test depends on several helper methods (not included, for brevity). To request the vehicle, assuming we already have a registered client, we need a valid JWT. We also need a valid vehicle ObjectId. To obtain these two pieces of data, the test calls helper methods, which in turn make the necessary requests to retrieve a JWT and a vehicle ObjectId.

Below are the results of such a test, run in NetBeans IDE, using the built-in support for JUnit.

JUnit Test Results

JUnit can also be run from the command line using the Maven goal, surefire:test:

Running JUnit from Command Line
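
For reference, the equivalent terminal commands might look like the following; the single test class name shown here is only a hypothetical example:

# compile the tests (if needed), then run the Surefire test goal
mvn test-compile surefire:test

# or run a single, hypothetical test class
mvn -Dtest=VehicleIntegrationTest surefire:test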

cURL

One of the best-known command line tools for performing all types of operations centered around calling a URL is cURL. According to their website, ‘curl is a command line tool and library for transferring data with URL syntax, supporting…HTTP, HTTPS…curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, HTTP/2, cookies, user+password authentication (Basic, Plain, Digest, CRAM-MD5, NTLM, Negotiate, and Kerberos), file transfer resume, proxy tunneling and more.’ I prefer the website’s briefer description: cURL ‘groks those URLs’.

Using cURL, we could make an HTTP PUT request to the Vehicle microservice’s /vehicles/{oid}.{format} endpoint. With cURL, we have the ability to add the JWT-based Authorization header and the raw request body, containing the modified vehicle object. Below is an example of that cURL command, which can be run from a terminal prompt.
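
The sketch below approximates that command; the hostname, port, vehicle ObjectId, JWT, and vehicle attributes are placeholders, supplied with real values at runtime:

# update an existing vehicle (HTTP PUT), authenticating with the JWT obtained earlier
curl -X PUT \
-H "Content-Type: application/json; charset=UTF-8" \
-H "Authorization: Bearer ${jwt}" \
-d '{
    "year": 2015,
    "make": "Test",
    "model": "Foo",
    "color": "Red",
    "type": "Sedan",
    "mileage": 2500
}' --url "http://localhost:8581/vehicles/${id}.json"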

The response body contains the expected modified vehicle object in JSON-format, along with a 201 Created response status.

cURL  HTTP PUT Vehicle

The cURL commands may be incorporated into many types of automated testing processes. These might be as simple as a bash script. The script could run a series of automated tests, including the following: register an API client, use the API key to create a JWT, use the JWT to create a new vehicle, use the new vehicle’s ObjectId to modify that same vehicle, delete that vehicle, confirm the vehicle is removed using the count operation, and return a test results report to the user.

cURL Commands from Chrome
Quick tip: instead of hand-coding complex cURL commands, containing form data, URL parameters, and headers, use Chrome. First, open the Chrome Developer Tools (F12). Next, using the Postman – REST Client for Chrome, available in the Chrome Web Store, execute your HTTP request. Finally, in the ‘Network’ tab of the Developer Tools, find and right-click on the request and select ‘Copy as cURL’. You now have a complete cURL command equivalent of your Postman request, which you can paste directly into the command line or insert into a script. Below is an example of using the Postman – REST Client for Chrome to generate a cURL command.

Using Postman in Chrome to get cURL
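
For comparison, a Chrome-generated command typically looks something like the following; every header and value in this sketch is a stand-in, and the exact set of headers captured will vary:

curl 'http://localhost:8581/vehicles/54d8e5f1e4b0a123c4d5e6f7.json' -X PUT \
-H 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.sample.payload' \
-H 'Accept: application/json, text/plain, */*' \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Accept-Language: en-US,en;q=0.8' \
-H 'User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0 Safari/537.36' \
-H 'Content-Type: application/json' \
-H 'Cache-Control: no-cache' \
-H 'Connection: keep-alive' \
--data-binary '{"year": 2015, "make": "Test", "model": "Foo", "color": "Red", "type": "Sedan", "mileage": 2500}' \
--compressed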

The generated command is a bit verbose. Compare this command to the cURL command, earlier.

Wget

Similar to cURL, GNU Wget provides the ability to call the Virtual-Vehicles API’s endpoints. According to their website, ‘GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive command line tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc.’ Again, like cURL, we can run Wget commands from the command line or incorporate them into scripted testing processes. The Wget website contains excellent documentation.

Using Wget, we could make the same HTTP PUT request to the Vehicle microservice /vehicles/{oid}.{format} endpoint. Like cURL, we have the ability to add the JWT-based Authorization header and the raw request body, containing the modified vehicle object.
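
A sketch of that Wget command might look like the following; it assumes a version of Wget recent enough to support the --method and --body-data options, and the hostname, port, ObjectId, JWT, and vehicle attributes are again placeholders:

# update an existing vehicle (HTTP PUT) with Wget, writing the response to stdout
wget -qO- \
--method=PUT \
--header="Content-Type: application/json; charset=UTF-8" \
--header="Authorization: Bearer ${jwt}" \
--body-data='{"year": 2015, "make": "Test", "model": "Foo", "color": "Red", "type": "Sedan", "mileage": 2500}' \
"http://localhost:8581/vehicles/${id}.json"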

The response body contains the expected modified vehicle object in JSON-format, along with a 201 Created response status.

Wget HTTP PUT Vehicle

cURL Bash Testing

We can combine cURL and Wget with several of the tools bash provides to develop fairly complex integration tests. The bash-based script below only scratches the surface of a complete set of integration tests. However, the tests demonstrate an effective multi-stage approach to handling the complex nature of RESTful service request requirements. The tests build upon each other.

After setting up some variables and doing a quick health check on one service, the tests register a new API client by calling the Authentication service. Next, they use the new client’s API key to obtain a JWT. The tests then use the JWT to authenticate themselves and create a new vehicle. Finally, they use the new vehicle’s id and the JWT to verify the existence of the new vehicle.

Although some may consider using bash for testing somewhat primitive, the script demonstrates the effectiveness of curl, grep, sed, and awk, along with regular expressions, for testing our RESTful services. Note how we grep certain values from the response, such as the new client’s API key, and then use that value as a parameter in a subsequent test request, such as the request to obtain a JWT.
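
An abridged sketch of that chaining pattern is shown below. It condenses the register-a-client and request-a-JWT steps from the full test script, and assumes the hostname, application, and secret variables have already been set:

# register a new API client and capture its apiKey from the JSON response
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "http://${hostname}:8587/clients" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}')

# use the captured apiKey and the shared secret to request a JWT
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}" \
| grep -o '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}')
echo jwt: ${jwt}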

Since these tests are just a bash script, they can be run from the command line, or easily called from a continuous integration tool, such as Jenkins CI or Hudson.

Running Integration Tests

Postman

Postman, like several similar tools, is an application designed specifically for testing API endpoints. The Postman website describes Postman as a tool that allows you to ‘build, test, and document your APIs faster.’ There are two versions of Postman in the Chrome Web Store. They are Postman – REST Client, the in-browser extension, which we mentioned above, and Postman, the standalone application. There is also Postman Interceptor, which helps you send requests that use browser cookies through the Postman application.

Postman and similar applications have add-ons and extensions to extend their features. In particular, Postman, which is free, offers the paid Jetpacks extension. Jetpacks add the ability to ‘write and run tests inside Postman, extract data from responses, chain requests together and test requests with thousands of variations’. Jetpacks allow you to move beyond basic one-off API request-based testing, to automated regression and performance testing.

Using Postman
Let’s use the same HTTP PUT example we used with cURL and Wget, and see how we would perform the same task with Postman. In the first screen grab below, you can see all elements of the HTTP request, including the RESTful API’s URL, URI including the vehicle’s ObjectId (/vehicles/{ObjectId}.{format}), HTTP method (PUT), Authorization Header with JWT (Bearer), and the raw request body. The raw request body contains a JSON representation of the vehicle we want to update. Note how Postman saves the request in history so we can easily replay it later.

Postman HTTP PUT of Vehicle

In the next screen-grab, we see the response to the HTTP PUT request. Note the response body, response status, timing, and response headers.

Postman HTTP POST of Vehicle Response

Looking at the response body in Postman, you can easily see how RestExpress demonstrates the RESTful principle we discussed in Part Two of the series, HATEOAS (Hypermedia as the Engine of Application State). Note the links to this vehicle itself (the ‘self’ href) and to the entire vehicles collection (the ‘up’ href).

Postman Collections
A great feature of Postman with Jetpacks is Collections. Collections are sets of requests, which can be saved, recalled, and shared. The Collection Runner runs requests in a collection, in the order in which you set them. Ordered collections are ideal for the Virtual-Vehicles API. The screen grab below shows a collection of requests, arranged in the order we would execute them to test the Virtual-Vehicles API, as it applies specifically to vehicle CRUD operations:

  1. Execute HTTP POST request to register the new API client, passing the application name and a shared secret in the request
    Receive the new client’s API key in response
  2. Execute HTTP GET request for a new JWT, passing the new client’s API key and the shared secret in the request
    Receive the new JWT in response
  3. Execute HTTP POST request to create a new vehicle, passing the JWT in the header for authentication (used for all following requests)
    Receive the new vehicle object in response
  4. Execute HTTP PUT request to modify the new vehicle, using the vehicle’s ObjectId
    Receive the modified vehicle object in response
  5. Execute HTTP GET to request the modified vehicle, to confirm it exists in the expected state
    Receive the vehicle object in response
  6. Execute HTTP DELETE request to delete the new vehicle, using the vehicle’s ObjectId
  7. Execute HTTP GET to request the new vehicle and to confirm it has been removed
    Receive a 404 Not Found status response, as expected

Postman Ordered Series of REST Calls

Using saved collections for testing the Virtual-Vehicles API is a real time-saver. However, the collections cannot easily be re-run without hand-editing or some advanced scripting. In the simple example above, we hard-coded a JWT and vehicle ObjectId in the requests. Unfortunately, the JWT has an expiration of only 10 hours by default. More immediately, the ObjectId is unique. The earlier collection test run created, then deleted, the vehicle with that ObjectId.

Negative Testing
You may also perform negative testing with Postman. For example, do you receive the expected response when you don’t include the Authorization header with the JWT in a request (401 Unauthorized status)? When you include a JWT that has expired (401 Unauthorized status)? When you request a vehicle whose ObjectId is incorrect or is not found in the database (400 Bad Request status)? Do you receive the expected response when you call an actual service, but an endpoint that doesn’t exist (405 Method Not Allowed)?

Negative Testing in Postman
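
These negative checks can also be scripted. A minimal sketch of the first case, using curl to capture only the HTTP status code, might look like this (the hostname, port, and vehicle id are placeholders):

# expect 401 Unauthorized when the Authorization header is omitted
echo "TEST: GET request without a JWT should return 401 Unauthorized"
url="http://localhost:8581/vehicles/${id}.json"
status=$(curl -s -o /dev/null -w '%{http_code}' --url "${url}")
[ "${status}" -eq 401 ] && echo "RESULT: pass" || echo "RESULT: fail"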

Postman Test Automation

In addition to manually viewing the HTTP response, to verify the results of a request, Postman allows you to write and run automated tests for each request. According to their website, a ‘Postman test is essentially JavaScript code which sets values for the special tests object. You can set a descriptive key for an element in the object and then say if it’s true or false’. This allows you to write a set of response validation tests for each request.

Below is a quick example of testing the same HTTP POST request, used to create the new API client, above. In this example, we:

  1. Test that the Content-Type response header is present
  2. Test that the HTTP POST successfully returned a 201 status code
  3. Test that the new client’s API key was returned in the response body
  4. Test that the response time was less than 200ms

Postman Test Editor Example

Reviewing Postman’s ‘Tests’ tab, above, observe that the four tests have run successfully. Using Postman’s testing feature, you can create even more advanced tests, eliminating the need to manually validate responses.

This post demonstrates a small subset of the features Postman and other similar applications provide for testing RESTful APIs. The tools and processes you use to test your RESTful API will depend on the stage of development and testing you are in, as well as the technology stacks with which you build and host your services.
