Posts Tagged microservice
Developing Cloud-Native Data-Centric Spring Boot Applications for Pivotal Cloud Foundry
Posted by Gary A. Stafford in AWS, Cloud, Continuous Delivery, DevOps, Java Development, PCF, Software Development on March 23, 2018
In this post, we will explore the development of a cloud-native, data-centric Spring Boot 2.0 application, and its deployment to Pivotal Software’s hosted Pivotal Cloud Foundry service, Pivotal Web Services. We will add a few additional features, such as Spring Data, Lombok, and Swagger, to enhance our application.
According to Pivotal, Spring Boot makes it easy to create stand-alone, production-grade Spring-based Applications. Spring Boot takes an opinionated view of the Spring platform and third-party libraries. Spring Boot 2.0 just went GA on March 1, 2018. This is the first major revision of Spring Boot since 1.0 was released almost 4 years ago. It is also the first GA version of Spring Boot that provides support for Spring Framework 5.0.
Pivotal Web Services’ tagline is ‘The Agile Platform for the Agile Team Powered by Cloud Foundry’. According to Pivotal, Pivotal Web Services (PWS) is a hosted environment of Pivotal Cloud Foundry (PCF). PWS is hosted on AWS in the US-East region and utilizes two availability zones for redundancy. PWS provides developers a Spring-centric PaaS alternative to AWS Elastic Beanstalk, Elastic Container Service (Amazon ECS), and OpsWorks. With PWS, you get the reliability and security of AWS, combined with the rich functionality and ease-of-use of PCF.
To demonstrate the feature-rich capabilities of the Spring ecosystem, the Spring Boot application shown in this post incorporates the following complementary technologies:
- Spring Boot Actuator: Sub-project of Spring Boot, adds several production-grade services to Spring Boot applications with little developer effort
- Spring Data JPA: Sub-project of Spring Data, easily implement JPA based repositories and data access layers
- Spring Data REST: Sub-project of Spring Data, easily build hypermedia-driven REST web services on top of Spring Data repositories
- Spring HATEOAS: Create REST representations that follow the HATEOAS principle from Spring-based applications
- Springfox Swagger 2: We are using the Springfox implementation of the Swagger 2 specification, which provides automated JSON API documentation for APIs built with Spring
- Lombok: The @Data annotation generates the boilerplate code that is typically associated with simple POJOs (Plain Old Java Objects) and beans: @ToString, @EqualsAndHashCode, @Getter, @Setter, and @RequiredArgsConstructor (see the sketch below)
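Below is a minimal sketch of what Lombok eliminates; the class and fields are hypothetical, not taken from the demo project.

import lombok.Data;

// Hypothetical POJO for illustration only; not the demo project's code.
// @Data generates the getters, setters, toString(), equals()/hashCode(),
// and a constructor for the final (required) field, at compile time.
@Data
public class Person {
    private final String name; // final fields become constructor arguments
    private String email;
}

// Usage of the generated members:
// Person person = new Person("Jane Doe");
// person.setEmail("jane@example.com");
// System.out.println(person); // toString() is generated by Lombok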
Source Code
All source code for this post can be found on GitHub. To get started quickly, use one of the two following commands (gist).
# clone the official v2.1.1 release for this post
git clone --depth 1 --branch v2.1.1 \
https://github.com/garystafford/spring-postgresql-demo.git \
&& cd spring-postgresql-demo \
&& git checkout -b v2.1.1

# clone the latest version of code (newer than article)
git clone --depth 1 --branch master \
https://github.com/garystafford/spring-postgresql-demo.git \
&& cd spring-postgresql-demo
For this post, I have used JetBrains IntelliJ IDEA and Git Bash on Windows for development. However, all code should be compatible with most popular IDEs and development platforms. The project assumes you have Docker and the Cloud Foundry Command Line Interface (cf CLI) installed locally.
Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.
Demo Application
The Spring Boot application demonstrated in this post is a simple election-themed RESTful API. The app allows API consumers to create, read, update, and delete candidates, elections, and votes via its exposed RESTful HTTP-based resources.
The Spring Boot application consists of (7) JPA Entities that mirror the tables and views in the database, (7) corresponding Spring Data Repositories, (2) Spring REST Controllers, (4) Liquibase change sets, and associated Spring, Liquibase, Swagger, and PCF configuration files. For brevity, I have intentionally chosen to avoid the complexities of using Data Transfer Objects (DTOs), albeit a potential security concern, and instead directly expose the entities as resources.
Controller Resources
This application is a simple CRUD application. The application contains a few simple HTTP GET resources in each of the two controller classes, as an introduction to Spring REST controllers. For example, the CandidateController contains the /candidates/summary and /candidates/summary/{election} resources (shown below in Postman). Typically, you would expose your data to the end-user as controller resources, as opposed to exposing the entities directly. The ease of defining controller resources is one of the many powers of Spring Boot.
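A minimal sketch of such a controller resource is shown below; the class name, handler, and hard-coded payload are illustrative assumptions, not the project's actual code.

import java.util.Arrays;
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative sketch only; the real controller queries the database
// through a Spring Data repository rather than returning a fixed list.
@RestController
public class CandidateSummaryController {

    // GET /candidates/summary
    @GetMapping("/candidates/summary")
    public List<String> candidateSummary() {
        return Arrays.asList("Candidate A", "Candidate B");
    }
}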
Paging and Sorting
As an introduction to Spring Data’s paging and sorting features, both the VoteRepository and VotesElectionsViewRepository Repository Interfaces extend Spring Data’s PagingAndSortingRepository<T,ID> interface, instead of the default CrudRepository<T,ID> interface. With paging and sorting enabled, you may both sort and limit the amount of data returned in the response payload. For example, to reduce the size of your response payload, you might choose to page through the votes in blocks of 25 votes at a time. In that case, as shown below in Postman, if you needed to return just votes 26-50, you would append the /votes resource with ?page=1&size=25. Since paging starts on page 0 (zero), votes 26-50 will be on page 1.
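A sketch of the repository declaration and a paged query is shown below; the Vote entity and Long key type are assumptions for illustration.

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.repository.PagingAndSortingRepository;

// Extending PagingAndSortingRepository, rather than CrudRepository,
// adds findAll(Pageable) and findAll(Sort) to the repository.
public interface VoteRepository extends PagingAndSortingRepository<Vote, Long> {
}

// Example usage: page 1 with 25 votes per page returns votes 26-50,
// matching the ?page=1&size=25 query string described above.
// Page<Vote> votes = voteRepository.findAll(PageRequest.of(1, 25));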
Swagger
This project also includes the Springfox implementation of the Swagger 2 specification. Along with the Swagger 2 dependency, the project takes a dependency on io.springfox:springfox-swagger-ui. The Springfox Swagger UI dependency allows us to view and interactively test our controller resources through Swagger’s browser-based UI, as shown below.

All Swagger configuration can be found in the project’s SwaggerConfig Spring Configuration class.
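The Springfox configuration follows a standard pattern, roughly as sketched below; the base package name is a hypothetical placeholder, and the project's actual SwaggerConfig class may differ.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

// Minimal Springfox Swagger 2 configuration sketch; the base package
// below is hypothetical, not necessarily the project's actual package.
@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket api() {
        // Scan the application's controllers and document every path
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.example.election"))
                .paths(PathSelectors.any())
                .build();
    }
}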
Gradle
This post’s Spring Boot application is built with Gradle, although it could easily be converted to Maven if desired. According to Gradle, Gradle is the modern tool used to build, automate and deliver everything from mobile apps to microservices.
Data
In real life, most applications interact with one or more data sources. The Spring Boot application demonstrated in this post interacts with data from a PostgreSQL database. PostgreSQL, or simply Postgres, is a powerful, open-source object-relational database system, which has supported production-grade applications for 15 years. The application’s single elections database consists of (6) tables, (3) views, and (2) functions, both of which are used to generate random votes for this demonstration.
Spring Data makes interacting with PostgreSQL easy. In addition to the features of Spring Data, we will use Liquibase. Liquibase is known as the source control for your database. With Liquibase, your database development lifecycle can mirror your Spring development lifecycle. Both DDL (Data Definition Language) and DML (Data Manipulation Language) changes are version controlled, alongside the Spring Boot application source code.
Locally, we will take advantage of Docker to host our development PostgreSQL database, using the official PostgreSQL Docker image. With Docker, there is no messy database installation and configuration of your local development environment. Recreating and deleting your PostgreSQL database is simple.
To support the data-tier in our hosted PWS environment, we will implement ElephantSQL, an offering from the Pivotal Services Marketplace. ElephantSQL is a hosted version of PostgreSQL, running on AWS. ElephantSQL is referred to as PostgreSQL as a Service, or more generally, a Database as a Service or DBaaS. As a Pivotal Marketplace service, we will get easy integration with our PWS-hosted Spring Boot application, with near-zero configuration.
Docker
First, set up your local PostgreSQL database using Docker and the official PostgreSQL Docker image. Since this is only a local database instance, we will not worry about securing our database credentials (gist).
# create container
docker run --name postgres \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=postgres1234 \
-e POSTGRES_DB=elections \
-p 5432:5432 \
-d postgres

# view container
docker container ls

# tail container logs
docker logs postgres --follow
Your running PostgreSQL container should resemble the output shown below.
Data Source
Most IDEs allow you to create and save data sources. Although this is not a requirement, it makes it easier to view the database’s resources and table data. Below, I have created a data source in IntelliJ from the running PostgreSQL container instance. The port, username, password, and database name were all taken from the above Docker command.
Liquibase
There are multiple strategies when it comes to managing changes to your database. With Liquibase, each set of changes is handled as a change set. Liquibase describes a change set as an atomic change that you want to apply to your database. Liquibase offers multiple formats for change set files, including XML, JSON, YAML, and SQL. For this post, I have chosen SQL, specifically the PostgreSQL SQL dialect, which can be designated in the IntelliJ IDE. Below is an example of the first change set, which creates four tables and two indexes.
As shown below, change sets are stored in the db/changelog/changes sub-directory, as configured in the master change log file (db.changelog-master.yaml). Change set files follow an incremental naming convention.
The empty PostgreSQL database, before any Liquibase changes, should resemble the screengrab shown below.
To automatically run Liquibase database migrations on startup, the org.liquibase:liquibase-core dependency must be added to the project’s build.gradle file. To apply the change sets to your local, empty PostgreSQL database, simply start the service locally with the gradle bootRun command. As the app starts after being compiled, any new Liquibase change sets will be applied.
You might ask how Liquibase knows which change sets are new. During the initial startup of the Spring Boot application, in addition to any initial change sets, Liquibase creates two database tables to track changes, the databasechangelog and databasechangeloglock tables. Shown below are the two tables, along with the results of the four change sets included in the project, and applied by Liquibase to the local PostgreSQL elections database.
Below we see the contents of the databasechangelog table, indicating that all four change sets were successfully applied to the database. Liquibase checks this table before applying change sets.
ElephantSQL
Before we can deploy our Spring Boot application to PWS, we need an accessible PostgreSQL instance in the Cloud; I have chosen ElephantSQL. Available through the Pivotal Services Marketplace, ElephantSQL currently offers one free and three paid service plans for their PostgreSQL as a Service. I purchased the Panda service plan as opposed to the free Turtle service plan. I found the free service plan was too limited in the maximum number of database connections for multiple service instances.
Previewing and purchasing an ElephantSQL service plan from the Pivotal Services Marketplace, assuming you have an existing PWS account, literally takes a single command (gist).
# view elephantsql service plans
cf marketplace -s elephantsql

# purchase elephantsql service plan
cf create-service elephantsql panda elections

# display details of running service
cf service elections
The output of the command should resemble the screengrab below. Note the total concurrent connections and total storage for each plan.
To get details about the running ElephantSQL service, use the cf service elections command.
From the ElephantSQL Console, we can obtain the connection information required to access our PostgreSQL elections database. You will need the default database name, username, password, and URL.
Service Binding
Once you have created the PostgreSQL database service, you need to bind the database service to the application. We will bind our application and the database using the PCF deployment manifest file (manifest.yml), found in the project’s root directory. Binding is done using the services section (shown below).
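For context, Cloud Foundry surfaces a bound service’s credentials to the application through the VCAP_SERVICES environment variable, as a JSON document. The sketch below only shows where those credentials surface; in practice, Spring Boot and the Java buildpack resolve the bound ElephantSQL data source with near-zero code.

// Minimal sketch: print the bound service credentials JSON that Cloud
// Foundry injects into the container. A real app would not need this;
// Spring Boot configures the data source from the binding automatically.
public class VcapServicesInspector {
    public static void main(String[] args) {
        String vcapServices = System.getenv("VCAP_SERVICES");
        System.out.println(vcapServices == null
                ? "Not running on Cloud Foundry"
                : vcapServices); // JSON including the elephantsql credentials
    }
}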
The key/value pairs in the env section of the deployment manifest will become environment variables, local to the deployed Spring Boot service. These key/value pairs in the manifest will also override any configuration set in Spring’s external application properties file (application.yml), located in the resources sub-directory. Note the SPRING_PROFILES_ACTIVE: test environment variable in the manifest.yml file. This variable designates which Spring Profile will be active, from the multiple profiles defined in the application.yml file.
Deployment to PWS
Next, we run gradle build, followed by cf push, to deploy one instance of the Spring Boot service to PWS and associate it with our ElephantSQL database instance. Below is the expected output from the cf push command.
Note the route highlighted below. This is the URL where your Spring Boot service will be available.
To confirm your ElephantSQL database was populated by Liquibase when PWS started the deployed Spring application instance, we can check the ElephantSQL Console’s Stats tab. Note the database tables and rows in each table, signifying Liquibase ran successfully. Alternatively, you could create another data source in your IDE, connected to ElephantSQL; this can be helpful for troubleshooting.
To access the running service and check that data is being returned, point your browser (or Postman) to the URL returned from the cf push command output (see second screengrab above) and hit the /candidates resource. Obviously, your URL, referred to as a route by PWS, will be different and unique. In the response payload, you should observe a JSON array of eight candidate objects. Each candidate was inserted into the Candidate table of the database by Liquibase, when Liquibase executed the second of the four change sets on start-up.
With Spring Boot Actuator and Spring Data REST, our simple Spring Boot application has dozens of resources exposed automatically, without extensive coding of resource controllers. Actuator exposes resources to help manage and troubleshoot the application, such as info, health, mappings (shown below), metrics, env, and configprops, among others. All Actuator resources are exposed explicitly, thus they can be disabled for Production deployments. With Spring Boot 2.0, all Actuator resources are now prefixed with /actuator/.
According to Pivotal, Spring Data REST builds on top of Spring Data repositories, analyzes an application’s domain model, and exposes hypermedia-driven HTTP resources for aggregates contained in the model, such as our /candidates resource. A partial list of the application’s exposed resources is provided in the GitHub project’s README file.
In Spring’s approach to building RESTful web services, HTTP requests are handled by a controller. Spring Data REST automatically exposes CRUD resources for our entities. With Spring Data JPA, POJOs like our Candidate class are annotated with @Entity, indicating that it is a JPA entity. Lacking a @Table annotation, it is assumed that this entity will be mapped to a table named Candidate.
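The pattern looks roughly like the sketch below; the fields are assumptions for illustration, not the project’s actual Candidate class.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import lombok.Data;

// Illustrative sketch of the entity pattern described above; the field
// names are assumptions, not the demo project's actual Candidate class.
@Data
@Entity // no @Table annotation, so the table name derives from the class name
public class Candidate {

    @Id
    @GeneratedValue
    private Long id;

    private String firstName;
    private String lastName;
}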
With Spring Data REST’s RESTful HTTP-based API, traditional database Create, Read, Update, and Delete commands for each PostgreSQL database table are automatically mapped to equivalent HTTP methods, including POST, GET, PUT, PATCH, and DELETE.
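That mapping comes from the repository itself, roughly as sketched below; the explicit path attribute is an illustrative assumption, since Spring Data REST also derives a default path from the entity name.

import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

// Illustrative sketch: Spring Data REST exports this repository as a
// /candidates collection resource supporting POST, GET, PUT, PATCH,
// and DELETE, with no controller code required. The Candidate entity
// is the hypothetical class sketched earlier.
@RepositoryRestResource(path = "candidates")
public interface CandidateRepository extends CrudRepository<Candidate, Long> {
}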
Below is an example, using Postman, to create a new Candidate using an HTTP POST method.
Below is an example, using Postman, to update an existing Candidate using an HTTP PUT method.
With Spring Data REST, we can even retrieve data from read-only database Views, as shown below. This particular JSON response payload was returned from the candidates_by_elections database View, using the /election-candidates resource.
Scaling Up
Once your application is deployed and you have tested its functionality, you can easily scale out or scale in the number of instances, memory, and disk, with the cf scale command (gist).
# scale up to 2 instances
cf scale cf-spring -i 2

# review status of both instances
cf app pcf-postgresql-demo
Below is sample output from scaling up the Spring Boot application to two instances.
Optionally, you can activate auto-scaling, which will allow the application to scale based on load.
Following the PCF architectural model, auto-scaling is actually another service from the Pivotal Services Marketplace, PCF App Autoscaler, as seen below, running alongside our ElephantSQL service.
With PCF App Autoscaler, you set auto-scaling minimum and maximum instance limits and define scaling rules. Below, I have configured auto-scaling to scale out the number of application instances when the average CPU Utilization of all instances hits 80%. Conversely, the application will scale in when the average CPU Utilization recedes below 40%. In addition to CPU Utilization, PCF App Autoscaler also allows you to set scaling rules based on HTTP Throughput, HTTP Latency, RabbitMQ Depth (queue depth), and Memory Utilization.
Furthermore, I set the auto-scaling minimum number of instances to two and the maximum number of instances to four. No matter how much load is placed on the application, PWS will not scale above four instances. Conversely, PWS will maintain a minimum of two running instances at all times.
Conclusion
This brief post demonstrates both the power and simplicity of Spring Boot to quickly develop highly-functional, data-centric RESTful API applications with minimal coding. Further, when coupled with Pivotal Cloud Foundry, Spring developers have a highly scalable, resilient cloud-native application hosting platform.
Deploying and Configuring Istio on Google Kubernetes Engine (GKE)
Posted by Gary A. Stafford in Cloud, DevOps, Enterprise Software Development, GCP, Java Development, Software Development on December 22, 2017
Introduction
Unquestionably, Kubernetes has quickly become the leading Container-as-a-Service (CaaS) platform. In late September 2017, Rancher Labs announced the release of Rancher 2.0, based on Kubernetes. In mid-October, at DockerCon Europe 2017, Docker announced they were integrating Kubernetes into the Docker platform. In late October, Microsoft released the public preview of Managed Kubernetes for Azure Container Service (AKS). In November, Google officially renamed its Google Container Engine to Google Kubernetes Engine. Most recently, at AWS re:Invent 2017, Amazon announced its own managed version of Kubernetes, Amazon Elastic Container Service for Kubernetes (Amazon EKS).
The recent abundance of Kubernetes-based CaaS offerings makes deploying, scaling, and managing modern distributed applications increasingly easy. However, as Craig McLuckie, CEO of Heptio, recently stated, “…it doesn’t matter who is delivering Kubernetes, what matters is how it runs.” Making Kubernetes run better is the goal of a new generation of tools, such as Istio, Envoy, Project Calico, Helm, and Ambassador.
What is Istio?
One of those new tools and the subject of this post is Istio. Released in Alpha by Google, IBM, and Lyft in May 2017, Istio is an open platform to connect, manage, and secure microservices. Istio describes itself as, “…an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, configured and managed using Istio’s control plane functionality.”
Istio contains several components, split between the data plane and a control plane. The data plane includes the Istio Proxy (an extended version of Envoy proxy). The control plane includes the Istio Mixer, Istio Pilot, and Istio-Auth. The Istio components work together to provide behavioral insights and operational control over a microservice-based service mesh. Istio describes a service mesh as a “transparently injected layer of infrastructure between a service and the network that gives operators the controls they need while freeing developers from having to bake solutions to distributed system problems into their code.”
In this post, we will deploy the latest version of Istio, v0.4.0, on Google Cloud Platform, using the latest version of Google Kubernetes Engine (GKE), 1.8.4-gke.1. Both versions were just released in mid-December, as this post is being written. Google, as you probably know, was the creator of Kubernetes, now an open-source CNCF project. Google was the first Cloud Service Provider (CSP) to offer managed Kubernetes in the Cloud, starting in 2014, with Google Container Engine (GKE), which used Kubernetes. This post will outline the installation of Istio on GKE, as well as the deployment of a sample application, integrated with Istio, to demonstrate Istio’s observability features.
Getting Started
All code from this post is available on GitHub. You will need to change some variables within the code, to meet your own project’s needs (gist).
git clone \
--branch master --single-branch --depth 1 --no-tags \
https://github.com/garystafford/gke-istio-atlas-rabbit-demo.git
The scripts used in this post are as follows, in order of execution (gist).
# gke
sh ./kubernetes/voter-api-atlas/create-gke-cluster.sh

# istio
sh ./kubernetes/voter-api-atlas/install-istio.sh

# voter api
sh ./kubernetes/voter-api-atlas/create-voter-api_part1.sh
sh ./kubernetes/voter-api-atlas/create-voter-api_part2.sh
sh ./kubernetes/voter-api-atlas/create-voter-api_part3.sh

# sample document and message generation
sh ./sample_docs_scripts/sample_data_run_all.sh
Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.
Creating GKE Cluster
First, we create the Google Kubernetes Engine (GKE) cluster. The GKE cluster creation is highly-configurable from either the GCP Cloud Console or from the command line, using the Google Cloud Platform gcloud CLI. The CLI will be used throughout the post. I have chosen to create a highly-available, 3-node cluster (1 node/zone) in GCP’s South Carolina us-east1 region (gist).
#!/bin/bash

# create gke cluster
gcloud beta container \
clusters create "voter-api-istio-demo" \
--project "voter-api-kub-demo" \
--enable-kubernetes-alpha \
--cluster-version "1.8.4-gke.1" \
--username="admin" \
--zone "us-east1-b" \
--node-locations "us-east1-b","us-east1-c","us-east1-d" \
--machine-type "n1-standard-1" \
--num-nodes "1" \
--labels environment=development \
--enable-cloud-logging \
--enable-cloud-monitoring

# retrieve cluster credentials
gcloud container clusters get-credentials voter-api-istio-demo \
--zone us-east1-b --project voter-api-kub-demo
Once built, we need to retrieve the cluster’s credentials.
Having chosen to use Kubernetes’ Alpha Clusters feature, the following warning is displayed, warning the Alpha cluster will be deleted in 30 days (gist).
This will create a cluster with all Kubernetes Alpha features enabled.
- This cluster will not be covered by the Kubernetes Engine SLA and should not be used for production workloads.
- You will not be able to upgrade the master or nodes.
- The cluster will be deleted after 30 days.
The resulting GKE cluster will have the following characteristics (gist).
NAME                  LOCATION    MASTER_VERSION                     MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
voter-api-istio-demo  us-east1-b  1.8.4-gke.1 ALPHA (29 days left)   35.227.38.218  n1-standard-1  1.8.4-gke.1   3          RUNNING
Installing Istio
With the GKE cluster created, we can now deploy Istio. There are at least two options for deploying Istio on GCP. You may choose to manually install and configure Istio in a GKE cluster, as I will do in this post, following these instructions. Alternatively, you may choose to use the Istio GKE Deployment Manager. This all-in-one GCP service will create your GKE cluster, and install and configure Istio and the Istio add-ons, including their Book Info sample application.
There were a few reasons I chose not to use the Istio GKE Deployment Manager option. First, until very recently, you could not install the latest versions of Istio with this option (as of 12/21 you can now deploy v0.3.0 and v0.4.0). Second, you currently only have the choice of GKE version 1.7.8-gke.0, and I wanted to test the latest v1.8.4 release with a stable GA version of RBAC. Third, at least three out of four of my initial attempts to use the Istio GKE Deployment Manager failed during provisioning, for unknown reasons. Lastly, you will learn more about GKE, Kubernetes, and Istio by doing it yourself, at least the first time.
Istio Code Changes
Before installing Istio, I had to make several minor code changes to my existing Kubernetes resource files. The requirements are detailed in Istio’s Pod Spec Requirements. These changes are minor, but if missed, cause errors during deployment, which can be hard to identify and resolve.
First, you need to name your Service ports in your Service resource files. More specifically, you need to name your service ports http, as shown in the Candidate microservice’s Service resource file, below (note line 10) (gist).
apiVersion: v1
kind: Service
metadata:
  namespace: voter-api
  labels:
    app: candidate
  name: candidate
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: candidate
Second, an app label is required for Istio. I added an app label to each Deployment and Service resource file, as shown below in the Candidate microservice’s Deployment resource file (note lines 5 and 6) (gist).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: voter-api
  labels:
    app: candidate
  name: candidate
spec:
  replicas: 3
  strategy: {}
  template:
    metadata:
      labels:
        app: candidate
        version: v1
    spec:
      containers:
      - image: garystafford/candidate-service:gke-0.6.139
        name: candidate
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_RABBITMQ_HOST
          valueFrom:
            secretKeyRef:
              name: rabbitmq-connection-string
              key: host
        - name: SPRING_RABBITMQ_VIRTUAL_HOST
          valueFrom:
            secretKeyRef:
              name: rabbitmq-connection-string
              key: virtualHost
        - name: SPRING_RABBITMQ_USERNAME
          valueFrom:
            secretKeyRef:
              name: rabbitmq-connection-string
              key: username
        - name: SPRING_RABBITMQ_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rabbitmq-connection-string
              key: password
        - name: SPRING_DATA_MONGODB_URI
          valueFrom:
            secretKeyRef:
              name: mongodb-atlas-candidate
              key: connection-string
        command: ["/bin/sh"]
        args: ["-c", "java -Dspring.profiles.active=kub-aks -Djava.security.egd=file:/dev/./urandom -jar /candidate/candidate-service.jar"]
        imagePullPolicy: Always
      restartPolicy: Always
status: {}
The next set of code changes were to my existing Ingress resource file. The requirements for an Ingress resource using Istio are explained here. The first change: Istio ignores all annotations other than kubernetes.io/ingress.class: istio (note line 7, below). The next change: if using HTTPS, the secret containing your TLS/SSL certificate and private key must be called istio-ingress-certs; all other names will be ignored (note line 10, below). Related, and critically important, that secret must be deployed to the istio-system namespace, not the application’s namespace. The last change: for my particular prefix-match routing rules, I needed to change the rules from /{service_name} to /{service_name}/.*. The /.* is a special Istio notation that is used to indicate a prefix match (note lines 14, 18, and 22, below) (gist).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: voter-ingress
  namespace: voter-api
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  tls:
  - secretName: istio-ingress-certs
  rules:
  - http:
      paths:
      - path: /candidate/.*
        backend:
          serviceName: candidate
          servicePort: 8080
      - path: /election/.*
        backend:
          serviceName: election
          servicePort: 8080
      - path: /voter/.*
        backend:
          serviceName: voter
          servicePort: 8080
Installing Istio
To install Istio, you first will need to download and uncompress the correct distribution of Istio for your OS. Istio provides instructions for installation on various platforms.
My install-istio.sh script contains a variable, ISTIO_HOME, which should point to the root of your local Istio directory. We will also deploy all the current Istio add-ons, including Prometheus, Grafana, Zipkin, Service Graph, and Zipkin-to-Stackdriver (gist).
#!/bin/bash

# install istio, add-ons, and roles
# https://cloud.google.com/kubernetes-engine/docs/tutorials/istio-on-gke

ISTIO_HOME="/Applications/istio-0.4.0"

# required dashboard access and istio roles
kubectl apply \
-f ./other/kube-system-cluster-admin.yaml \
-f ./other/cluster-admin-binding.yaml

# istio
kubectl apply \
-f $ISTIO_HOME/install/kubernetes/istio-auth.yaml \
-f $ISTIO_HOME/install/kubernetes/istio-initializer.yaml

# add-ons
kubectl apply \
-f $ISTIO_HOME/install/kubernetes/addons/prometheus.yaml \
-f $ISTIO_HOME/install/kubernetes/addons/grafana.yaml \
-f $ISTIO_HOME/install/kubernetes/addons/servicegraph.yaml \
-f $ISTIO_HOME/install/kubernetes/addons/zipkin.yaml \
-f $ISTIO_HOME/install/kubernetes/addons/zipkin-to-stackdriver.yaml
Once installed, from the GCP Cloud Console, an alternative to the native Kubernetes Dashboard, we should see the following Istio resources deployed and running. Below, note the three nodes are distributed across three zones within the GCP us-east1 region, the correct version of GKE is employed, Stackdriver logging and monitoring are enabled, and the Alpha Clusters feature is also enabled.
And here, we see the nodes that comprise the GKE cluster.
Below, note the four components that comprise Istio: istio-ca, istio-ingress, istio-mixer, and istio-pilot. Additionally, note the five components that comprise the Istio add-ons.
Below, observe the Istio Ingress has automatically been assigned a public IP address by GCP, accessible on ports 80 and 443. This IP address is how we will communicate with applications running on our GKE cluster, behind the Istio Ingress Load Balancer. Later, we will see how the Istio Ingress Load Balancer knows how to route incoming traffic to those application endpoints, using the Voter API’s Ingress configuration.
Istio makes ample use of Kubernetes Config Maps and Secrets, to store configuration, and to store certificates for mutual TLS.
Creation of the GKE cluster and deployment of Istio to the cluster are complete. Next, I will demonstrate the deployment of the Voter API to the cluster. This will be used to demonstrate the capabilities of Istio on GKE.
Kubernetes Dashboard
In addition to the GCP Cloud Console, the native Kubernetes Dashboard is also available. To open it, use the kubectl proxy command and connect to the Kubernetes Dashboard at http://127.0.0.1:8001/ui. You should now be able to view and edit all resources from within the Kubernetes Dashboard.
Sample Application
To demonstrate the functionality of Istio and GKE, I will deploy the Voter API. I have used variations of the sample Voter API application in several previous posts, including Architecting Cloud-Optimized Apps with AKS (Azure’s Managed Kubernetes), Azure Service Bus, and Cosmos DB and Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ. I suggest reading these two posts to better understand the Voter API’s design.
For this post, I have reconfigured the Voter API to use MongoDB’s Atlas Database-as-a-Service (DBaaS) as the NoSQL data-source for each microservice. The Voter API is connected to a MongoDB Atlas 3-node M10 instance cluster in GCP’s us-east1 (South Carolina) region. With Atlas, you have the choice of deploying clusters to GCP or AWS.
The Voter API will use CloudAMQP’s RabbitMQ-as-a-Service for its decoupled, eventually consistent, message-based architecture. For this post, the Voter API is configured to use a RabbitMQ cluster in GCP’s us-east1 (South Carolina) region; I chose a minimally-configured free version of RabbitMQ. CloudAMQP allows you to provision much more robust multi-node clusters for Production, on GCP or AWS.
CloudAMQP provides access to their own Management UI, in addition to access to RabbitMQ’s Management UI.
With the Voter API running and taking traffic, we can see each Voter API microservice instance, nine replicas in total, connected to RabbitMQ. They are each publishing and consuming messages off the two queues.
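The consuming side of that publish/consume flow looks roughly like the Spring AMQP sketch below; the queue name and String payload type are illustrative assumptions, not the Voter API’s actual code.

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

// Rough sketch of a microservice consuming messages off a RabbitMQ
// queue; the queue name and payload type here are hypothetical.
@Component
public class VoteMessageListener {

    @RabbitListener(queues = "voter.queue")
    public void receiveMessage(String message) {
        // Apply the message to this microservice's own MongoDB Atlas database
        System.out.println("Received: " + message);
    }
}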
The GKE, MongoDB Atlas, and RabbitMQ clusters are all running in the same GCP Region. Optimizing the Voter API cloud architecture on GCP, within a single Region, greatly reduces network latency, increases API performance, and improves end-to-end application and infrastructure observability and traceability.
Installing the Voter API
For simplicity, I have divided the Voter API deployment into three parts. First, we create the new voter-api Kubernetes Namespace, followed by a series of Voter API Kubernetes Secrets (gist).
#!/bin/bash

# apply voter api resources part 1

# namespace
kubectl apply -f ./other/namespace.yaml

# secrets
kubectl apply \
-f ./secrets/mongodb-atlas-election-secret.yaml \
-f ./secrets/mongodb-atlas-candidate-secret.yaml \
-f ./secrets/mongodb-atlas-voter-secret.yaml \
-f ./secrets/rabbitmq-connection-string-secret.yaml \
-f ./secrets/istio-ingress-certs-secret.yaml
There are a total of five secrets: one secret for each of the three microservices’ MongoDB databases, one secret for the RabbitMQ connection string (shown below), and one secret containing a Let’s Encrypt SSL/TLS certificate chain and private key for the Voter API’s domain, api.voter-demo.com (shown below).
Next, we create the microservice pods, using the Kubernetes Deployment files, create three ClusterIP-type Kubernetes Services, and a Kubernetes Ingress. The Ingress contains the service endpoint configuration, which Istio Ingress will use to correctly route incoming external API traffic (gist).
#!/bin/bash

# apply voter api resources part 2

# pods
kubectl apply \
-f ./services/election-deployment.yaml \
-f ./services/candidate-deployment.yaml \
-f ./services/voter-deployment.yaml

# services
kubectl apply \
-f ./services/election-service.yaml \
-f ./services/candidate-service.yaml \
-f ./services/voter-service.yaml

# ingress
kubectl apply -f ./other/ingress-istio.yaml
Three Kubernetes Pods for each of the three microservices should be created, for a total of nine pods. In the GCP Cloud UI’s Workloads (Kubernetes Deployments), you should see the following three resources. Note each Workload has three pods, each containing one replica of the microservice.
In the GCP Cloud UI’s Discovery and Load Balancing tab, you should observe the following four resources. Note the Voter API Ingress endpoints for the three microservices, which are used by the Istio Proxy, discussed below.
Istio Proxy
Examining the Voter API deployment more closely, you will observe that each of the nine Voter API microservice pods has two containers running within it (gist).
kubectl get pods -n voter-api

NAME                        READY  STATUS   RESTARTS  AGE
candidate-8567b45cd9-728fn  2/2    Running  0         1h
candidate-8567b45cd9-7pq4k  2/2    Running  0         1h
candidate-8567b45cd9-d89fr  2/2    Running  0         1h
election-545759dbf6-4jxjs   2/2    Running  0         1h
election-545759dbf6-4ktgh   2/2    Running  0         1h
election-545759dbf6-k7k2t   2/2    Running  0         1h
voter-7b4599886c-6ccg2      2/2    Running  0         1h
voter-7b4599886c-grtps      2/2    Running  0         1h
voter-7b4599886c-p6fgl      2/2    Running  0         1h
Along with the microservice container, there is an Istio Proxy container, commonly referred to as a sidecar container. Istio Proxy is an extended version of the Envoy proxy, Lyft’s well-known, highly performant edge and service proxy. The proxy sidecar container is injected automatically when the Voter API pods are created. This is possible because we deployed the Istio Initializer (istio-initializer.yaml). The Istio Initializer guarantees that Istio Proxy will be automatically injected into every microservice Pod. This is referred to as automatic sidecar injection. Below we see an example of one of three Candidate pods running the istio-proxy sidecar.
In the example above, all traffic to and from the Candidate microservice now passes through the Istio Proxy sidecar. With Istio Proxy, we gain several enterprise-grade features, including enhanced observability, service discovery and load balancing, credential injection, and connection management.
Manual Sidecar Injection
What if we have application components we do not want automatically managed with Istio Proxy? In that case, manual sidecar injection might be preferable to automatic sidecar injection with the Istio Initializer. For manual sidecar injection, we execute an istioctl kube-inject command for each of the Kubernetes Deployments. The command manually injects the Istio Proxy container configuration into the Deployment resource file, alongside each Voter API microservice container. On Mac and Linux, this command is similar to the following. Proxy injection is discussed in detail here (gist).
kubectl create -f <(istioctl kube-inject -f voter-deployment.yaml)
External Service Egress
Whether you choose automatic or manual sidecar injection of the Istio Proxy, Istio’s egress rules currently only support HTTP and HTTPS requests. The Voter API makes external calls to its backend services using two alternate protocols, MongoDB Wire Protocol (mongodb://) and RabbitMQ AMQP (amqps://). Since we cannot use an Istio egress rule for either protocol, we will use the includeIPRanges option with the istioctl kube-inject command to open egress to the two backend services. This will completely bypass Istio for a specific IP range. You can read more about calling external services directly on Istio’s website.
You will need to modify the includeIPRanges argument within the create-voter-api-part3.sh script, adding your own GKE cluster’s IP ranges to the IP_RANGES variable. The two IP ranges can be found using the following GCP CLI command (gist).
gcloud container clusters describe voter-api-istio-demo \
--zone us-east1-b --project voter-api-kub-demo \
| egrep 'clusterIpv4Cidr|servicesIpv4Cidr'
The create-voter-api-part3.sh script also contains a modified version of the istioctl kube-inject command for each Voter API Deployment. Using the modified command, the original Deployment files are not altered; instead, a temporary copy of the Deployment file is created, into which Istio injects the required modifications. The temporary Deployment file is then used for the deployment, and then immediately deleted (gist).
#!/bin/bash

# apply voter api resources part 3
# manual sidecar injection with istioctl kube-inject
# istio egress of mongodb and amqp protocols

IP_RANGES="10.12.0.0/14,10.15.240.0/20"

# candidate service
istioctl kube-inject --kubeconfig "~/.kube/config" \
-f ./services/candidate-deployment.yaml \
--includeIPRanges=$IP_RANGES > \
candidate-deployment-istio.yaml \
&& kubectl apply -f candidate-deployment-istio.yaml \
&& rm candidate-deployment-istio.yaml

# election service
istioctl kube-inject --kubeconfig "~/.kube/config" \
-f ./services/election-deployment.yaml \
--includeIPRanges=$IP_RANGES > \
election-deployment-istio.yaml \
&& kubectl apply -f election-deployment-istio.yaml \
&& rm election-deployment-istio.yaml

# voter service
istioctl kube-inject --kubeconfig "~/.kube/config" \
-f ./services/voter-deployment.yaml \
--includeIPRanges=$IP_RANGES > \
voter-deployment-istio.yaml \
&& kubectl apply -f voter-deployment-istio.yaml \
&& rm voter-deployment-istio.yaml
Some would argue that not having the actual deployed version of the file checked into source code control is an anti-pattern; in this case, I would disagree. If I needed to redeploy, I would just run the istioctl kube-inject command again. You can always view, edit, and import the deployed YAML file, from the GCP CLI or GKE Management UI.
The amount of Istio configuration injected into each microservice Pod’s Deployment resource file is considerable. The Candidate Deployment file swelled from 68 lines to 276 lines of code! This hints at the power, as well as the complexity of Istio. Shown below is a snippet of the Candidate Deployment YAML, after Istio injection.
Confirming Voter API
Installation of the Voter API is now complete. We can validate the Voter API is working, and that traffic is being routed through Istio, using Postman. Below, we see a list of candidates successfully returned from the Voter microservice, through the Voter API. This means not only is the API running, but that messages have been successfully passed between the services, using RabbitMQ, and saved to the microservices’ corresponding MongoDB databases.
Below, note the server and x-envoy-upstream-service-time response headers. They both confirm the Voter API HTTPS traffic is being managed by Istio.
Observability
Observability is certainly one of the primary advantages of implementing Istio. For anyone like myself, who has spent many long and often frustrating hours installing, configuring, and managing monitoring systems for distributed platforms, Istio’s observability features are most welcome. Istio provides Prometheus, Grafana, Zipkin, Service Graph, and Zipkin-to-Stackdriver add-ons. Combined with the monitoring capabilities of Backend-as-a-Service providers, such as MongoDB Atlas and CloudAMQP RabbitMQ, you get considerable visibility into your application, out-of-the-box.
Prometheus
First, we will look at Prometheus, a leading open-source monitoring solution. The easiest way to access the Prometheus UI, or any of the other add-on UIs, is port-forwarding. For example, with Prometheus, we use the following command (gist).
kubectl -n istio-system port-forward \
$(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') \
9090:9090 &
Alternatively, you could securely expose any of the Istio add-ons through the Istio Ingress, similar to how the Voter API microservice endpoints are exposed.
Prometheus collects time series metrics from both the Istio and Voter API components. Below we see two examples of typical metrics being collected; they include the 201 responses generated by the Candidate microservice, and the outflow of bytes by the Election microservice, over a given period of time.
Grafana
Although Prometheus is an excellent monitoring solution, Grafana, the leading open source software for time series analytics, provides a much easier way to visualize the metrics collected by Prometheus. Conveniently, Istio provides a dynamically-configured Grafana Dashboard, which will automatically display metrics for components deployed to GKE.
Below, note the metrics collected for the Candidate and Election microservice replicas. Out-of-the-box, Grafana displays common HTTP KPIs, such as request rate, success rate, response codes, response time, and response size. Based on the version label included in the Deployment resource files, we can delineate metrics collected by the version of the Voter API microservices, in this case, v1 of the Candidate and Election microservices.
Zipkin
Next, we have Zipkin, a leading distributed tracing system.
Since the Voter API application uses RabbitMQ to decouple communications between services, versus direct HTTP-based IPC, we won’t see any complex multi-segment traces. We will only see traces representing traffic to and from the microservices, which passes through the Istio Ingress.
Service Graph
Similar to Zipkin, Service Graph is not as valuable with the Voter API application as it could be with more complex applications. Below is a Service Graph view of the Voter API showing microservice version and requests/second to each microservice.
Stackdriver
One last tool we have to monitor our GKE cluster is Stackdriver. Stackdriver provides fine-grain monitoring, logging, and diagnostics. If you recall, we enabled Stackdriver logging and monitoring when we first provisioned the GKE cluster. Stackdriver allows us to examine the performance of the GKE cluster’s resources, review logs, and set alerts.
Zipkin-to-Stackdriver
When we installed Istio, we also installed the Zipkin-to-Stackdriver add-on. The Stackdriver Trace Zipkin Collector is a drop-in replacement for the standard Zipkin HTTP collector that writes to Google’s free Stackdriver Trace distributed tracing service. To use Stackdriver for traces originating from Zipkin, there is additional configuration required, which is commented out of the current version of the zipkin-to-stackdriver.yaml file (gist).
spec:
  containers:
  - name: zipkin-to-stackdriver
    image: gcr.io/stackdriver-trace-docker/zipkin-collector
    imagePullPolicy: IfNotPresent
#    env:
#    - name: GOOGLE_APPLICATION_CREDENTIALS
#      value: "/path/to/credentials.json"
#    - name: PROJECT_ID
#      value: "my_project_id"
    ports:
    - name: zipkin
      containerPort: 9411
Instructions to configure the Zipkin-to-Stackdriver feature can be found here. Below is an example of how you might add the necessary configuration, using a Kubernetes ConfigMap to inject the required user credentials JSON file (zipkin-to-stackdriver-creds.json) into the zipkin-to-stackdriver container. The new configuration can be seen on lines 27-44 (gist).
# Revised copy of Istio v0.4.0 file with required env vars -
# GOOGLE_APPLICATION_CREDENTIALS and PROJECT_ID added using a ConfigMap
# *** Need to add credentials json file contents to zipkin-to-stackdriver-creds.yaml ***
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zipkin-to-stackdriver
  namespace: istio-system
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zipkin-to-stackdriver
  template:
    metadata:
      name: zipkin-to-stackdriver
      labels:
        app: zipkin-to-stackdriver
    spec:
      containers:
      - name: zipkin-to-stackdriver
        image: gcr.io/stackdriver-trace-docker/zipkin-collector
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /tmp
          name: zipkin-to-stackdriver-creds
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: "/tmp/zipkin-to-stackdriver-creds.json"
        - name: PROJECT_ID
          value: "voter-api-kub-demo"
        ports:
        - name: zipkin
          containerPort: 9411
      volumes:
      - name: zipkin-to-stackdriver-creds
        configMap:
          name: zipkin-to-stackdriver-creds
          items:
          - key: config
            path: zipkin-to-stackdriver-creds.json
---
apiVersion: v1
kind: Service
metadata:
  name: zipkin-to-stackdriver
  namespace: istio-system # <-- Added - missing...
spec:
  ports:
  - name: zipkin
    port: 9411
  selector:
    app: zipkin-to-stackdriver
---
Conclusion
Istio provides a significant amount of fine-grained management control to Kubernetes. Managed Kubernetes CaaS offerings like GKE, coupled with tools like Istio, will soon make running reliable and secure containerized applications in Production, commonplace.
References
- Introducing Istio: A robust service mesh for microservices
- IBM Code: What is a Service Mesh and how Istio fits in
- Microservices Patterns With Envoy Sidecar Proxy: The series
- OpenShift: Microservices Patterns with Envoy Sidecar Proxy, Part I: Circuit Breaking
- OpenShift: Microservices Patterns with Envoy Proxy, Part II: Timeouts and Retries
- OpenShift: Microservices Patterns With Envoy Proxy, Part III: Distributed Tracing
- Using Kubernetes Configmap with configuration files…
All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.
Scaffold a RESTful API with Yeoman, Node, Restify, and MongoDB
Posted by Gary A. Stafford in Continuous Delivery, Enterprise Software Development, Software Development on June 22, 2016
Using Yeoman, scaffold a basic RESTful CRUD API service, based on Node, Restify, and MongoDB.
Introduction
NOTE: Generator updated on 11-13-2016 to v0.2.1.
Yeoman generators reduce the repetitive coding of boilerplate functionality and ensure consistency between full-stack JavaScript projects. For several recent Node.js projects, I created the generator-node-restify-mongodb Yeoman generator. This Yeoman generator scaffolds a basic RESTful CRUD API service, a Node application, based on Node.js, Restify, and MongoDB.
According to their website, Restify, used most notably by Netflix, borrows heavily from Express. However, while Express is targeted at browser applications, with templating and rendering, Restify is keenly focused on building API services that are maintainable and observable.
Along with Node, Restify, and MongoDB, the Node applications scaffolded by the Node-Restify-MongoDB Generator also implement Bunyan (which includes DTrace), Jasmine (using jasmine-node), Mongoose, and Grunt.
Portions of the scaffolded Node application’s file structure and code are derived from what I consider the best parts of several different projects, including generator-express, generator-restify-mongo, and generator-restify.
Installation
To begin, install Yeoman and the generator-node-restify-mongodb using npm. The generator assumes you have pre-installed Node and MongoDB.
npm install -g yo
npm install -g generator-node-restify-mongodb
Then, generate the new project.
mkdir node-restify-mongodb
cd $_
yo node-restify-mongodb
Yeoman scaffolds the application, creating the directory structure, copying required files, and running ‘npm install’ to load the npm package dependencies.
Using the Generated Application
Next, import the supplied set of sample widget documents into the local development instance of MongoDB from the supplied ‘data/widgets.json’ file.
NODE_ENV=development grunt mongoimport --verbose
Similar to Yeoman’s Express Generator, this application contains configuration for three typical environments: ‘Development’ (default), ‘Test’, and ‘Production’. If you want to import the sample widget documents into your Test or Production instances of MongoDB, first change the ‘NODE_ENV’ environment variable value.
NODE_ENV=production grunt mongoimport --verbose
To start the application in a new terminal window, use the following command.
npm start
The output should be similar to the example, below.
To test the application, using jshint and the jasmine-node module, the sample documents must be imported into MongoDB, and the application must be running (see above). Then, open a separate terminal window, and use the following command.
npm test
The project contains a set of jasmine-node tests, split between the ‘/widgets’ and the ‘/utils’ endpoints. If the application is running correctly, you should see the following output from the tests.
Similarly, the following command displays a code coverage report, using the grunt, mocha, istanbul, and grunt-mocha-istanbul node modules.
grunt coverage
Grunt uses the grunt-mocha-istanbul module to execute the same set of jasmine-node tests as shown above. Based on those tests, the application’s code coverage (statement, line, function, and branch coverage) is displayed.
You may test the running application, directly, by cURLing the ‘/widgets’ endpoints.
curl -X GET -H "Accept: application/json" "http://localhost:3000/widgets"
For more legible output, try prettyjson.
npm install -g prettyjson
curl -X GET -H "Accept: application/json" "http://localhost:3000/widgets" --silent | prettyjson
curl -X GET -H "Accept: application/json" "http://localhost:3000/widgets/SVHXPAWEOD" --silent | prettyjson
The JSON-formatted response body from the HTTP GET requests should look similar to the output, below.
A much better RESTful API testing solution is Postman. Postman provides the ability to individually configure each environment and abstract that environment-specific configuration, such as host and port, from the actual HTTP requests.
Continuous Integration
As part of being published to both the npmjs and Yeoman registries, the generator-node-restify-mongodb generator is continuously integrated on Travis CI. This should provide an additional level of confidence to the generator’s end-users. Currently, Travis CI tests the generator against Node.js v4, v5, and v6, as well as io.js. Older versions of Node.js may have compatibility issues with the application.
Additionally, Travis CI feeds test results to Coveralls, which displays the generator’s code coverage. Note the code coverage, shown below, is reported for the yeoman generator, not the generator’s scaffolded application. The scaffolded application’s coverage is shown above.
Application Details
API Endpoints
The scaffolded application includes the following endpoints.
// widget resources
var PATH = '/widgets';
server.get({path: PATH, version: VERSION}, findDocuments);
server.get({path: PATH + '/:product_id', version: VERSION}, findOneDocument);
server.post({path: PATH, version: VERSION}, createDocument);
server.put({path: PATH, version: VERSION}, updateDocument);
server.del({path: PATH + '/:product_id', version: VERSION}, deleteDocument);

// utility resources
var PATH = '/utils';
server.get({path: PATH + '/ping', version: VERSION}, ping);
server.get({path: PATH + '/health', version: VERSION}, health);
server.get({path: PATH + '/info', version: VERSION}, information);
server.get({path: PATH + '/config', version: VERSION}, configuraton);
server.get({path: PATH + '/env', version: VERSION}, environment);
The Widget
The Widget is the basic document object used throughout the application. It is used, primarily, to demonstrate Mongoose’s Model and Schema. The Widget object contains the following fields, as shown in the sample widget, below.
{ "product_id": "4OZNPBMIDR", "name": "Fapster", "color": "Orange", "size": "Medium", "price": "29.99", "inventory": 5 }
MongoDB
Use the mongo shell to access the application’s MongoDB instance and display the imported sample documents.
mongo
> show dbs
> use node-restify-mongodb-development
> show tables
> db.widgets.find()
The imported sample documents should be displayed, as shown below.
Environmental Variables
The scaffolded application relies on several environment variables to determine its environment-specific runtime configuration. If these environment variables are not present, the application defaults to using the Development environment values, as shown below, in the application’s ‘config/config.js’ file.
var NODE_ENV   = process.env.NODE_ENV   || 'development';
var NODE_HOST  = process.env.NODE_HOST  || '127.0.0.1';
var NODE_PORT  = process.env.NODE_PORT  || 3000;
var MONGO_HOST = process.env.MONGO_HOST || '127.0.0.1';
var MONGO_PORT = process.env.MONGO_PORT || 27017;
var LOG_LEVEL  = process.env.LOG_LEVEL  || 'info';
var APP_NAME   = 'node-restify-mongodb-';
Future Project TODOs
Future project enhancements include the following:
- Add filtering, sorting, field selection and paging
- Add basic HATEOAS-based response features
- Add authentication and authorization to production MongoDB instance
- Convert from out-dated jasmine-node to Jasmine?
Continuous Integration and Delivery of Microservices using Jenkins CI, Docker Machine, and Docker Compose
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development, Java Development on June 27, 2015
Continuously integrate, deploy, and test a RestExpress microservices-based, multi-container, Java EE application in a virtual test environment, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, and VirtualBox.
Introduction
In the last post, we learned how to use Jenkins CI, Maven, and Docker Compose to take a set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated Docker containers. We built the microservices, Docker images, and Docker containers. We deployed the containers directly onto the Jenkins CI Server machine. Finally, we performed integration tests to ensure the services were functioning as expected, within the containers.
In a more mature continuous delivery model, we would have deployed the running containers to a fresh ‘production-like’ environment to be more accurately tested, not the Jenkins CI Server host machine. In this post, we will learn how to use the recently released Docker Machine to create a fresh test environment in which to build and host our project’s ten Docker containers. We will couple Docker Machine with Oracle’s VirtualBox, Jenkins CI, and Docker Compose to automatically build and test the services within their containers, within the virtual ‘test’ environment.
Update: All code for this post is available on GitHub, release version v2.1.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v2.1.0’ command).
Docker Machine
If you recall in the last post, after compiling and packaging the microservices, Jenkins was used to deploy the build artifacts to the Virtual-Vehicles Docker GitHub project, as shown below.
We then used Jenkins, with the Docker CLI and the Docker Compose CLI, to automatically build and test the images and containers. This step will not change; however, first we will use Docker Machine to automatically build a test environment, in which we will build the Docker images and containers.
I’ve copied and modified the second Jenkins job we used in the last post, as shown below. The new job is titled, ‘Virtual-Vehicles_Docker_Machine’. This will replace the previous job, ‘Virtual-Vehicles_Docker_Compose’.
The first step in the new Jenkins job is to clone the Virtual-Vehicles Docker GitHub repository.
Next, Jenkins runs a bash script to automatically build the test VM with Docker Machine, build the Docker images and containers with Docker Compose within the new VM, and finally test the services.
The bash script executed by Jenkins contains the following commands:
# optional: record current versions of docker apps with each build
docker -v && docker-compose -v && docker-machine -v

# set-up: clean up any previous machine failures
docker-machine stop test || echo "nothing to stop" && \
docker-machine rm test   || echo "nothing to remove"

# use docker-machine to create and configure 'test' environment
# add a -D (debug) if having issues
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"

# use docker-compose to pull and build new images and containers
docker-compose -p jenkins up -d

# optional: list machines, images, and containers
docker-machine ls && docker images && docker ps -a

# wait for containers to fully start before tests fire up
sleep 30

# test the services
sh tests.sh $(docker-machine ip test)

# tear down: stop and remove 'test' environment
docker-machine stop test && docker-machine rm test
As the above script shows, first Jenkins uses the Docker Machine CLI to build and activate the ‘test’ virtual machine, using the VirtualBox driver. As of docker-machine version 0.3.0, the VirtualBox driver requires at least VirtualBox 4.3.28 to be installed.
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"
Once this step is complete you will have the following VirtualBox VM created, running, and active.
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
Next, Jenkins uses the Docker Compose CLI to execute the project’s Docker Compose YAML file.
docker-compose -p jenkins up -d
The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.
########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Pulls (5) images, builds (5) images, and builds (11) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p <your_project_name_here> up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
    - "8500:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  links:
    - graphite
    - mongoAuthentication
    - "ambassador:nginx"
  expose:
    - "8587"

valet:
  build: valet/
  links:
    - graphite
    - mongoValet
    - "ambassador:nginx"
  expose:
    - "8585"

maintenance:
  build: maintenance/
  links:
    - graphite
    - mongoMaintenance
    - "ambassador:nginx"
  expose:
    - "8583"

vehicle:
  build: vehicle/
  links:
    - graphite
    - mongoVehicle
    - "ambassador:nginx"
  expose:
    - "8581"

nginx:
  build: nginx/
  ports:
    - "80:80"
  links:
    - "ambassador:vehicle"
    - "ambassador:valet"
    - "ambassador:authentication"
    - "ambassador:maintenance"

ambassador:
  image: cpuguy83/docker-grand-ambassador
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name jenkins_nginx_1 -name jenkins_authentication_1 -name jenkins_maintenance_1 -name jenkins_valet_1 -name jenkins_vehicle_1"
Running the docker-compose.yaml file will pull these (5) Docker Hub images:
REPOSITORY                         TAG        IMAGE ID
==========                         ===        ========
java                               8u45-jdk   1f80eb0f8128
nginx                              latest     319d2015d149
mongo                              latest     66b43e3cae49
hopsoft/graphite-statsd            latest     b03e373279e8
cpuguy83/docker-grand-ambassador   latest     c635b1699f78
And, build these (5) Docker images from Dockerfiles:
REPOSITORY               TAG      IMAGE ID
==========               ===      ========
jenkins_nginx            latest   0b53a9adb296
jenkins_vehicle          latest   d80f79e605f4
jenkins_valet            latest   cbe8bdf909b8
jenkins_maintenance      latest   15b8a94c00f4
jenkins_authentication   latest   ef0345369079
And, build these (11) Docker containers from the corresponding images:
CONTAINER ID   IMAGE                              NAME
============   =====                              ====
17992acc6542   jenkins_nginx                      jenkins_nginx_1
bcbb2a4b1a7d   jenkins_vehicle                    jenkins_vehicle_1
4ac1ac69f230   mongo:latest                       jenkins_mongoVehicle_1
bcc8b9454103   jenkins_valet                      jenkins_valet_1
7c1794ca7b8c   jenkins_maintenance                jenkins_maintenance_1
2d0e117fa5fb   jenkins_authentication             jenkins_authentication_1
d9146a1b1d89   hopsoft/graphite-statsd:latest     jenkins_graphite_1
56b34cee9cf3   cpuguy83/docker-grand-ambassador   jenkins_ambassador_1
a72199d51851   mongo:latest                       jenkins_mongoAuthentication_1
307cb2c01cc4   mongo:latest                       jenkins_mongoMaintenance_1
4e0807431479   mongo:latest                       jenkins_mongoValet_1
Since we are connected to the brand new Docker Machine ‘test’ VM, there are no locally cached Docker images. All images required to build the containers must be pulled from Docker Hub. The build time will be 3-4x as long as the last post’s build, which used the cached Docker images on the Jenkins CI machine.
Integration Testing
As in the last post, once the containers are built and configured, we run a series of expanded integration tests to confirm the containers and services are working. One difference is that, this time, we will pass a parameter to the test bash script:
sh tests.sh $(docker-machine ip test)
The parameter is the hostname used in the test’s RESTful service calls. The parameter, $(docker-machine ip test), is translated to the IP address of the ‘test’ VM, in our example, 192.168.99.100. If a parameter is not provided, the test script’s hostname variable will use the default value of localhost, per ‘hostname=${1-'localhost'}’.
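The ${1-'localhost'} syntax is standard shell default-parameter expansion; a minimal sketch of the behavior:

#!/bin/sh
# use the first positional argument if supplied, otherwise default to localhost
hostname=${1-'localhost'}
echo "running tests against: ${hostname}"

# sh tests.sh                           -> running tests against: localhost
# sh tests.sh $(docker-machine ip test) -> running tests against: 192.168.99.100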
Another change since the last post: the project now uses Nginx, the free, open-source, high-performance HTTP server and reverse proxy, as a pseudo-API gateway. Instead of calling each microservice directly, using their individual ports (e.g. port 8581 for the Vehicle microservice), all traffic is sent through Nginx on the default HTTP port 80, for example:
http://192.168.99.100/vehicles/utils/ping.json
http://192.168.99.100/jwts?apiKey=Z1nXG8JGKwvGlzQgPLwQdndW&secret=ODc4OGNiNjE5ZmI
http://192.168.99.100/vehicles/558f3042e4b0e562c03329ad
Internal traffic between the microservices and MongoDB, and between the microservices and Graphite is still direct, using Docker container linking. Traffic between the microservices and Nginx, in both directions, is handled by an ambassador container, a common pattern. Nginx acts as a reverse proxy for the microservices. Using Nginx brings us closer to a truer production-like experience for testing the services.
#!/bin/sh

########################################################################
#
# title:          Virtual-Vehicles Project Integration Tests
# author:         Gary A. Stafford (https://programmaticponderings.com)
# url:            https://github.com/garystafford/virtual-vehicles-docker
# description:    Performs integration tests on the Virtual-Vehicles
#                 microservices
# to run:         sh tests.sh
# docker-machine: sh tests.sh $(docker-machine ip test)
#
########################################################################

echo --- Integration Tests ---
echo

### VARIABLES ###
hostname=${1-'localhost'} # use input param or default to localhost
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized
make="Test"
model="Foo"
echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}
echo make: ${make}
echo model: ${model}
echo

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
--url "${url}" \
| grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
| sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}
echo

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"year\": 2015,
    \"make\": \"${make}\",
    \"model\": \"${model}\",
    \"color\": \"White\",
    \"type\": \"Sedan\",
    \"mileage\": 250
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}/vehicles?filter=make::${make}|model::${model}&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| tail -1 \
| sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}
echo

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new maintenance record in the response body with an 'id'"
url="http://${hostname}/maintenances"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"vehicleId\": \"${id}\",
    \"serviceDateTime\": \"2015-06-27T15:00:00.400Z\",
    \"mileage\": 1000,
    \"type\": \"Test Maintenance\",
    \"notes\": \"This is a test notes.\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new valet transaction in the response body with an 'id'"
url="http://${hostname}/valets"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"vehicleId\": \"${id}\",
    \"dateTimeIn\": \"2015-06-27T15:00:00.400Z\",
    \"parkingLot\": \"Test Parking Ramp\",
    \"parkingSpot\": 10,
    \"notes\": \"This is a test notes.\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo
Tear Down
In true continuous integration fashion, once the integration tests have completed, we tear down the project by removing the VirtualBox ‘test’ VM. This also removes all images and containers.
docker-machine stop test && \
docker-machine rm test
Jenkins CI Console Output
Below is an abridged sample of what the Jenkins CI console output will look like from a successful ‘build’.
Started by user anonymous
Building in workspace /var/lib/jenkins/jobs/Virtual-Vehicles_Docker_Machine/workspace
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/garystafford/virtual-vehicles-docker.git # timeout=10
Fetching upstream changes from https://github.com/garystafford/virtual-vehicles-docker.git
 > git --version # timeout=10
using GIT_SSH to set credentials
using .gitcredentials to set credentials
 > git config --local credential.helper store --file=/tmp/git7588068314920923143.credentials # timeout=10
 > git -c core.askpass=true fetch --tags --progress https://github.com/garystafford/virtual-vehicles-docker.git +refs/heads/*:refs/remotes/origin/*
 > git config --local --remove-section credential # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision f473249f0f70290b75cb320909af1f57cdaf2aa5 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f473249f0f70290b75cb320909af1f57cdaf2aa5
 > git rev-list f473249f0f70290b75cb320909af1f57cdaf2aa5 # timeout=10
[workspace] $ /bin/sh -xe /tmp/hudson8587699987350884629.sh
+ docker -v
Docker version 1.7.0, build 0baf609
+ docker-compose -v
docker-compose version: 1.3.1
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
+ docker-machine -v
docker-machine version 0.3.0 (0a251fe)
+ docker-machine stop test
+ docker-machine rm test
Successfully removed test
+ docker-machine create --driver virtualbox test
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env test
+ docker-machine env test
+ eval export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/var/lib/jenkins/.docker/machine/machines/test"
export DOCKER_MACHINE_NAME="test"
# Run this command to configure your shell:
# eval "$(docker-machine env test)"
+ export DOCKER_TLS_VERIFY=1
+ export DOCKER_HOST=tcp://192.168.99.100:2376
+ export DOCKER_CERT_PATH=/var/lib/jenkins/.docker/machine/machines/test
+ export DOCKER_MACHINE_NAME=test
+ docker-compose -p jenkins up -d
Pulling mongoValet (mongo:latest)...
latest: Pulling from mongo

...Abridged output...

+ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
+ docker images
REPOSITORY                         TAG        IMAGE ID       CREATED          VIRTUAL SIZE
jenkins_vehicle                    latest     fdd7f9d02ff7   2 seconds ago    837.1 MB
jenkins_valet                      latest     8a592e0fe69a   4 seconds ago    837.1 MB
jenkins_maintenance                latest     5a4a44e136e5   5 seconds ago    837.1 MB
jenkins_authentication             latest     e521e067a701   7 seconds ago    838.7 MB
jenkins_nginx                      latest     085d183df8b4   25 minutes ago   132.8 MB
java                               8u45-jdk   1f80eb0f8128   12 days ago      816.4 MB
nginx                              latest     319d2015d149   12 days ago      132.8 MB
mongo                              latest     66b43e3cae49   12 days ago      260.8 MB
hopsoft/graphite-statsd            latest     b03e373279e8   4 weeks ago      740 MB
cpuguy83/docker-grand-ambassador   latest     c635b1699f78   5 months ago     525.7 MB
+ docker ps -a
CONTAINER ID   IMAGE                              COMMAND                CREATED          STATUS          PORTS                                      NAMES
4ea39fa187bf   jenkins_vehicle                    "java -classpath .:c   2 seconds ago    Up 1 seconds    8581/tcp                                   jenkins_vehicle_1
b248a836546b   mongo:latest                       "/entrypoint.sh mong   3 seconds ago    Up 3 seconds    27017/tcp                                  jenkins_mongoVehicle_1
0c94e6409afc   jenkins_valet                      "java -classpath .:c   4 seconds ago    Up 3 seconds    8585/tcp                                   jenkins_valet_1
657f8432004b   jenkins_maintenance                "java -classpath .:c   5 seconds ago    Up 5 seconds    8583/tcp                                   jenkins_maintenance_1
8ff6de1208e3   jenkins_authentication             "java -classpath .:c   7 seconds ago    Up 6 seconds    8587/tcp                                   jenkins_authentication_1
c799d5f34a1c   hopsoft/graphite-statsd:latest     "/sbin/my_init"        12 minutes ago   Up 12 minutes   2003/tcp, 8125/udp, 0.0.0.0:8500->80/tcp   jenkins_graphite_1
040872881b25   jenkins_nginx                      "nginx -g 'daemon of   25 minutes ago   Up 25 minutes   0.0.0.0:80->80/tcp, 443/tcp                jenkins_nginx_1
c6a2dc726abc   mongo:latest                       "/entrypoint.sh mong   26 minutes ago   Up 26 minutes   27017/tcp                                  jenkins_mongoAuthentication_1
db22a44239f4   mongo:latest                       "/entrypoint.sh mong   26 minutes ago   Up 26 minutes   27017/tcp                                  jenkins_mongoMaintenance_1
d5fd655474ba   cpuguy83/docker-grand-ambassador   "/usr/bin/grand-amba   26 minutes ago   Up 26 minutes                                              jenkins_ambassador_1
2b46bd6f8cfb   mongo:latest                       "/entrypoint.sh mong   31 minutes ago   Up 31 minutes   27017/tcp                                  jenkins_mongoValet_1
+ sleep 30
+ docker-machine ip test
+ sh tests.sh 192.168.99.100

--- Integration Tests ---

hostname: 192.168.99.100
application: Test API Client 1435585062
secret: NGM5OTI5ODAxMTZ
make: Test
model: Foo

TEST: GET request should return 'true' in the response body
http://192.168.99.100/vehicles/utils/ping.json
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
RESULT: pass

TEST: POST request should return a new client in the response body with an 'id'
http://192.168.99.100/clients
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84    847    225 --:--:-- --:--:-- --:--:--   849
RESULT: pass

SETUP: Get the new client's apiKey for next test
http://192.168.99.100/clients
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84  20482   5461 --:--:-- --:--:-- --:--:-- 21000
apiKey: sv1CA9NdhmXh72NrGKBN3Abb

TEST: GET request should return a new jwt in the response body
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0    686      0 --:--:-- --:--:-- --:--:--   687
RESULT: pass

SETUP: Get a new jwt using the new client for the next test
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0  16843      0 --:--:-- --:--:-- --:--:-- 17076
jwt: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhcGkudmlydHVhbC12ZWhpY2xlcy5jb20iLCJhcGlLZXkiOiJzdjFDQTlOZGhtWGg3Mk5yR0tCTjNBYmIiLCJleHAiOjE0MzU2MjEwNjMsImFpdCI6MTQzNTU4NTA2M30.WVlhIhUcTz6bt3iMVr6MWCPIDd6P0aDZHl_iUd6AgrM

TEST: POST request should return a new vehicle in the response body with an 'id'
http://192.168.99.100/vehicles
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   123    0     0  100   123      0    612 --:--:-- --:--:-- --:--:--   611
100   419    0   296  100   123    649    270 --:--:-- --:--:-- --:--:--   649
RESULT: pass

SETUP: Get id from new vehicle for the next test
http://192.168.99.100/vehicles?filter=make::Test|model::Foo&limit=1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   377    0   377    0     0   5564      0 --:--:-- --:--:-- --:--:--  5626
vehicle id: 55914a28e4b04658471dc03a

TEST: GET request should return a vehicle in the response body with the requested 'id'
http://192.168.99.100/vehicles/55914a28e4b04658471dc03a
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   296    0   296    0     0   7051      0 --:--:-- --:--:-- --:--:--  7219
RESULT: pass

TEST: POST request should return a new maintenance record in the response body with an 'id'
http://192.168.99.100/maintenances
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
RESULT: pass

TEST: POST request should return a new valet transaction in the response body with an 'id'
http://192.168.99.100/valets
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   561    0   368  100   193    514    269 --:--:-- --:--:-- --:--:--   514
RESULT: pass

+ docker-machine stop test
+ docker-machine rm test
Successfully removed test
Finished: SUCCESS
Graphite and StatsD
If you’ve chosen to build the Virtual-Vehicles Docker project outside of Jenkins CI, then in addition to running the test script and using applications like Postman to test the Virtual-Vehicles RESTful API, you may also use Graphite and StatsD. RestExpress comes fully configured out of the box with Graphite integration, through the Metrics plugin. The Virtual-Vehicles RESTful API example is configured to use port 8500 to access the Graphite UI, and uses the hopsoft/graphite-statsd Docker image to build the Graphite/StatsD Docker container.
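As a quick check that the Graphite/StatsD container is up, one could request the Graphite web UI through the mapped port. A minimal sketch, assuming the Docker Machine ‘test’ VM from earlier (substitute localhost if the containers are running locally):

# confirm the Graphite web UI is responding on the mapped port 8500
curl -I "http://$(docker-machine ip test):8500"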
The Complete Process
The below diagram shows the entire Virtual-Vehicles continuous integration and delivery process, start to finish, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, RestExpress, and VirtualBox.
Continuous Integration and Delivery of Microservices using Jenkins CI, Maven, and Docker Compose
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development on June 22, 2015
Continuously build, test, package and deploy a microservices-based, multi-container, Java EE application using Jenkins CI, Maven, Docker, and Docker Compose
Previous Posts
In the previous 3-part series, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB, we developed a set of Java EE-based microservices, which formed the Virtual-Vehicles REST API. In Part One of this series, we introduced the concepts of a RESTful API and microservices, using the vehicle-themed Virtual-Vehicles REST API example. In Part Two, we gained a basic understanding of how RestExpress works to build microservices, and discovered how to get the microservices example up and running. Lastly, in Part Three, we explored how to use tools such as Postman, along with the API documentation, to test our microservices.
Introduction
In this post, we will demonstrate how to use Jenkins CI, Maven, and Docker Compose to take our set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated and orchestrated Docker containers. We will build and test the microservices, Docker images, and Docker containers. We will deploy the containers and perform integration tests to ensure the services are functioning as expected, within the containers. The milestones in our process will be:
- Continuous Integration: Using Jenkins CI and Maven, automatically compile, test, and package the individual microservices
- Deployment: Using Jenkins, automatically deploy the build artifacts to the new Virtual-Vehicles Docker project
- Containerization: Using Jenkins and Docker Compose, automatically build the Docker images and containers from the build artifacts and a set of Dockerfiles
- Integration Testing: Using Jenkins, perform automated integration tests on the containerized services
- Tear Down: Using Jenkins, automatically stop and remove the containers and images
For brevity, we will deploy the containers directly to the Jenkins CI Server, where they were built. In an upcoming post, I will demonstrate how to use the recently released Docker Machine to host the containers within an isolated VM.
Note: All code for this post is available on GitHub, release version v1.0.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v1.0.0’ command).
Build the Microservices
In order to host the Virtual-Vehicles microservices, we must first compile the source code and produce build artifacts. In the case of the Virtual-Vehicles example, the build artifacts are a JAR file and at least one environment-specific properties file. In Part Two of our previous series, we compiled and produced JAR files for our microservices from the command line using Maven.
To automatically build our Maven-based microservices project in this post, we will use Jenkins CI and the Jenkins Maven Project Plugin. The Virtual-Vehicles microservices are bundled together into what Maven considers a multi-module project, which is defined by a parent POM referring to one or more sub-modules. Using the concept of project inheritance, Jenkins will compile each of the four microservices from the project’s single parent POM file. Note the four modules at the end of the pom.xml below, corresponding to each microservice.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <name>Virtual-Vehicles API</name>
    <description>Virtual-Vehicles API
        https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Example_3
    </description>
    <url>https://github.com/garystafford/virtual-vehicle-demo</url>
    <groupId>com.example</groupId>
    <artifactId>Virtual-Vehicles-API</artifactId>
    <version>1</version>
    <packaging>pom</packaging>

    <modules>
        <module>Maintenance</module>
        <module>Valet</module>
        <module>Vehicle</module>
        <module>Authentication</module>
    </modules>
</project>
Below is the view of the four individual Maven modules, within the single Jenkins Maven job.
Each microservice module contains a Maven POM file. The POM files use the Apache Maven Compiler Plugin to compile code, and the Apache Maven Shade Plugin to create ‘uber-jars’ from the compiled code. The Shade plugin provides the capability to package the artifact in an uber-jar, including its dependencies. This will allow us to independently host each service in its own container, without external dependencies. Lastly, using the Apache Maven Resources Plugin, Maven will copy the environment properties files from the source directory to the ‘target’ directory, which contains the JAR file. To accomplish these Maven tasks, all Jenkins needs to do is run a series of Maven life-cycle goals: ‘clean install package validate’.
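Outside of Jenkins, roughly the same build can be reproduced from a terminal at the project root. A minimal sketch; the goal sequence mirrors the Jenkins job configuration quoted above:

# build all four microservice modules from the single parent POM
mvn clean install package validate

# each module's uber-jar and copied properties files land in its 'target' directory
ls Vehicle/target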
Once the code is compiled and packaged into uber-jars, Jenkins uses the Artifact Deployer Plugin to deploy the build artifacts from Jenkins’ workspace to a remote location. In our example, we will copy the artifacts to a second GitHub project, from which we will containerize our microservices.
Shown below are the two Jenkins jobs. The first one compiles, packages, and deploys the build artifacts. The second job containerizes the services, databases, and monitoring application.
Shown below are two screen grabs showing how we clone the Virtual-Vehicles GitHub repository and build the project using the main parent pom.xml file. Building the parent POM, in turn, builds all the microservice modules, using their POM files.
Deploy Build Artifacts
Once we have successfully compiled, tested (if we had unit tests with RestExpress), and packaged the build artifacts as uber-jars, we deploy each set of build artifacts to a subfolder within the Virtual-Vehicles Docker GitHub project, using Jenkins’ Artifact Deployer Plugin. Shown below is the deployment configuration for just the Vehicles microservice. This deployment pattern is repeated for each service, within the Jenkins job configuration.
The Jenkins Artifact Deployer Plugin also provides the convenient ability to view and redeploy the artifacts. Below, you see a list of the microservice artifacts deployed to the Docker project by Jenkins.
Build and Compose the Containers
The second Jenkins job clones the Virtual-Vehicles Docker GitHub repository.
The second Jenkins job executes commands from the shell prompt. The first commands use the Docker CLI to remove any existing images and containers, which might have been left over from previous job failures. The next commands use the Docker Compose CLI to execute the project’s Docker Compose YAML file. The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.
# remove all images and containers from this build
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
# set DOCKER_HOST environment variable
export DOCKER_HOST=tcp://localhost:4243

# record installed version of Docker and Maven with each build
mvn --version && \
docker --version && \
docker-compose --version

# use docker-compose to build new images and containers
docker-compose -p jenkins up -d

# list virtual-vehicles related images
docker images | grep 'jenkins' | awk '{print $0}'

# list all containers
docker ps -a | grep 'jenkins\|mongo_\|graphite' | awk '{print $0}'
########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Builds (4) images, pulls (2) images, and builds (9) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p virtualvehicles up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
    - "8481:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  ports:
    - "8587:8587"
  links:
    - graphite
    - mongoAuthentication

valet:
  build: valet/
  ports:
    - "8585:8585"
  links:
    - graphite
    - mongoValet
    - authentication

maintenance:
  build: maintenance/
  ports:
    - "8583:8583"
  links:
    - graphite
    - mongoMaintenance
    - authentication

vehicle:
  build: vehicle/
  ports:
    - "8581:8581"
  links:
    - graphite
    - mongoVehicle
    - authentication
Running the docker-compose.yaml file produces the following images:
REPOSITORY               TAG      IMAGE ID
==========               ===      ========
jenkins_vehicle          latest   a6ea4dfe7cf5
jenkins_valet            latest   162d3102d43c
jenkins_maintenance      latest   0b6f530cc968
jenkins_authentication   latest   45b50487155e
And, these containers:
CONTAINER ID   IMAGE                            NAME
============   =====                            ====
2b4d5a918f1f   jenkins_vehicle                  jenkins_vehicle_1
492fbd88d267   mongo:latest                     jenkins_mongoVehicle_1
01f410bb1133   jenkins_valet                    jenkins_valet_1
6a63a664c335   jenkins_maintenance              jenkins_maintenance_1
00babf484cf7   jenkins_authentication           jenkins_authentication_1
548a31034c1e   hopsoft/graphite-statsd:latest   jenkins_graphite_1
cdc18bbb51b4   mongo:latest                     jenkins_mongoAuthentication_1
6be5c0558e92   mongo:latest                     jenkins_mongoMaintenance_1
8b71d50a4b4d   mongo:latest                     jenkins_mongoValet_1
Integration Testing
Once the containers have been successfully built and configured, we run a series of integration tests to confirm the services are up and running. We refer to these tests as integration tests because they test the interaction of multiple components. Integration tests were covered in the last post, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 3.
Note the short pause I have inserted before running the tests. Docker Compose does an excellent job of accounting for the required start-up order of the containers to avoid race conditions (see my previous post). However, depending on the speed of the host box, there is still a start-up period for the container’s processes to be up, running, and ready to receive traffic. Apache Log4j 2 and MongoDB startup, in particular, take extra time. I’ve seen the containers take as long as 1-2 minutes on a slow box to fully start. Without the pause, the tests fail with various errors, since the container’s processes are not all running.
sleep 15
sh tests.sh -v
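As an alternative to a fixed sleep, one could poll a known endpoint until it responds before firing the tests. A minimal sketch, assuming the Vehicle service’s ping endpoint on port 8581; the retry count and interval below are arbitrary:

# wait up to ~60 seconds for the Vehicle service to respond, then run the tests
for i in $(seq 1 30); do
  curl -s 'http://localhost:8581/vehicles/utils/ping.json' | grep -q true && break
  sleep 2
done
sh tests.sh -v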
The bash-based tests below just scratch the surface as a complete set of integration tests. However, they demonstrate an effective multi-stage testing pattern for handling the complex nature of RESTful service request requirements. The tests build upon each other. After setting up some variables, the tests register a new API client. Then, they use the new client’s API key to obtain a JWT. The tests then use the JWT to authenticate themselves, and create a new vehicle. Finally, they use the new vehicle’s id and the JWT to verify the existence of the new vehicle.
Although some may consider using bash to test somewhat primitive, the script demonstrates the effectiveness of bash’s curl, grep, sed, and awk, along with regular expressions, to test our RESTful services.
#!/bin/sh

########################################################################
#
# title:       Virtual-Vehicles Project Integration Tests
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Performs integration tests on the Virtual-Vehicles
#              microservices
# to run:      sh tests.sh -v
#
########################################################################

echo --- Integration Tests ---

### VARIABLES ###
hostname="localhost"
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized
echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}:8581/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
--url "${url}" \
| grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}:8587/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}:8587/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
| sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}:8581/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d '{
    "year": 2015,
    "make": "Test",
    "model": "Foo",
    "color": "White",
    "type": "Sedan",
    "mileage": 250
}' --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}:8581/vehicles?filter=make::Test|model::Foo&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| tail -1 \
| sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}:8581/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
Since our tests are just a bash script, they can also be run separately from the command line, as in the screen grab below. The output, except for the colored text, is identical to what appears in the Jenkins console output.
Tear Down
Once the integration tests have completed, we ‘tear down’ the project by removing the Virtual-Vehicle images and containers. We simply repeat the first commands we ran at the start of the Jenkins build phase. You could choose to remove the tear down step, and use this job as a way to simply build and start your multi-container application.
# remove all images and containers from this build
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
The Complete Process
The below diagram shows the entire process, start to finish.
Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 3
Posted by Gary A. Stafford in Enterprise Software Development, Java Development, Software Development on June 5, 2015
Develop a well-architected and well-documented REST API, built on a tightly integrated collection of Java EE-based microservices.
Note: All code is available on GitHub. For the version of the code that matches the details in this blog post, check out the master branch, v1.0.0 tag (after running git clone …, run a git checkout tags/v1.0.0 command).
Previous Posts
In Part One of this series, we introduced the microservices-based Virtual-Vehicles REST API example. The vehicle-themed Virtual-Vehicles microservices offer a comprehensive set of functionality, through a REST API, to application developers. In Part Two, we installed a copy of the Virtual-Vehicles project from GitHub and gained a basic understanding of how RestExpress works. Finally, we discovered how to get the Virtual-Vehicles microservices up and running.
Part Three
In part three of this series, we will take the Virtual-Vehicles for a test drive (get it? maybe it was funnier the first time…). There are several tools we can use to test the Virtual-Vehicles API. One of my favorite tools is Postman. We will explore how to use Postman, along with the Virtual-Vehicles API documentation, to test the Virtual-Vehicles microservice’s endpoints, which compose the Virtual-Vehicles API.
Testing the API
There are three categories of tools available to test RESTful APIs: GUI-based applications, command line tools, and testing frameworks. Postman, Advanced REST Client, REST Console, and SmartBear’s SoapUI and SoapUI NG Pro are examples of GUI-based applications designed specifically to test RESTful APIs. cURL and GNU Wget are two examples of command line tools which, among other capabilities, can test APIs. Lastly, JUnit is an example of a testing framework that can be used to test a RESTful API; perhaps surprisingly, JUnit is not designed solely for unit tests. Each category of testing tools has its pros and cons, depending on your testing needs. We will explore all of these categories in this post as we test the Virtual-Vehicles REST API.
JUnit
JUnit is probably the best known of all Java unit testing frameworks. JUnit’s website describes JUnit as ‘a simple, open source framework to write and run repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks.’ Most Java developers turn to JUnit for unit testing. However, JUnit is capable of other forms of testing, including integration testing. In his post, ‘Unit Testing with JUnit – Tutorial’, Lars Vogel states ‘an integration test has the target to test the behavior of a component or the integration between a set of components. The term functional test is sometimes used as a synonym for integration test. This kind of tests allow you to translate your user stories into a test suite, i.e., the test would resemble an expected user interaction with the application.’
Testing the Virtual-Vehicles RESTful API’s operations with JUnit would be considered integration (functional) testing. At a minimum, to complete requests, we call one microservice, which in turn authenticates the JWT by calling another microservice. If authenticated, the first microservice makes a request to its MongoDB database. As Vogel stated, whereas a unit test targets a small unit of code, such as a method, the request/response operation is an integration between a set of components; testing an API call requires several dependencies.
The simplest example of testing the Virtual-Vehicles API with JUnit would be to test an HTTP GET request to return a single instance of a vehicle. The code below demonstrates how this might be done. Notice the request depends on helper methods (not included, for brevity). To request the vehicle, assuming we already have a registered client, we need a valid JWT. We also need a valid vehicle ObjectId. To obtain these two pieces of data, we call helper methods, which in turn make the necessary requests to retrieve a JWT and vehicle ObjectId.
/**
 * Test of HTTP GET to read a single vehicle.
 */
@Test
public void testVehicleRead() {
    String responseBody = "";
    String output;
    Boolean result = true;
    Boolean expResult = true;
    try {
        URL url = new URL(getBaseUrlAndPort() + "/vehicles/" + getVehicleObjectId());
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Bearer " + getJwt());
        conn.setRequestProperty("Accept", "application/json");
        if (conn.getResponseCode() != 200) {
            // if not 200 response code then fail test
            result = false;
        }
        BufferedReader br = new BufferedReader(new InputStreamReader(
                (conn.getInputStream())));
        while ((output = br.readLine()) != null) {
            responseBody = output;
        }
        if (responseBody.length() < 1) {
            // if response body is empty then fail test
            result = false;
        }
        conn.disconnect();
    } catch (IOException e) {
        // if MalformedURLException, ConnectException, etc. then fail test
        result = false;
    }
    assertEquals(expResult, result);
}
Below are the results of the above test, run in NetBeans IDE, using the built-in support for JUnit.
JUnit can also be run from the command line using the Maven goal, surefire:test:

mvn -q -Dtest=com.example.vehicle.objectid.VehicleControllerIT surefire:test
cURL
One of the best-known command line tools for all types of operations centered around calling a URL is cURL. According to their website, ‘curl is a command line tool and library for transferring data with URL syntax, supporting…HTTP, HTTPS…curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, HTTP/2, cookies, user+password authentication (Basic, Plain, Digest, CRAM-MD5, NTLM, Negotiate, and Kerberos), file transfer resume, proxy tunneling and more.’ I prefer the website’s briefer description, cURL ‘groks those URLs’.
Using cURL, we could make an HTTP PUT request to the Vehicle microservice’s /vehicles/{oid}.{format} endpoint. With cURL, we have the ability to add the JWT-based Authorization header and the raw request body, containing the modified vehicle object. Below is an example of that cURL command, which can be run from a terminal prompt.
curl --url 'http://virtual-vehicles.com:8581/vehicles/557310cfec7291b25cd3a1c2' \
-X PUT \
-H 'Pragma: no-cache' \
-H 'Cache-Control: no-cache' \
-H 'Accept: application/json; charset=UTF-8' \
-H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJ2aXJ0dWFsLXZlaGljbGVzLmNvbSIsImFwaUtleSI6IlJncjg0YzF6VkdtMFd1N25kWjd5UGNURSIsImV4cCI6MTQzMzY2ODEwNywiYWl0IjoxNDMzNjMyMTA3fQ.xglaKWufcj4TZtMXW3DLa9uy5JB_FJHxxtk_iF1WT6U' \
--data-binary $'{ "year": 2015, "make": "Chevrolet", "model": "Corvette Stingray", "color": "White", "type": "Coupe", "mileage": 902, "createdAt": "2015-05-09T22:36:04.808Z" }' \
--compressed
The response body contains the expected modified vehicle object in JSON-format, along with a 201 Created response status.
The cURL commands may be incorporated into many types of automated testing processes. These might be as simple as a bash script. The script could run a series of automated tests, including the following: register an API client, use the API key to create a JWT, use the JWT to create a new vehicle, use the new vehicle’s ObjectId to modify that same vehicle, delete that vehicle, confirm the vehicle is removed using the count operation, and return a test results report to the user. A sketch of the delete-and-confirm steps appears below.
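A minimal sketch of the delete-and-confirm steps, reusing the ${hostname}, ${jwt}, and ${id} variables from the integration test script shown later in this post. The DELETE endpoint matches the vehicle CRUD operations exercised later with Postman; the confirmation here simply re-requests the deleted vehicle rather than using the count operation:

# delete the test vehicle created by the earlier tests
url="http://${hostname}:8581/vehicles/${id}"
curl -X DELETE -H "Authorization: Bearer ${jwt}" --url "${url}"

# confirm the vehicle is gone: a follow-up GET should no longer return its 'id'
curl -s -X GET -H "Authorization: Bearer ${jwt}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null \
&& echo "RESULT: fail" || echo "RESULT: pass"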
cURL Commands from Chrome
Quick tip: instead of hand-coding complex cURL commands containing form data, URL parameters, and headers, use Chrome. First, open the Chrome Developer Tools (F12). Next, using the Postman – REST Client for Chrome, available in the Chrome Web Store, execute your HTTP request. Finally, in the ‘Network’ tab of the Developer Tools, find and right-click on the request and select ‘Copy as cURL’. You now have a complete cURL command equivalent of your Postman request, which you can paste directly into the command line or insert into a script. Below is an example of using the Postman – REST Client for Chrome to generate a cURL command.
curl --url 'http://virtual-vehicles.com:8581/vehicles/554e8bd4c830093007d8b949' \
-X PUT \
-H 'Pragma: no-cache' \
-H 'Origin: chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm' \
-H 'Accept-Encoding: gzip, deflate, sdch' \
-H 'Accept-Language: en-US,en;q=0.8' \
-H 'CSP: active' \
-H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJ2aXJ0dWFsLXZlaGljbGVzLmNvbSIsImFwaUtleSI6IlBUMklPSWRaRzZoU0VEZGR1c2h6U04xRyIsImV4cCI6MTQzMzU2MDg5NiwiYWl0IjoxNDMzNTI0ODk2fQ.4q6EMuxE0vS43zILjE6e1tYrb5ulCe69-1QTFLYGbFU' \
-H 'Content-Type: text/plain;charset=UTF-8' \
-H 'Accept: */*' \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36' \
-H 'Cache-Control: no-cache' \
-H 'Connection: keep-alive' \
-H 'X-FirePHP-Version: 0.0.6' \
--data-binary $'{ "year": 2015, "make": "Chevrolet", "model": "Corvette Stingray", "color": "White", "type": "Coupe", "mileage": 902, "createdAt": "2015-05-09T22:36:04.808Z" }' \
--compressed
The generated command is a bit verbose. Compare this command to the cURL command, earlier.
Wget
Similar to cURL, GNU Wget provides the ability to call the Virtual-Vehicles API’s endpoints. According to their website, ‘GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive command line tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc.’ Again, like cURL, we can run Wget commands from the command line or incorporate them into scripted testing processes. The Wget website contains excellent documentation.
Using Wget, we could make the same HTTP PUT request to the Vehicle microservice’s /vehicles/{oid}.{format} endpoint. Like cURL, we have the ability to add the JWT-based Authorization header and the raw request body, containing the modified vehicle object.
wget -O - 'http://virtual-vehicles.com:8581/vehicles/557310cfec7291b25cd3a1c2' \
--method=PUT \
--header='Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJ2aXJ0dWFsLXZlaGljbGVzLmNvbSIsImFwaUtleSI6IlJncjg0YzF6VkdtMFd1N25kWjd5UGNURSIsImV4cCI6MTQzMzY2ODEwNywiYWl0IjoxNDMzNjMyMTA3fQ.xglaKWufcj4TZtMXW3DLa9uy5JB_FJHxxtk_iF1WT6U' \
--header='Content-Type: text/plain;charset=UTF-8' \
--header='Accept: application/json' \
--body-data=$'{ "year": 2015, "make": "Chevrolet", "model": "Corvette Stingray", "color": "White", "type": "Coupe", "mileage": 902, "createdAt": "2015-05-09T22:36:04.808Z" }'
The response body contains the expected modified vehicle object in JSON-format, along with a 201 Created response status.
cURL Bash Testing
We can combine cURL and Wget with several of the tools bash provides, to develop fairly complex integration tests. The bash-based script below just scratches the surface as a complete set of integration tests. However, the tests demonstrate an efficient multi-stage test approach to handling the complex nature of RESTful service request requirements. The tests build upon each other.
After setting up some variables and doing a quick health check on one service, the tests register a new API client by calling the Authentication service. Next, they use the new client’s API key to obtain a JWT. The tests then use the JWT to authenticate themselves and create a new vehicle. Finally, they use the new vehicle’s id and the JWT to verify the existence of the new vehicle.
Although some may consider using bash to test somewhat primitive, the following script demonstrates the effectiveness of bash’s curl, grep, sed, and awk, along with regular expressions, to test our RESTful services. Note how we grep certain values from the response, such as the new client’s API key, and then use that value as a parameter for the following test request, such as to obtain a JWT.
#!/bin/sh

########################################################################
#
# title:       Virtual-Vehicles Project Integration Tests
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Performs integration tests on the Virtual-Vehicles
#              microservices
# to run:      sh tests_color.sh -v
#
########################################################################

echo --- Integration Tests ---

### VARIABLES ###
hostname="localhost"
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized
echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}:8581/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
--url "${url}" \
| grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}:8587/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}:8587/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
| sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}:8581/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d '{
    "year": 2015,
    "make": "Test",
    "model": "Foo",
    "color": "White",
    "type": "Sedan",
    "mileage": 250
}' --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}:8581/vehicles?filter=make::Test|model::Foo&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| tail -1 \
| sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}:8581/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
Since these tests are just a bash script, they can also be run from the command line, or easily called from a continuous integration tool, such as Jenkins CI or Hudson.
Postman
Postman, like several similar tools, is an application designed specifically for testing API endpoints. The Postman website describes Postman as a tool that allows you to ‘build, test, and document your APIs faster.’ There are two versions of Postman in the Chrome Web Store. They are Postman – REST Client, the in-browser extension, which we mentioned above, and Postman, the standalone application. There is also Postman Interceptor, which helps you send requests that use browser cookies through the Postman application.
Postman and similar applications have add-ons and extensions to extend their features. In particular, Postman, which is free, offers the Jetpacks paid extension. Jetpacks add the ability to ‘write and run tests inside Postman, extract data from responses, chain requests together and test requests with thousands of variations’. Jetpacks allow you to move beyond basic one-off API request-based testing, to automated regression and performance testing.
Using Postman
Let’s use the same HTTP PUT example we used with cURL and Wget, and see how we would perform the same task with Postman. In the first screen grab below, you can see all the elements of the HTTP request, including the RESTful API’s URL, the URI including the vehicle’s ObjectId (/vehicles/{ObjectId}.{format}), the HTTP method (PUT), the Authorization Header with JWT (Bearer), and the raw request body. The raw request body contains a JSON representation of the vehicle we want to update. Note how Postman saves the request in history so we can easily replay it later.
In the next screen-grab, we see the response to the HTTP PUT request. Note the response body, response status, timing, and response headers.
Looking at the response body in Postman, you can easily see how RestExpress demonstrates the RESTful principle we discussed in Part Two of the series, HATEOAS (Hypermedia as the Engine of Application State). Note the links to this vehicle (‘self’ href) and to the entire vehicles collection (‘up’ href).
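For comparison, the same update can be issued from the command line. Below is a minimal curl sketch, assuming the local ‘dev’ port used by the test script above, with ${jwt} and ${id} standing in for a valid JWT and an existing vehicle’s ObjectId:

# Update an existing vehicle by its ObjectId
curl -X PUT -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d '{
"year": 2015,
"make": "Test",
"model": "Foo",
"color": "Red",
"type": "Sedan",
"mileage": 250
}' --url "http://localhost:8581/vehicles/${id}"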
Postman Collections
A great feature of Postman with Jetpacks is Collections. Collections are sets of requests, which can be saved, recalled, and shared. The Collection Runner runs requests in a collection, in the order in which you set them. Ordered collections are ideal for the Virtual-Vehicles API. The screen grab below shows a collection of requests, arranged in the order we would execute them to test the Virtual-Vehicles API, as it applies specifically to vehicle CRUD operations (a command-line sketch of the final delete-and-confirm steps follows this list):
- Execute an HTTP POST request to register the new API client, passing the application name and a shared secret in the request; receive the new client’s API key in response
- Execute an HTTP GET request, passing the new client’s API key and the shared secret in the request; receive the new JWT in response
- Execute an HTTP POST request to create a new vehicle, passing the JWT in the header for authentication (used for all following requests); receive the new vehicle object in response
- Execute an HTTP PUT request to modify the new vehicle, using the vehicle’s ObjectId; receive the modified vehicle object in response
- Execute an HTTP GET request for the modified vehicle, to confirm it exists in the expected state; receive the vehicle object in response
- Execute an HTTP DELETE request to delete the new vehicle, using the vehicle’s ObjectId
- Execute an HTTP GET request for the new vehicle, to confirm it has been removed; receive a 404 Not Found status response, as expected
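As promised above, here is a rough command-line equivalent of the final two steps, assuming the ${jwt} and ${id} variables captured by the earlier test script:

# Delete the new vehicle by its ObjectId
curl -X DELETE -H "Authorization: Bearer ${jwt}" \
--url "http://localhost:8581/vehicles/${id}"

# Confirm the vehicle is gone; expect a 404 Not Found status
status=$(curl -s -o /dev/null -w "%{http_code}" \
-H "Authorization: Bearer ${jwt}" \
--url "http://localhost:8581/vehicles/${id}")
[ "${status}" -eq 404 ] && echo "RESULT: pass" || echo "RESULT: fail"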
Using saved collections for testing the Virtual-Vehicles API is a real time-saver. However, the collections cannot easily be re-run without hand-editing or some advanced scripting. In the simple example above, we hard-coded a JWT and a vehicle ObjectId in the requests. Unfortunately, the JWT has an expiration of only 10 hours by default. More immediately, the ObjectId is unique; the earlier collection test run created, then deleted, the vehicle with that ObjectId.
Negative Testing
You may also perform negative testing with Postman. For example, do you receive the expected response when you don’t include the Authorization Header with the JWT in a request (401 Unauthorized status)? When you include a JWT which has expired (401 Unauthorized status)? When you request a vehicle whose ObjectId is incorrect or is not found in the database (400 Bad Request status)? Do you receive the expected response when you call an actual service, but an endpoint that doesn’t exist (405 Method Not Allowed)?
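The same negative cases can be spot-checked from the command line. A quick sketch, again assuming the local ‘dev’ ports; the expected status codes are those listed above:

# No Authorization header: expect 401 Unauthorized
curl -s -o /dev/null -w "%{http_code}\n" \
--url "http://localhost:8581/vehicles"

# Malformed or expired JWT: expect 401 Unauthorized
curl -s -o /dev/null -w "%{http_code}\n" \
-H "Authorization: Bearer not.a.validjwt" \
--url "http://localhost:8581/vehicles"

# Real service, but a nonexistent endpoint: expect 405 Method Not Allowed
curl -s -o /dev/null -w "%{http_code}\n" \
-H "Authorization: Bearer ${jwt}" \
--url "http://localhost:8581/automobiles"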
Postman Test Automation
In addition to manually viewing the HTTP response to verify the results of a request, Postman allows you to write and run automated tests for each request. According to their website, a ‘Postman test is essentially JavaScript code which sets values for the special tests object. You can set a descriptive key for an element in the object and then say if it’s true or false’. This allows you to write a set of response validation tests for each request.
Below is a quick example of testing the same HTTP POST request, used to create the new API client, above. In this example, we:
- Test that the Content-Type response header is present
- Test that the HTTP POST successfully returned a 201 status code
- Test that the new client’s API key was returned in the response body
- Test that the response time was less than 200ms
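Since Postman tests are just JavaScript setting keys on the tests object, the four checks above might look roughly like the following. This is a plausible sketch using Postman’s legacy test syntax, not the exact tests from the screen grab:

// Postman test script (legacy 'tests' object syntax)
tests["Content-Type header is present"] = postman.getResponseHeader("Content-Type") !== null;
tests["Status code is 201"] = responseCode.code === 201;
tests["apiKey is returned in the response body"] = responseBody.has("apiKey");
tests["Response time is less than 200ms"] = responseTime < 200;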
Reviewing Postman’s ‘Tests’ tab, above, observe that the four tests have run successfully. Using Postman’s testing feature, you can create even more advanced tests, eliminating the need to manually validate responses.
This post demonstrates a small subset of the features Postman and other similar applications provide for testing RESTful APIs. The tools and processes you use to test your RESTful API will depend on your stage of development and testing, as well as the technology stacks on which you build and host your services.
Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 2
Posted by Gary A. Stafford in Enterprise Software Development, Java Development, Software Development on May 31, 2015
Develop a well-architected and well-documented REST API, built on a tightly integrated collection of Java EE-based microservices.
Note: All code available on GitHub. For the version of the code that matches the details in this blog post, check out the master branch, v1.0.0 tag (after running git clone …, run a ‘git checkout tags/v1.0.0’ command).
Previous Post
In Part One of this series, we introduced the microservices-based Virtual-Vehicles REST API example. The vehicle-themed Virtual-Vehicles microservices offer a comprehensive set of functionality, through a REST API, to application developers. The developers, in turn, will use the Virtual-Vehicles REST API’s functionality to build applications and games for their end-users.
In Part One, we also decided on the proper number of microservices and the responsibilities of each. We also determined the functionality each microservice needs in order to meet the hypothetical functional and nonfunctional requirements of Virtual-Vehicles. To review, the four microservices we are building are as follows:
Virtual-Vehicles REST API Resources

| Microservice | Purpose (Business Capability) | Functions |
| --- | --- | --- |
| Authentication | Manage API clients and JWT authentication | Basic CRUD operations, JWT creation, ping, and authenticateJwt (see below) |
| Vehicle | Manage virtual vehicles | Basic CRUD operations, ping, and authenticateJwt (see below) |
| Maintenance | Manage maintenance on vehicles | Basic CRUD operations, ping, and authenticateJwt (see below) |
| Valet Parking | Manage a valet service to park vehicles | Basic CRUD operations, ping, and authenticateJwt (see below) |
To review, the first five functions for each service are all basic CRUD operations: create (POST), read (GET), readAll (GET), update (PUT), and delete (DELETE). The readAll function also has find, count, and pagination functionality, using query parameters. Unfortunately, RestExpress does not support PATCH for updates. However, I have updated RestExpress’ PUT HTTP methods to return the modified object in the response body instead of nothing (a 200 OK status with a body, rather than a 201 Created status with an empty body). See StackOverflow for an explanation.
All services also have an internal authenticateJwt function, which authenticates the JWT, passed in the HTTP request header, before performing any operation. Additionally, all services have a basic health-check function, ping (GET). There are only a few other functions required for our Virtual-Vehicles example, such as those for creating JWTs.
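To make the readAll query functionality concrete, here is a hedged sketch of the kinds of requests it supports, based on the filter and limit parameters used in the test script shown earlier on this page (the offset parameter is an assumption for illustration):

# Find vehicles by make and model, limiting the result set
curl -H "Authorization: Bearer ${jwt}" \
--url "http://localhost:8581/vehicles?filter=make::Test|model::Foo&limit=10"

# Page through results (offset parameter assumed here)
curl -H "Authorization: Bearer ${jwt}" \
--url "http://localhost:8581/vehicles?limit=10&offset=20"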
Part Two Introduction
In Part Two, we will build our four Virtual-Vehicles microservices. Recall from our first post, we will be using RestExpress. RestExpress composes best-of-breed open-source tools to enable quickly creating RESTful microservices that embrace industry best practices. Those best-of-breed tools include Java EE, Maven, MongoDB, and Netty, among others.
In this post, we will accomplish the following:
- Create a default microservice project in NetBeans, using the RestExpress MongoDB Maven Archetype
- Understand the basic structure of a default RestExpress microservice project
- Review the changes made to the default RestExpress microservice project to create the Virtual-Vehicles example
- Compile and run the individual microservices directly from NetBeans
I used NetBeans IDE 8.0.2 on Linux Ubuntu 14.10 to build the microservices. You may also follow along in other IDEs, such as Eclipse or IntelliJ, on Mac or Windows. We won’t cover installing MongoDB, Maven, and Java. I’ll assume that if you’re building enterprise applications, you have the basics covered.
Using the RestExpress MongoDB Maven Archetype
All the code for this project is available on GitHub. However, to understand RestExpress, you should go through the exercise of scaffolding a new microservice using the RestExpress MongoDB Maven Archetype. You will also be able to use this default microservice project to compare and contrast with the modified versions used in the Virtual-Vehicles example. The screen grabs below demonstrate how to create a new microservice project using the RestExpress MongoDB Maven Archetype. At the time of this post, the archetype version was restexpress-mongodb version 1.15.
Default Project Architecture
Reviewing the two screen grabs below (Project tab), note the key components of the RestExpress MongoDB Maven project, which we just created:
- Base Package (com.example.vehicle)
  - Configuration class reads in environment properties (see Files tab) and instantiates controllers
  - Constants class contains project constants
  - Relationships class defines linking resources, which aid service discoverability (HATEOAS)
  - Main executable class
  - Routes class defines the routes (endpoints) exposed by the service and the corresponding controller class
- Model/Controllers Packages (com.example.vehicle.objectid and .uuid)
  - Entity class defines the data entity, a Vehicle in this case
  - Controller class contains the methods executed when the route (endpoint) is called
  - Repository class defines the connection details to MongoDB
  - Service class contains the calls to the persistence layer, validation, and business logic
- Serialization Package (com.example.vehicle.serialization)
  - XML and JSON serialization classes
  - Object ID and UUID serialization and deserialization classes
Again, I strongly recommend reviewing each of these packages’ classes. To understand the core functionality of RestExpress, you must understand the relationships between a RestExpress microservice’s Route, Controller, Service, Repository, Relationships, and Entity classes. Beyond the default Maven project itself, there are limited materials available on the Internet. I would recommend the RestExpress website on GitHub, the RestExpress Google Group forum, and the 3-part YouTube video series, Instant REST Services with RESTExpress.
Unit Tests?
Disappointingly, the current RestExpress MongoDB Maven Archetype sample project does not come with sample JUnit unit tests. I am tempted to start writing my own unit tests if I decide to continue using the RestExpress microservices framework for future projects.
Properties Files
Also included in the default RestExpress MongoDB Maven project is a Java properties file (environment.properties). This properties file is displayed in the Files tab, as shown below. The default properties file is located in the ‘dev’ environment config folder. Later, we will create an additional properties file for our production environment.
Ports
Within the ‘dev’ environment, each microservice is configured to start on a separate port (i.e. port = 8581). Feel free to change the services’ port mappings if they conflict with previously configured components running on your system. Be careful when changing the Authentication service’s port, 8587, since this port is also mapped in all other microservices, using the authentication.port property (authentication.port = 8587). Make sure you change both properties if you change the Authentication service’s port mapping.
Base URL
Also in the properties files is the base.url property. This property defines the base URL at which the microservice’s endpoints expect to receive calls, and which the services use when making internal calls to one another. In our post’s example, this property in the ‘dev’ environment is set to localhost (base.url = http://localhost). You could map an alternate hostname from your hosts file (/etc/hosts). We will do this in a later post, in our ‘prod’ environment, mapping the base.url property to Virtual-Vehicles (base.url = http://virtual-vehicles.com). In the ‘dev’ environment properties file, MongoDB is also mapped to localhost (i.e. mongodb.uri = mongodb://localhost:27017/virtual_vehicle).
Metrics Plugin and Graphite
RestExpress also uses the properties file to hold configuration properties for the Metrics Plugin and Graphite. The Metrics Plugin and Graphite are both first-class citizens of RestExpress. Below is a copy of the Vehicle service’s environment.properties file for the ‘dev’ environment. Note the Metrics Plugin and Graphite are both disabled in the ‘dev’ environment.
# Default is 8081
port = 8581
# Port used to call Authentication service endpoints
authentication.port = 8587
# The size of the executor thread pool
# (that can handle blocking back-end processing).
executor.threadPool.size = 20
# A MongoDB URI/Connection string
# see: http://docs.mongodb.org/manual/reference/connection-string/
mongodb.uri = mongodb://localhost:27017/virtual_vehicle
# The base URL, used as a prefix for links returned in data
base.url = http://localhost
# Configuration for the MetricsPlugin/Graphite
metrics.isEnabled = false
#metrics.machineName =
metrics.prefix = web1.example.com
metrics.graphite.isEnabled = false
metrics.graphite.host = graphite.example.com
metrics.graphite.port = 2003
metrics.graphite.publishSeconds = 60
Choosing a Data Identification Method
RestExpress offers two identification models for managing data: the MongoDB ObjectId and the Universally Unique Identifier (UUID). MongoDB uses an ObjectId to uniquely identify a document within a collection. The ObjectId is a special 12-byte BSON type that guarantees the uniqueness of the document within the collection. Alternately, you can use the UUID identification model, which uses a UUID instead of a MongoDB ObjectId. Entities in the UUID model also contain createdAt and updatedAt properties that are automatically maintained by the persistence layer. You may stick with ObjectId, as we will in the Virtual-Vehicles example, or choose the UUID. If you use multiple database engines for your projects, using UUIDs will give you a universal identification method.
Project Modifications
Many small code changes differentiate our Virtual-Vehicles microservices from the default RestExpress Maven Archetype project. Most changes are superficial; nothing changed about how RestExpress functions. Changes between the screen grabs above, showing the default project, and the screen grabs below, showing the final Virtual-Vehicles microservices, include:
- Remove all packages, classes, and code references to the UUID identification methods (example uses ObjectId)
- Rename several classes for convenience (dropped use of the word ‘Entity’)
- Add the Utilities (com.example.utilities) and Authentication (com.example.authenticate) packages
MongoDB
Following a key principle of microservices mentioned in the first post, Decentralized Data Management, each microservice will have its own instance of a MongoDB database associated with it. The below diagram shows each service and its corresponding database, collection, and fields.
From the MongoDB shell, we can observe the individual instances of the four microservice’s databases.
In the case of the Vehicle microservice, the associated MongoDB database is virtual_vehicle. This database contains a single collection, vehicles. While the properties file defines the database name, the Vehicle entity class defines the collection name, using the org.mongodb.morphia.annotations package’s @Entity annotation.
@Entity("vehicles")
public class Vehicle
        extends AbstractMongodbEntity
        implements Linkable {

    private int year;
    private String make;
    private String model;
    private String color;
    private String type;
    private int mileage;

    // abridged...
Looking at the virtual_vehicle database in the MongoDB shell, we see that the sample document’s fields correspond to the Vehicle entity class’s properties.
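For readers without the screen grab handy, such a shell session might look roughly like the following; the ObjectId value is elided, and the field values are illustrative (they match the sample vehicle used elsewhere on this page):

> use virtual_vehicle
switched to db virtual_vehicle
> db.vehicles.findOne()
{
    "_id" : ObjectId("..."),
    "year" : 2015,
    "make" : "Test",
    "model" : "Foo",
    "color" : "White",
    "type" : "Sedan",
    "mileage" : 250
}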
Each of the microservices’ MongoDB databases is configured in its environment.properties file, using the mongodb.uri property. In the ‘dev’ environment, we use localhost as our host URL (i.e. mongodb.uri = mongodb://localhost:27017/virtual_vehicle).
Authentication and JSON Web Tokens
The three microservices, Vehicle, Valet, and Maintenance, are almost identical. However, the Authentication microservice is unique. This service is called by each of the other three services, as well as being called directly. The Authentication service provides a very basic level of authentication using JSON Web Tokens (JWT), pronounced ‘jot’.
Why do we want authentication? We want to confirm that the requester using the Virtual-Vehicles REST API is the registered API client they claim to be. JWT allows us to achieve this requirement with minimal effort.
According to jwt.io, ‘a JSON Web Token is a compact URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JSON object that is digitally signed using JSON Web Signature (JWS).‘ I recommend reviewing the JWT draft standard to fully understand the structure, creation, and use of JWTs.
Virtual-Vehicles Authentication Process
There are different approaches to implementing JWT. In our Virtual-Vehicles REST API example, we use the following process for JWT authentication (a rough curl sketch of the flow follows this list):
- Register the new API client by supplying the application name and a shared secret (one time only)
- Receive an API key in response (one time only)
- Obtain a JWT using the API key and the shared secret (each user session or renew when the previous JWT expires)
- Include the JWT in each API call
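Sketched with curl, assuming the ‘dev’ ports shown later in this post, the flow looks something like this; the client registration endpoint path (/clients) is an assumption for illustration, while the /jwts endpoint appears in the test script earlier on this page:

# 1-2. Register the API client (one time only); endpoint path assumed
apiKey=$(curl -s -X POST -d '{"application": "myapp", "secret": "mysecret"}' \
--url "http://localhost:8587/clients" | grep -o '[a-zA-Z0-9]\{24\}')

# 3. Obtain a JWT using the API key and the shared secret
jwt=$(curl -s --url "http://localhost:8587/jwts?apiKey=${apiKey}&secret=mysecret")

# 4. Include the JWT in each API call
curl -H "Authorization: Bearer ${jwt}" --url "http://localhost:8581/vehicles"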
In our example, we are passing four JSON fields in our set of claims. Those fields are the issuer (‘iss’), the API key, the expiration (‘exp’), and the time the JWT was issued (‘iat’). Both the ‘iss’ and the ‘exp’ claims are defined in the Authentication service’s environment.properties file (jwt.issuer and jwt.expire.length).
Expiration and issued date/time use the JWT standard’s recommended “Seconds Since the Epoch”. The default expiration for a Virtual-Vehicles JWT is set to an arbitrary 10 hours from the time the JWT was issued (jwt.expire.length = 36000). That amount, 36,000, is equivalent to 10 hours x 60 minutes/hour x 60 seconds/minute.
Decoding a JWT
Using the jwt.io site’s JWT Debugger tool, I have decoded a sample JWT issued by the Virtual-Vehicles REST API, generated by the Authentication service. Observe the three parts of the JWT: the JOSE Header, the Claims Set, and the JSON Web Signature (JWS).
The JWT’s header indicates that our JWT is a JWS that is MACed using the HMAC SHA-256 algorithm. The shared secret, passed by the API client, represents the HMAC secret cryptographic key. The secret is used in combination with the cryptographic hash function to calculate the message authentication code (MAC). In the example below, note how the API client’s shared secret is used to validate our JWT’s JWS.
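For readers without the screen grab, a decoded Virtual-Vehicles JWT might look roughly like the following; the claim values are hypothetical, and the exact key used for the API key claim is an assumption:

JOSE Header:
{
    "alg": "HS256",
    "typ": "JWT"
}

Claims Set (note exp - iat = 36000 seconds, or 10 hours):
{
    "iss": "virtual-vehicles.com",
    "apiKey": "hypotheticalapikey012345",
    "exp": 1433106000,
    "iat": 1433070000
}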
Sequence Diagrams of Authentication Process
Below are three sequence diagrams, which detail the following processes: API client registration process, obtaining a new JWT, and a REST call being authenticated using the JWT. The end-user of the API self-registers their application using the Authentication service and receives back an API key. The API key is unique to that client.
The end-user application then uses the API key and the shared secret to receive a JWT from the Authentication service.
After receiving the JWT, the end-user application passes the JWT in the header of each API request. The JWT is validated by calling the Authentication service. If the JWT is valid, the request is fulfilled. If not valid, a ‘401 Unauthorized’ status is returned.
JWT Validation
The JWT draft standard recommends how to validate a JWT. Our Virtual-Vehicles Authentication microservice uses many of those criteria to validate the JWT, which include:
- API Key – retrieve the API client’s shared secret from MongoDB, using the API key contained in the JWT’s claims set (if the secret is returned, the API key is valid)
- Algorithm – confirm the algorithm (‘alg’), found in the JWT header, which was used to encode the JWT, is ‘HS256’ (HMAC SHA-256)
- Valid JWS – using the client’s shared secret from #1 above, validate the HMAC SHA-256 signed JWS
- Expiration – confirm the JWT is not expired (‘exp’)
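The signature check in #3 can be done with nothing more than the JDK. The following is a minimal sketch, not the Authentication service’s actual implementation; it assumes Java 8’s java.util.Base64 for the base64url decoding:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSignatureCheck {

    // Recompute the HMAC SHA-256 MAC over 'header.payload' and compare it,
    // in constant time, to the JWS segment of the token.
    public static boolean hasValidSignature(String jwt, String sharedSecret)
            throws Exception {
        String[] segments = jwt.split("\\.");
        if (segments.length != 3) {
            return false; // not a well-formed JWS compact serialization
        }
        String signingInput = segments[0] + "." + segments[1];

        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(
                sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] computed = hmac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8));

        byte[] provided = Base64.getUrlDecoder().decode(segments[2]);
        return MessageDigest.isEqual(computed, provided);
    }
}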
Inter-Service Communications
By default, the RestExpress Archetype project does not offer an example of communications between microservices. Service-to-service communication for microservices is most often done using the same HTTP-based REST calls used by our Virtual-Vehicles REST API. Alternately, a message broker, such as RabbitMQ, Kafka, ActiveMQ, or Kestrel, is often used. Both methods of communicating between microservices, referred to as ‘inter-service communication mechanisms’ by InfoQ, have their pros and cons. The InfoQ website has an excellent microservices post, which discusses the topic of service-to-service communication.
For the Virtual-Vehicles microservices example, we will use HTTP-based REST calls for inter-service communications. The primary service-to-service communication in our example is between the three microservices, Vehicle, Valet, and Maintenance, and the Authentication microservice. The three services validate the JWT passed in each request to a CRUD operation by calling the Authentication service and passing the JWT, as shown in the sequence diagram, above. Validation is done using an HTTP GET request to the Authentication service’s .../jwts/{jwt} endpoint. The Authentication service’s method, called by this endpoint, minus some logging and error handling, looks like the following:
public boolean authenticateJwt(Request request, String baseUrlAndAuthPort) {
    String jwt, output, valid = "";

    // Extract the JWT from the 'Authorization: Bearer <token>' request header
    try {
        jwt = (request.getHeader("Authorization").split(" "))[1];
    } catch (NullPointerException | ArrayIndexOutOfBoundsException e) {
        return false;
    }

    // Validate the JWT by calling the Authentication service's /jwts/{jwt} endpoint
    try {
        URL url = new URL(baseUrlAndAuthPort + "/jwts/" + jwt);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        if (conn.getResponseCode() != 200) {
            return false;
        }
        // The Authentication service returns 'true' or 'false' in the response body
        BufferedReader br = new BufferedReader(new InputStreamReader(
                (conn.getInputStream())));
        while ((output = br.readLine()) != null) {
            valid = output;
        }
        conn.disconnect();
    } catch (IOException e) {
        return false;
    }
    return Boolean.parseBoolean(valid);
}
Primarily, we are using the java.net and java.io packages, along with the org.restexpress.Request class, to build and send our HTTP request to the Authentication service. Alternately, you could use just the org.restexpress package to construct the request and handle the response. This same basic method structure can be used to create unit tests for your services’ endpoints; see the JUnit sketch in the Health Ping section below.
Health Ping
Each of the Virtual-Vehicles microservices contain a DiagnosticController
in the .utilities
package. In our example, we have created a ping()
method. This simple method, called through the .../utils/ping
endpoint, should return a 200 OK
status and a boolean value of ‘true’, indicating the microservice is running and reachable. This route’s associated method could not be simpler:
public void ping(Request request, Response response) {
    response.setResponseStatus(HttpResponseStatus.OK);
    response.setResponseCode(200);
    response.setBody(true);
}
The ping health check can even be accessed with a simple curl command: curl localhost:8581/vehicles/utils/ping.
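As suggested earlier, this same request structure lends itself to simple endpoint tests. Below is a minimal JUnit 4 sketch, assuming the Vehicle service is already running locally on its ‘dev’ port; it is illustrative, not part of the project’s source:

import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class PingIT {

    // Integration-style check against the running Vehicle service's health endpoint
    @Test
    public void pingReturns200() throws Exception {
        URL url = new URL("http://localhost:8581/vehicles/utils/ping");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        assertEquals(200, conn.getResponseCode());
        conn.disconnect();
    }
}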
In a real-world application, we would add additional health checks to all services, providing additional insight into the health and performance of each microservice, as well as the service’s dependencies.
API Documentation
A well-written RESTful API will not require extensive documentation to understand the API’s operations. Endpoints will be discoverable through linking (see the Response Body links section in the example below). API documentation should provide the HTTP method, required headers and URL parameters, expected response body and response status, and so forth.
An API should be documented before any code is written, similar to TDD. The API documentation is the result of a clear understanding of requirements. The API documentation should make the coding even easier, since the documentation serves as a further refinement of the requirements. The refined requirements are effectively an architectural plan for the microservice’s code structure.
Sample Documentation
Below, is a sample of the Virtual-Vehicles REST API documentation. It details the function responsible for creating a new API client. The documentation provides a brief description of the function, the operation’s endpoint (URI), HTTP method, request parameters, expected response body, expected response status, and even a view of the MongoDB collection’s document for a new API client.
You can download a PDF version of the Virtual-Vehicles RESTful API documentation on GitHub or review the source document on Google Docs. It contains general notes about the API, and details about every one of the API’s operations.
Running the Individual Microservices
For development and testing purposes, the easiest way to start the microservices is in NetBeans, using the Run command. The Run command executes the Maven exec goal. Based on the DEFAULT_ENVIRONMENT constant in the org.restexpress.util Environment class, RestExpress will use the ‘dev’ environment’s environment.properties file, in the project’s /config/dev directory.
Alternately, you can use the RestExpress project’s recommended command from a terminal prompt to start each microservice from its root directory (mvn exec:java -Dexec.mainClass=test.Main -Dexec.args="dev"). You can also use this command to switch from the ‘dev’ to the ‘prod’ environment properties (-Dexec.args="prod").
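To start all four services at once, a small shell loop works as well. A hedged sketch; the per-service directory names here are assumptions, not the repository’s actual layout:

# Start each microservice in the background, using the 'dev' environment
for service in authentication vehicle maintenance valet; do
  (cd ${service} && mvn exec:java -Dexec.mainClass=test.Main -Dexec.args="dev" &)
done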
You may use a variety of commands to confirm all the microservices are running. I prefer something basic, like sudo netstat -tulpn | grep 858[0-9], which will find all the ports within the ‘dev’ port range. For more in-depth information, you can use a command like ps -aux | grep com.example | grep -v grep.
Part Three: Testing our Services
We now have a copy of the Virtual-Vehicles project pulled from GitHub, a basic understanding of how RestExpress works, and our four microservices running on different ports. In Part Three of this series, we will take them for a drive (get it?). There are many ways to test our services’ endpoints. One of my favorite tools is Postman. We will explore how to use several tools, including Postman, and our API documentation, to test our microservices’ endpoints.