Posts Tagged Jenkins

Provision and Deploy a Consul Cluster on AWS, using Terraform, Docker, and Jenkins

Cover2

Introduction

Modern DevOps tools, such as HashiCorp’s Packer and Terraform, make it easier to provision and manage complex cloud architecture. Utilizing a CI/CD server, such as Jenkins, to securely automate the use of these DevOps tools ensures quick and consistent results.

In a recent post, Distributed Service Configuration with Consul, Spring Cloud, and Docker, we built a Consul cluster using Docker swarm mode, to host distributed configurations for a Spring Boot application. The cluster was built locally with VirtualBox. This architecture is fine for development and testing, but not for use in Production.

In this post, we will deploy a highly available three-node Consul cluster to AWS. We will use Terraform to provision a set of EC2 instances and accompanying infrastructure. The instances will be built from a hybrid AMI containing the new Docker Community Edition (CE). In a recent post, Baking AWS AMI with new Docker CE Using Packer, we provisioned an Ubuntu AMI with Docker CE, using Packer. We will deploy a Docker container, containing an instance of the Consul server, to each EC2 host.

All source code can be found on GitHub.

Jenkins

I have chosen Jenkins to automate all of the post’s build, provisioning, and deployment tasks. However, none of the code is written specifically for Jenkins; you may run all of it from the command line.

For this post, I have built four projects in Jenkins, as follows:

  1. Provision Docker CE AMI: Builds Ubuntu AMI with Docker CE, using Packer
  2. Provision Consul Infra AWS: Provisions Consul infrastructure on AWS, using Terraform
  3. Deploy Consul Cluster AWS: Deploys Consul to AWS, using Docker
  4. Destroy Consul Infra AWS: Destroys Consul infrastructure on AWS, using Terraform

Jenkins UI

We will primarily be using the ‘Provision Consul Infra AWS’, ‘Deploy Consul Cluster AWS’, and ‘Destroy Consul Infra AWS’ Jenkins projects in this post. The fourth Jenkins project, ‘Provision Docker CE AMI’, automates the steps found in the recent post, Baking AWS AMI with new Docker CE Using Packer, to build the AMI used to provision the EC2 instances in this post.

Consul AWS Diagram 2

Terraform

Using Terraform, we will provision EC2 instances in three different Availability Zones within the US East 1 (N. Virginia) Region. Using Terraform’s Amazon Web Services (AWS) provider, we will create the following AWS resources:

  • (1) Virtual Private Cloud (VPC)
  • (1) Internet Gateway
  • (1) Key Pair
  • (3) Elastic Cloud Compute (EC2) Instances
  • (2) Security Groups
  • (3) Subnets
  • (1) Route
  • (3) Route Tables
  • (3) Route Table Associations

The final AWS architecture should resemble the following:

Consul AWS Diagram

Production Ready AWS

Although we have provisioned a fairly complete VPC for this post, it is far from being ready for Production. I have created two security groups, limiting the ingress and egress to the cluster. However, further productionizing the environment would require additional security hardening. At a minimum, you should consider adding public/private subnets, NAT gateways, network access control list (network ACL) rules, and the use of HTTPS for secure communications.

In production, applications would communicate with Consul through local Consul clients. Consul clients would take part in the LAN gossip pool from different subnets, Availability Zones, Regions, or VPCs using VPC peering. Communications would be tightly controlled by IAM, VPC, subnet, IP address, and port.

Also, you would not have direct access to the Consul UI through a publicly exposed IP or DNS address. Access to the UI would be removed altogether or locked down to specific IP addresses, and access restricted to secure communication channels.

Consul

We will achieve high availability (HA) by clustering three Consul server nodes across the three Elastic Cloud Compute (EC2) instances. In this minimally sized, three-node cluster of Consul servers, we are protected from the loss of one Consul server node, one EC2 instance, or one Availability Zone (AZ). The cluster will still maintain a quorum of two nodes. An additional level of HA that Consul supports, multiple datacenters (multiple AWS Regions), is not demonstrated in this post.

Docker

Having Docker CE already installed on each EC2 instance allows us to execute remote Docker commands over SSH from Jenkins. These commands will deploy and configure a Consul server node, within a Docker container, on each EC2 instance. The containers are built from HashiCorp’s latest Consul Docker image pulled from Docker Hub.

Getting Started

Preliminary Steps

If you have built infrastructure on AWS with Terraform, these steps should be familiar to you:

  1. First, you will need an AMI with Docker. I suggest reading Baking AWS AMI with new Docker CE Using Packer.
  2. You will need an AWS IAM User with the proper access to create the required infrastructure. For this post, I created a separate Jenkins IAM User with PowerUser level access.
  3. You will need to have an RSA public-private key pair, which can be used to SSH into the EC2 instances and install Consul.
  4. Ensure you have your AWS credentials set. I usually source mine from a .env file, as environment variables (see the sketch after this list). Jenkins can securely manage credentials, using secret text or files.
  5. Fork and/or clone the Consul cluster project from GitHub.
  6. Change the aws_key_name and public_key_path variable values to your own RSA key, in the variables.tf file.
  7. Change the aws_amis_base variable values to your own AMI ID (see step 1).
  8. If you do not want to use the US East 1 Region and its AZs, modify the variables.tf, network.tf, and instances.tf files.
  9. Disable Terraform’s remote state or modify the resource to match your remote state configuration, in the main.tf file. I am using an Amazon S3 bucket to store my Terraform remote state.
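
As a reference for step 4, a minimal .env file, sourced before running Terraform or the AWS CLI, might contain little more than the standard AWS environment variables (the values shown are placeholders):

# .env - AWS credentials as environment variables (placeholder values)
export AWS_ACCESS_KEY_ID="your_access_key_id"
export AWS_SECRET_ACCESS_KEY="your_secret_access_key"
export AWS_DEFAULT_REGION="us-east-1"

Source the file (source .env) before running any of the scripts below, or configure the same values as Jenkins secret text credentials.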

Building an AMI with Docker

If you have not already built an Amazon Machine Image (AMI) for use in this post, you can do so using the scripts provided in the previous post’s GitHub repository. To automate the AMI build task, I built the ‘Provision Docker CE AMI’ Jenkins project. Identical to the other three Jenkins projects in this post, this project has three main tasks, which include: 1) SCM: clone the Packer AMI GitHub project, 2) Bindings: set up the AWS credentials, and 3) Build: run Packer.

The SCM and Bindings tasks are identical to the other projects (see below for details), except for the use of a different GitHub repository. The project’s Build step, which runs the packer_build_ami.sh script, looks as follows:

jenkins_13
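
The actual script is in the Packer AMI GitHub project; a minimal sketch of the equivalent commands, assuming a hypothetical template named docker_ce_ami.json, might look like this:

# validate the template, then build the AMI; -machine-readable makes it
# easier to pull the resulting AMI ID out of the build output
packer validate docker_ce_ami.json
packer build -machine-readable docker_ce_ami.json | tee packer_build.log

# the artifact line has the format: timestamp,builder,artifact,0,id,region:ami-xxxxxxxx
grep 'artifact,0,id' packer_build.log | cut -d, -f6 | cut -d: -f2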

The resulting AMI ID will need to be manually placed in Terraform’s variables.tf file before provisioning the AWS infrastructure. The new AMI ID will be displayed in Jenkins’ build output.

jenkins_14

Provisioning with Terraform

Based on the modifications you made in the Preliminary Steps, execute the terraform validate command to confirm your changes. Then, run the terraform plan command to review the plan. Assuming there were no errors, finally, run the terraform apply command to provision the AWS infrastructure components.
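
Run from the command line, that sequence is simply:

terraform validate
terraform plan
terraform apply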

In Jenkins, I have created the ‘Provision Consul Infra AWS’ project. This project has three tasks, which include: 1) SCM: clone the GitHub project, 2) Bindings: set up the AWS credentials, and 3) Build: run Terraform. Those tasks look as follows:

Jenkins_08.png

You will obviously need to use your modified GitHub project, incorporating the configuration changes detailed above, as the SCM source for Jenkins.

Jenkins Credentials

You will also need to configure your AWS credentials.

Jenkins_03.png

The provision_infra.sh script provisions the AWS infrastructure using Terraform. The script also updates Terraform’s remote state. Remember to update the remote state configuration in the script to match your personal settings.

cd tf_env_aws/

terraform remote config \
  -backend=s3 \
  -backend-config="bucket=your_bucket" \
  -backend-config="key=terraform_consul.tfstate" \
  -backend-config="region=your_region"

terraform plan
terraform apply

The Jenkins build output should look similar to the following:

jenkins_12.png

Although the build only takes about 90 seconds to complete, the EC2 instances could take a few extra minutes to complete their Status Checks and be completely ready. The final results in the AWS EC2 Management Console should look as follows:

EC2 Management Console

Note each EC2 instance is running in a different US East 1 Availability Zone.
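
If you are scripting the process rather than watching the console, you can block until the instances pass their status checks using the AWS CLI; a small sketch, assuming the instance Name tags used by the deployment script later in this post:

# look up the three instance IDs by their Name tags, then wait until
# both EC2 status checks pass before deploying Consul
instance_ids=$(aws ec2 describe-instances \
  --filters Name='tag:Name,Values=tf-instance-consul-server-*' \
  --output text --query 'Reservations[*].Instances[*].InstanceId')
aws ec2 wait instance-status-ok --instance-ids ${instance_ids}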

Installing Consul

Once the AWS infrastructure is running and the EC2 instances have completed their Status Checks successfully, we are ready to deploy Consul. In Jenkins, I have created the ‘Deploy Consul Cluster AWS’ project. This project has three tasks, which include: 1) SCM: clone the GitHub project, 2) Bindings: set up the AWS credentials, and 3) Build: run an SSH remote Docker command on each EC2 instance to deploy Consul. The SCM and Bindings tasks are identical to the project above. The project’s Build step looks as follows:

Jenkins_04.png

First, the delete_containers.sh script deletes any previous instances of the Consul containers, which is helpful if you need to re-deploy Consul. The actual script is included in the GitHub project; a minimal sketch of what such a cleanup script might look like, assuming the same Name tags and SSH key used by the deployment script, follows:
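
# remove any existing Consul containers from each of the three EC2 instances
for i in 1 2 3; do
  ec2_public_ip=$(aws ec2 describe-instances \
    --filters Name="tag:Name,Values=tf-instance-consul-server-${i}" \
    --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
  ssh -oStrictHostKeyChecking=no -T \
    -i ~/.ssh/consul_aws_rsa \
    ubuntu@${ec2_public_ip} \
    "docker rm -f consul-server-${i} || echo 'nothing to remove'"
done

Next, the deploy_consul.sh script executes a series of SSH remote Docker commands to install and configure Consul on each EC2 instance: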

export ec2_server1_private_ip=$(aws ec2 describe-instances \
  --filters Name='tag:Name,Values=tf-instance-consul-server-1' \
  --output text --query 'Reservations[*].Instances[*].PrivateIpAddress')
echo "consul-server-1 private ip: ${ec2_server1_private_ip}"
ec2_public_ip=$(aws ec2 describe-instances \
  --filters Name='tag:Name,Values=tf-instance-consul-server-1' \
  --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
consul_server="consul-server-1"

ssh -oStrictHostKeyChecking=no -T \
  -i ~/.ssh/consul_aws_rsa \
  ubuntu@${ec2_public_ip} << EOSSH
  docker run -d \
    --net=host \
    --hostname ${consul_server} \
    --name ${consul_server} \
    --env "SERVICE_IGNORE=true" \
    --env "CONSUL_CLIENT_INTERFACE=eth0" \
    --env "CONSUL_BIND_INTERFACE=eth0" \
    --volume /home/ubuntu/consul/data:/consul/data \
    --publish 8500:8500 \
    consul:latest \
    consul agent -server -ui -client=0.0.0.0 \
      -bootstrap-expect=3 \
      -advertise='{{ GetInterfaceIP "eth0" }}' \
      -data-dir="/consul/data"

  sleep 5
  docker logs consul-server-1
  docker exec -i consul-server-1 consul members
EOSSH
ec2_public_ip=$(aws ec2 describe-instances \
  --filters Name='tag:Name,Values=tf-instance-consul-server-2' \
  --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
consul_server="consul-server-2"

ssh -oStrictHostKeyChecking=no -T \
  -i ~/.ssh/consul_aws_rsa \
  ubuntu@${ec2_public_ip} << EOSSH
  docker run -d \
    --net=host \
    --hostname ${consul_server} \
    --name ${consul_server} \
    --env "SERVICE_IGNORE=true" \
    --env "CONSUL_CLIENT_INTERFACE=eth0" \
    --env "CONSUL_BIND_INTERFACE=eth0" \
    --volume /home/ubuntu/consul/data:/consul/data \
    --publish 8500:8500 \
    consul:latest \
    consul agent -server -ui -client=0.0.0.0 \
      -advertise='{{ GetInterfaceIP "eth0" }}' \
      -retry-join="${ec2_server1_private_ip}" \
      -data-dir="/consul/data"

  sleep 5
  docker logs consul-server-2
  docker exec -i consul-server-2 consul members
EOSSH
ec2_public_ip=$(aws ec2 describe-instances \
  --filters Name='tag:Name,Values=tf-instance-consul-server-3' \
  --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
consul_server="consul-server-3"

ssh -oStrictHostKeyChecking=no -T \
  -i ~/.ssh/consul_aws_rsa \
  ubuntu@${ec2_public_ip} << EOSSH
  docker run -d \
    --net=host \
    --hostname ${consul_server} \
    --name ${consul_server} \
    --env "SERVICE_IGNORE=true" \
    --env "CONSUL_CLIENT_INTERFACE=eth0" \
    --env "CONSUL_BIND_INTERFACE=eth0" \
    --volume /home/ubuntu/consul/data:/consul/data \
    --publish 8500:8500 \
    consul:latest \
    consul agent -server -ui -client=0.0.0.0 \
      -advertise='{{ GetInterfaceIP "eth0" }}' \
      -retry-join="${ec2_server1_private_ip}" \
      -data-dir="/consul/data"

  sleep 5
  docker logs consul-server-3
  docker exec -i consul-server-3 consul members
EOSSH
ec2_public_ip=$(aws ec2 describe-instances \
  --filters Name='tag:Name,Values=tf-instance-consul-server-1' \
  --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
echo " "
echo "*** Consul UI: http://${ec2_public_ip}:8500/ui/ ***"

The entire Jenkins build process only takes about 30 seconds. Afterward, the output from a successful Jenkins build should show that all three Consul server instances are running, have formed a quorum, and have elected a Leader.

Jenkins_05.png

Persisting State

The Consul Docker image exposes VOLUME /consul/data, which is the path where Consul will place its persisted state. Using Terraform’s remote-exec provisioner, we create a directory on each EC2 instance, at /home/ubuntu/consul/data. The docker run command bind-mounts the container’s /consul/data path to the EC2 host’s /home/ubuntu/consul/data directory.

According to Consul, the Consul server container instance will ‘store the client information plus snapshots and data related to the consensus algorithm and other state, like Consul’s key/value store and catalog’ in the /consul/data directory. That container directory is now bind-mounted to the EC2 host, as demonstrated below.

jenkins_15
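
You can confirm this yourself by listing the bind-mounted directory on one of the EC2 hosts; for example, using the same key pair and Name tag as the deployment script:

ec2_public_ip=$(aws ec2 describe-instances \
  --filters Name='tag:Name,Values=tf-instance-consul-server-1' \
  --output text --query 'Reservations[*].Instances[*].PublicIpAddress')

# list Consul's persisted state (raft and serf data) on the host
ssh -i ~/.ssh/consul_aws_rsa ubuntu@${ec2_public_ip} "ls -lR /home/ubuntu/consul/data"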

Accessing Consul

Following a successful deployment, you should be able to use the public URL, displayed in the build output of the ‘Deploy Consul Cluster AWS’ project, to access the Consul UI. Clicking on the Nodes tab in the UI, you should see all three Consul server instances, one per EC2 instance, running and healthy.

Consul UI
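
In addition to the UI, the same information is available from Consul’s HTTP API on port 8500. For example, substituting the public IP or DNS address from the build output:

# check the elected leader, the raft peers, and the registered nodes
curl http://${ec2_public_ip}:8500/v1/status/leader
curl http://${ec2_public_ip}:8500/v1/status/peers
curl http://${ec2_public_ip}:8500/v1/catalog/nodes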

Destroying Infrastructure

When you are finished with the post, you may want to remove the running infrastructure, so you don’t continue to get billed by Amazon. The ‘Destroy Consul Infra AWS’ project destroys all the AWS infrastructure, provisioned as part of this post, in about 60 seconds. The project’s SCM and Bindings tasks are identical to both of the previous projects. The Build step calls the destroy_infra.sh script, which is included in the GitHub project. The script executes the terraform destroy -force command. It will delete all running infrastructure components associated with the post and update Terraform’s remote state.

Jenkins_09
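
Based on the provision script shown earlier, the destroy script should resemble the following sketch (the actual destroy_infra.sh is in the GitHub project; remember to match your own remote state settings):

cd tf_env_aws/

terraform remote config \
  -backend=s3 \
  -backend-config="bucket=your_bucket" \
  -backend-config="key=terraform_consul.tfstate" \
  -backend-config="region=your_region"

terraform destroy -force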

Conclusion

This post has demonstrated how modern DevOps tooling, such as HashiCorp’s Packer and Terraform, makes it easy to build, provision, and manage complex cloud architecture. Using a CI/CD server, such as Jenkins, to securely automate the use of these tools ensures quick and consistent results.


Continuous Integration and Delivery of Microservices using Jenkins CI, Docker Machine, and Docker Compose

Continuously integrate, deploy, and test a RestExpress microservices-based, multi-container, Java EE application in a virtual test environment, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, and VirtualBox.

Docker Machine with Ambassador

Introduction

In the last post, we learned how to use Jenkins CI, Maven, and Docker Compose to take a set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated Docker containers. We built the microservices, Docker images, and Docker containers. We deployed the containers directly onto the Jenkins CI Server machine. Finally, we performed integration tests to ensure the services were functioning as expected, within the containers.

In a more mature continuous delivery model, we would have deployed the running containers to a fresh ‘production-like’ environment to be more accurately tested, not to the Jenkins CI Server host machine. In this post, we will learn how to use the recently released Docker Machine to create a fresh test environment in which to build and host our project’s ten Docker containers. We will couple Docker Machine with Oracle’s VirtualBox, Jenkins CI, and Docker Compose to automatically build and test the services within their containers, within the virtual ‘test’ environment.

Update: All code for this post is available on GitHub, release version v2.1.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v2.1.0’ command).

Docker Machine

If you recall from the last post, after compiling and packaging the microservices, Jenkins was used to deploy the build artifacts to the Virtual-Vehicles Docker GitHub project, as shown below.

Build and Deploy Results

We then used Jenkins, with the Docker CLI and the Docker Compose CLI, to automatically build and test the images and containers. This step will not change; however, we will first use Docker Machine to automatically build a test environment, in which we will build the Docker images and containers.

Docker Machine with Ambassador

I’ve copied and modified the second Jenkins job we used in the last post, as shown below. The new job is titled, ‘Virtual-Vehicles_Docker_Machine’. This will replace the previous job, ‘Virtual-Vehicles_Docker_Compose’.

Jenkins CI Jobs Machine

The first step in the new Jenkins job is to clone the Virtual-Vehicles Docker GitHub repository.

Jenkins CI Machine Config 1

Next, Jenkins runs a bash script to automatically build the test VM with Docker Machine, build the Docker images and containers with Docker Compose within the new VM, and finally test the services.

Jenkins CI Machine Config 2

The bash script executed by Jenkins contains the following commands:

# optional: record current versions of docker apps with each build
docker -v && docker-compose -v && docker-machine -v

# set-up: clean up any previous machine failures
docker-machine stop test || echo "nothing to stop" && \
docker-machine rm test   || echo "nothing to remove"

# use docker-machine to create and configure 'test' environment
# add a -D (debug) if having issues
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"

# use docker-compose to pull and build new images and containers
docker-compose -p jenkins up -d

# optional: list machines, images, and containers
docker-machine ls && docker images && docker ps -a

# wait for containers to fully start before tests fire up
sleep 30

# test the services
sh tests.sh $(docker-machine ip test)

# tear down: stop and remove 'test' environment
docker-machine stop test && docker-machine rm test

As the above script shows, first Jenkins uses the Docker Machine CLI to build and activate the ‘test’ virtual machine, using the VirtualBox driver. As of docker-machine version 0.3.0, the VirtualBox driver requires at least VirtualBox 4.3.28 to be installed.

docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"

Once this step is complete, you will have the following VirtualBox VM created, running, and active.

NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376

Next, Jenkins uses the Docker Compose CLI to execute the project’s Docker Compose YAML file.

docker-compose -p jenkins up -d

The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.

########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker  
# description: Pulls (5) images, builds (5) images, and builds (11) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p <your_project_name_here> up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
   - "8500:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  links:
   - graphite
   - mongoAuthentication
   - "ambassador:nginx"
  expose:
   - "8587"

valet:
  build: valet/
  links:
   - graphite
   - mongoValet
   - "ambassador:nginx"
  expose:
   - "8585"

maintenance:
  build: maintenance/
  links:
   - graphite
   - mongoMaintenance
   - "ambassador:nginx"
  expose:
   - "8583"

vehicle:
  build: vehicle/
  links:
   - graphite
   - mongoVehicle
   - "ambassador:nginx"
  expose:
   - "8581"

nginx:
  build: nginx/
  ports:
   - "80:80"
  links:
   - "ambassador:vehicle"
   - "ambassador:valet"
   - "ambassador:authentication"
   - "ambassador:maintenance"

ambassador:
  image: cpuguy83/docker-grand-ambassador
  volumes:
   - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name jenkins_nginx_1 -name jenkins_authentication_1 -name jenkins_maintenance_1 -name jenkins_valet_1 -name jenkins_vehicle_1"

Running the docker-compose.yaml file will pull these (5) Docker Hub images:

REPOSITORY                           TAG          IMAGE ID
==========                           ===          ========
java                                 8u45-jdk     1f80eb0f8128
nginx                                latest       319d2015d149
mongo                                latest       66b43e3cae49
hopsoft/graphite-statsd              latest       b03e373279e8
cpuguy83/docker-grand-ambassador     latest       c635b1699f78

And, build these (5) Docker images from Dockerfiles:

REPOSITORY                  TAG          IMAGE ID
==========                  ===          ========
jenkins_nginx               latest       0b53a9adb296
jenkins_vehicle             latest       d80f79e605f4
jenkins_valet               latest       cbe8bdf909b8
jenkins_maintenance         latest       15b8a94c00f4
jenkins_authentication      latest       ef0345369079

And, build these (11) Docker containers from the corresponding images:

CONTAINER ID     IMAGE                                NAME
============     =====                                ====
17992acc6542     jenkins_nginx                        jenkins_nginx_1
bcbb2a4b1a7d     jenkins_vehicle                      jenkins_vehicle_1
4ac1ac69f230     mongo:latest                         jenkins_mongoVehicle_1
bcc8b9454103     jenkins_valet                        jenkins_valet_1
7c1794ca7b8c     jenkins_maintenance                  jenkins_maintenance_1
2d0e117fa5fb     jenkins_authentication               jenkins_authentication_1
d9146a1b1d89     hopsoft/graphite-statsd:latest       jenkins_graphite_1
56b34cee9cf3     cpuguy83/docker-grand-ambassador     jenkins_ambassador_1
a72199d51851     mongo:latest                         jenkins_mongoAuthentication_1
307cb2c01cc4     mongo:latest                         jenkins_mongoMaintenance_1
4e0807431479     mongo:latest                         jenkins_mongoValet_1

Since we are connected to the brand new Docker Machine ‘test’ VM, there are no locally cached Docker images. All images required to build the containers must be pulled from Docker Hub. The build time will be 3-4x as long as the last post’s build, which used the cached Docker images on the Jenkins CI machine.

Integration Testing

As in the last post, once the containers are built and configured, we run a series of expanded integration tests to confirm the containers and services are working. One difference: this time, we will pass a parameter to the test bash script file:

sh tests.sh $(docker-machine ip test)

The parameter is the hostname used in the test’s RESTful service calls. The parameter, $(docker-machine ip test), is translated to the IP address of the ‘test’ VM. In our example, 192.168.99.100. If a parameter is not provided, the test script’s hostname variable will use the default value of localhost, ‘hostname=${1-'localhost'}‘.

Another change since the last post: the project now uses the open-source version of Nginx, the free, high-performance HTTP server and reverse proxy, as a pseudo-API gateway. Instead of calling each microservice directly, using their individual ports (e.g. port 8581 for the Vehicle microservice), all traffic is sent through Nginx on the default HTTP port 80, for example:

http://192.168.99.100/vehicles/utils/ping.json
http://192.168.99.100/jwts?apiKey=Z1nXG8JGKwvGlzQgPLwQdndW&secret=ODc4OGNiNjE5ZmI
http://192.168.99.100/vehicles/558f3042e4b0e562c03329ad

Internal traffic between the microservices and MongoDB, and between the microservices and Graphite is still direct, using Docker container linking. Traffic between the microservices and Nginx, in both directions, is handled by an ambassador container, a common pattern. Nginx acts as a reverse proxy for the microservices. Using Nginx brings us closer to a truer production-like experience for testing the services.

#!/bin/sh

########################################################################
#
# title:          Virtual-Vehicles Project Integration Tests
# author:         Gary A. Stafford (https://programmaticponderings.com)
# url:            https://github.com/garystafford/virtual-vehicles-docker  
# description:    Performs integration tests on the Virtual-Vehicles
#                 microservices
# to run:         sh tests.sh
# docker-machine: sh tests.sh $(docker-machine ip test)
#
########################################################################

echo --- Integration Tests ---
echo

### VARIABLES ###
hostname=${1-'localhost'} # use input param or default to localhost
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized
make="Test"
model="Foo"

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}
echo make: ${make}
echo model: ${model}
echo


### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
--url "${url}" \
| grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| sed -e 's/^"//'  -e 's/"$//')
echo apiKey: ${apiKey}
echo


echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
| sed -e 's/^"//'  -e 's/"$//')
echo jwt: ${jwt}
echo


echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"year\": 2015,
    \"make\": \"${make}\",
    \"model\": \"${model}\",
    \"color\": \"White\",
    \"type\": \"Sedan\",
    \"mileage\": 250
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}/vehicles?filter=make::${make}|model::${model}&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| tail -1 \
| sed -e 's/^"//'  -e 's/"$//')
echo vehicle id: ${id}
echo


echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "TEST: POST request should return a new maintenance record in the response body with an 'id'"
url="http://${hostname}/maintenances"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"vehicleId\": \"${id}\",
    \"serviceDateTime\": \"2015-27-00T15:00:00.400Z\",
    \"mileage\": 1000,
    \"type\": \"Test Maintenance\",
    \"notes\": \"This is a test notes.\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo


echo "TEST: POST request should return a new valet transaction in the response body with an 'id'"
url="http://${hostname}/valets"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d "{
    \"vehicleId\": \"${id}\",
    \"dateTimeIn\": \"2015-27-00T15:00:00.400Z\",
    \"parkingLot\": \"Test Parking Ramp\",
    \"parkingSpot\": 10,
    \"notes\": \"This is a test notes.\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

Tear Down

In true continuous integration fashion, once the integration tests have completed, we tear down the project by removing the VirtualBox ‘test’ VM. This also removes all images and containers.

docker-machine stop test && \
docker-machine rm test

Jenkins CI Console Output

Below is an abridged sample of what the Jenkins CI console output will look like from a successful ‘build’.

Started by user anonymous
Building in workspace /var/lib/jenkins/jobs/Virtual-Vehicles_Docker_Machine/workspace
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/garystafford/virtual-vehicles-docker.git # timeout=10
Fetching upstream changes from https://github.com/garystafford/virtual-vehicles-docker.git
> git --version # timeout=10
using GIT_SSH to set credentials
using .gitcredentials to set credentials
> git config --local credential.helper store --file=/tmp/git7588068314920923143.credentials # timeout=10
> git -c core.askpass=true fetch --tags --progress https://github.com/garystafford/virtual-vehicles-docker.git +refs/heads/*:refs/remotes/origin/*
> git config --local --remove-section credential # timeout=10
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision f473249f0f70290b75cb320909af1f57cdaf2aa5 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f f473249f0f70290b75cb320909af1f57cdaf2aa5
> git rev-list f473249f0f70290b75cb320909af1f57cdaf2aa5 # timeout=10
[workspace] $ /bin/sh -xe /tmp/hudson8587699987350884629.sh

+ docker -v
Docker version 1.7.0, build 0baf609
+ docker-compose -v
docker-compose version: 1.3.1
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
+ docker-machine -v
docker-machine version 0.3.0 (0a251fe)

+ docker-machine stop test
+ docker-machine rm test
Successfully removed test

+ docker-machine create --driver virtualbox test
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env test
+ docker-machine env test
+ eval export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/var/lib/jenkins/.docker/machine/machines/test"
export DOCKER_MACHINE_NAME="test"
# Run this command to configure your shell:
# eval "$(docker-machine env test)"
+ export DOCKER_TLS_VERIFY=1
+ export DOCKER_HOST=tcp://192.168.99.100:2376
+ export DOCKER_CERT_PATH=/var/lib/jenkins/.docker/machine/machines/test
+ export DOCKER_MACHINE_NAME=test
+ docker-compose -p jenkins up -d
Pulling mongoValet (mongo:latest)...
latest: Pulling from mongo

...Abridged output...

+ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
+ docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
jenkins_vehicle                    latest              fdd7f9d02ff7        2 seconds ago       837.1 MB
jenkins_valet                      latest              8a592e0fe69a        4 seconds ago       837.1 MB
jenkins_maintenance                latest              5a4a44e136e5        5 seconds ago       837.1 MB
jenkins_authentication             latest              e521e067a701        7 seconds ago       838.7 MB
jenkins_nginx                      latest              085d183df8b4        25 minutes ago      132.8 MB
java                               8u45-jdk            1f80eb0f8128        12 days ago         816.4 MB
nginx                              latest              319d2015d149        12 days ago         132.8 MB
mongo                              latest              66b43e3cae49        12 days ago         260.8 MB
hopsoft/graphite-statsd            latest              b03e373279e8        4 weeks ago         740 MB
cpuguy83/docker-grand-ambassador   latest              c635b1699f78        5 months ago        525.7 MB

+ docker ps -a
CONTAINER ID        IMAGE                              COMMAND                CREATED             STATUS              PORTS                                      NAMES
4ea39fa187bf        jenkins_vehicle                    "java -classpath .:c   2 seconds ago       Up 1 seconds        8581/tcp                                   jenkins_vehicle_1
b248a836546b        mongo:latest                       "/entrypoint.sh mong   3 seconds ago       Up 3 seconds        27017/tcp                                  jenkins_mongoVehicle_1
0c94e6409afc        jenkins_valet                      "java -classpath .:c   4 seconds ago       Up 3 seconds        8585/tcp                                   jenkins_valet_1
657f8432004b        jenkins_maintenance                "java -classpath .:c   5 seconds ago       Up 5 seconds        8583/tcp                                   jenkins_maintenance_1
8ff6de1208e3        jenkins_authentication             "java -classpath .:c   7 seconds ago       Up 6 seconds        8587/tcp                                   jenkins_authentication_1
c799d5f34a1c        hopsoft/graphite-statsd:latest     "/sbin/my_init"        12 minutes ago      Up 12 minutes       2003/tcp, 8125/udp, 0.0.0.0:8500->80/tcp   jenkins_graphite_1
040872881b25        jenkins_nginx                      "nginx -g 'daemon of   25 minutes ago      Up 25 minutes       0.0.0.0:80->80/tcp, 443/tcp                jenkins_nginx_1
c6a2dc726abc        mongo:latest                       "/entrypoint.sh mong   26 minutes ago      Up 26 minutes       27017/tcp                                  jenkins_mongoAuthentication_1
db22a44239f4        mongo:latest                       "/entrypoint.sh mong   26 minutes ago      Up 26 minutes       27017/tcp                                  jenkins_mongoMaintenance_1
d5fd655474ba        cpuguy83/docker-grand-ambassador   "/usr/bin/grand-amba   26 minutes ago      Up 26 minutes                                                  jenkins_ambassador_1
2b46bd6f8cfb        mongo:latest                       "/entrypoint.sh mong   31 minutes ago      Up 31 minutes       27017/tcp                                  jenkins_mongoValet_1

+ sleep 30

+ docker-machine ip test
+ sh tests.sh 192.168.99.100

--- Integration Tests ---

hostname: 192.168.99.100
application: Test API Client 1435585062
secret: NGM5OTI5ODAxMTZ
make: Test
model: Foo

TEST: GET request should return 'true' in the response body
http://192.168.99.100/vehicles/utils/ping.json
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
RESULT: pass

TEST: POST request should return a new client in the response body with an 'id'
http://192.168.99.100/clients
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84    847    225 --:--:-- --:--:-- --:--:--   849
RESULT: pass

SETUP: Get the new client's apiKey for next test
http://192.168.99.100/clients
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84  20482   5461 --:--:-- --:--:-- --:--:-- 21000
apiKey: sv1CA9NdhmXh72NrGKBN3Abb

TEST: GET request should return a new jwt in the response body
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0    686      0 --:--:-- --:--:-- --:--:--   687
RESULT: pass

SETUP: Get a new jwt using the new client for the next test
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0  16843      0 --:--:-- --:--:-- --:--:-- 17076
jwt: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhcGkudmlydHVhbC12ZWhpY2xlcy5jb20iLCJhcGlLZXkiOiJzdjFDQTlOZGhtWGg3Mk5yR0tCTjNBYmIiLCJleHAiOjE0MzU2MjEwNjMsImFpdCI6MTQzNTU4NTA2M30.WVlhIhUcTz6bt3iMVr6MWCPIDd6P0aDZHl_iUd6AgrM

TEST: POST request should return a new vehicle in the response body with an 'id'
http://192.168.99.100/vehicles
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   123    0     0  100   123      0    612 --:--:-- --:--:-- --:--:--   611
100   419    0   296  100   123    649    270 --:--:-- --:--:-- --:--:--   649
RESULT: pass

SETUP: Get id from new vehicle for the next test
http://192.168.99.100/vehicles?filter=make::Test|model::Foo&limit=1
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   377    0   377    0     0   5564      0 --:--:-- --:--:-- --:--:--  5626
vehicle id: 55914a28e4b04658471dc03a

TEST: GET request should return a vehicle in the response body with the requested 'id'
http://192.168.99.100/vehicles/55914a28e4b04658471dc03a
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   296    0   296    0     0   7051      0 --:--:-- --:--:-- --:--:--  7219
RESULT: pass

TEST: POST request should return a new maintenance record in the response body with an 'id'
http://192.168.99.100/maintenances
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
RESULT: pass

TEST: POST request should return a new valet transaction in the response body with an 'id'
http://192.168.99.100/valets
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed

0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   561    0   368  100   193    514    269 --:--:-- --:--:-- --:--:--   514
RESULT: pass

+ docker-machine stop test
+ docker-machine rm test
Successfully removed test

Finished: SUCCESS

Graphite and Statsd

If you’ve chosen to build the Virtual-Vehicles Docker project outside of Jenkins CI, then in addition to running the test script and using applications like Postman to test the Virtual-Vehicles RESTful API, you may also use Graphite and StatsD. RestExpress comes fully configured out of the box with Graphite integration, through the Metrics plugin. The Virtual-Vehicles RESTful API example is configured to use port 8500 to access the Graphite UI, and uses the hopsoft/graphite-statsd Docker image to build the Graphite/StatsD Docker container.

Graphite Dashboard

The Complete Process

The diagram below shows the entire Virtual-Vehicles continuous integration and delivery process, start to finish, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, RestExpress, and VirtualBox.

Docker Machine Full Process


Continuous Integration and Delivery of Microservices using Jenkins CI, Maven, and Docker Compose

Continuously build, test, package and deploy a microservices-based, multi-container, Java EE application using Jenkins CI, Maven, Docker, and Docker Compose

IntroDockerCompose

Previous Posts

In the previous 3-part series, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB, we developed a set of Java EE-based microservices, which formed the Virtual-Vehicles REST API. In Part One of this series, we introduced the concepts of a RESTful API and microservices, using the vehicle-themed Virtual-Vehicles REST API example. In Part Two, we gained a basic understanding of how RestExpress works to build microservices, and discovered how to get the microservices example up and running. Lastly, in Part Three, we explored how to use tools such as Postman, along with the API documentation, to test our microservices.

Introduction

In this post, we will demonstrate how to use Jenkins CI, Maven, and Docker Compose to take our set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated and orchestrated Docker containers. We will build and test the microservices, Docker images, and Docker containers. We will deploy the containers and perform integration tests to ensure the services are functioning as expected, within the containers. The milestones in our process will be:

  1. Continuous Integration: Using Jenkins CI and Maven, automatically compile, test, and package the individual microservices
  2. Deployment: Using Jenkins, automatically deploy the build artifacts to the new Virtual-Vehicles Docker project
  3. Containerization: Using Jenkins and Docker Compose, automatically build the Docker images and containers from the build artifacts and a set of Dockerfiles
  4. Integration Testing: Using Jenkins, perform automated integration tests on the containerized services
  5. Tear Down: Using Jenkins, automatically stop and remove the containers and images

For brevity, we will deploy the containers directly to the Jenkins CI Server, where they were built. In an upcoming post, I will demonstrate how to use the recently released Docker Machine to host the containers within an isolated VM.

Note: All code for this post is available on GitHub, release version v1.0.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v1.0.0’ command).

Build the Microservices

In order to host the Virtual-Vehicles microservices, we must first compile the source code and produce build artifacts. In the case of the Virtual-Vehicles example, the build artifacts are a JAR file and at least one environment-specific properties file. In Part Two of our previous series, we compiled and produced JAR files for our microservices from the command line using Maven.

Build and Deploy

To automatically build our Maven-based microservices project in this post, we will use Jenkins CI and the Jenkins Maven Project Plugin. The Virtual-Vehicles microservices are bundled together into what Maven considers a multi-module project, which is defined by a parent POM referring to one or more sub-modules. Using the concept of project inheritance, Jenkins will compile each of the four microservices from the project’s single parent POM file. Note the four modules at the end of the pom.xml below, corresponding to each microservice.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <name>Virtual-Vehicles API</name>
    <description>Virtual-Vehicles API
        https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Example_3
    </description>
    <url>https://github.com/garystafford/virtual-vehicle-demo</url>
    <groupId>com.example</groupId>
    <artifactId>Virtual-Vehicles-API</artifactId>
    <version>1</version>
    <packaging>pom</packaging>

    <modules>
        <module>Maintenance</module>
        <module>Valet</module>
        <module>Vehicle</module>
        <module>Authentication</module>
    </modules>
</project>

Below is the view of the four individual Maven modules, within the single Jenkins Maven job.

Maven Modules In Jenkins

Each microservice module contains a Maven POM file. The POM files use the Apache Maven Compiler Plugin to compile code, and the Apache Maven Shade Plugin to create ‘uber-jars’ from the compiled code. The Shade plugin provides the capability to package the artifact in an uber-jar, including its dependencies. This will allow us to independently host the service in its own container, without external dependencies. Lastly, using the Apache Maven Resources Plugin, Maven will copy the environment properties files from the source directory to the ‘target’ directory, which contains the JAR file. To accomplish these Maven tasks, all Jenkins needs to do is run a series of Maven life-cycle goals: ‘clean install package validate‘.
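
In other words, the Jenkins Maven job is doing roughly the equivalent of running the following against the parent POM from the command line:

# build all four microservice modules from the single parent POM
mvn clean install package validate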

Once the code is compiled and packaged into uber-jars, Jenkins uses the Artifact Deployer Plugin to deploy the build artifacts from Jenkins’ workspace to a remote location. In our example, we will copy the artifacts to a second GitHub project, from which we will containerize our microservices.

Shown below are the two Jenkins jobs. The first one compiles, packages, and deploys the build artifacts. The second job containerizes the services, databases, and monitoring application.

Jenkins CI Main Page

Shown below are two screen grabs showing how we clone the Virtual-Vehicles GitHub repository and build the project using the main parent pom.xml file. Building the parent POM, in turn, builds all the microservice modules, using their POM files.

Build and Deploy Config 1

Build and Deploy Config 2

Deploy Build Artifacts

Once we have successfully compiled, tested (if we had unit tests with RestExpress), and packaged the build artifacts as uber-jars, we deploy each set of build artifacts to a subfolder within the Virtual-Vehicles Docker GitHub project, using Jenkins’ Artifact Deployer Plugin. Shown below is the deployment configuration for just the Vehicles microservice. This deployment pattern is repeated for each service, within the Jenkins job configuration.

Build and Deploy Config 3

The Jenkins Artifact Deployer Plugin also provides the convenient ability to view and to redeploy the artifacts. Below, you see a list of the microservice artifacts deployed to the Docker project by Jenkins.

Build and Deploy Results

Build and Compose the Containers

IntroDockerCompose

The second Jenkins job clones the Virtual-Vehicles Docker GitHub repository.

Docker Compose Config 1

The second Jenkins job executes commands from the shell prompt. The first commands use the Docker CLI to remove any existing images and containers which might have been left over from previous job failures. The second commands use the Docker Compose CLI to execute the project’s Docker Compose YAML file. The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.

Docker Compose Config 2

# remove all images and containers from this build
docker ps -a --no-trunc  | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc  | grep 'jenkins' \
| awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
# set DOCKER_HOST environment variable
export DOCKER_HOST=tcp://localhost:4243

# record installed version of Docker and Maven with each build
mvn --version && \
docker --version && \
docker-compose --version

# use docker-compose to build new images and containers
docker-compose -p jenkins up -d

# list virtual-vehicles related images
docker images | grep 'jenkins' | awk '{print $0}'

# list all containers
docker ps -a | grep 'jenkins\|mongo_\|graphite' | awk '{print $0}'

########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker  
# description: Builds (4) images, pulls (2) images, and builds (9) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p virtualvehicles up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
   - "8481:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  ports:
   - "8587:8587"
  links:
   - graphite
   - mongoAuthentication

valet:
  build: valet/
  ports:
   - "8585:8585"
  links:
   - graphite
   - mongoValet
   - authentication

maintenance:
  build: maintenance/
  ports:
   - "8583:8583"
  links:
   - graphite
   - mongoMaintenance
   - authentication

vehicle:
  build: vehicle/
  ports:
   - "8581:8581"
  links:
   - graphite
   - mongoVehicle
   - authentication

Running the docker-compose.yaml file produces the following images:

REPOSITORY                TAG        IMAGE ID
==========                ===        ========
jenkins_vehicle           latest     a6ea4dfe7cf5
jenkins_valet             latest     162d3102d43c
jenkins_maintenance       latest     0b6f530cc968
jenkins_authentication    latest     45b50487155e

And, these containers:

CONTAINER ID     IMAGE                              NAME
============     =====                              ====
2b4d5a918f1f     jenkins_vehicle                    jenkins_vehicle_1
492fbd88d267     mongo:latest                       jenkins_mongoVehicle_1
01f410bb1133     jenkins_valet                      jenkins_valet_1
6a63a664c335     jenkins_maintenance                jenkins_maintenance_1
00babf484cf7     jenkins_authentication             jenkins_authentication_1
548a31034c1e     hopsoft/graphite-statsd:latest     jenkins_graphite_1
cdc18bbb51b4     mongo:latest                       jenkins_mongoAuthentication_1
6be5c0558e92     mongo:latest                       jenkins_mongoMaintenance_1
8b71d50a4b4d     mongo:latest                       jenkins_mongoValet_1

Integration Testing

Once the containers have been successfully built and configured, we run a series of integration tests to confirm the services are up and running. We refer to these tests as integration tests because they test the interaction of multiple components. Integration tests were covered in the last post, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 3.

Note the short pause I have inserted before running the tests. Docker Compose does an excellent job of accounting for the required start-up order of the containers to avoid race conditions (see my previous post). However, depending on the speed of the host box, there is still a start-up period for the container’s processes to be up, running, and ready to receive traffic. Apache Log4j 2 and MongoDB startup, in particular, take extra time. I’ve seen the containers take as long as 1-2 minutes on a slow box to fully start. Without the pause, the tests fail with various errors, since the container’s processes are not all running.

Docker Compose Config 3

sleep 15
sh tests.sh -v
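
On a slower box, a simple polling loop is more reliable than a fixed sleep; here is a sketch that waits on the Vehicle service’s ping endpoint (taken from the test script below) before firing the tests:

# wait up to ~2 minutes for the Vehicle service to respond before testing
for attempt in $(seq 1 24); do
  curl -s "http://localhost:8581/vehicles/utils/ping.json" | grep -q true && break
  sleep 5
done
sh tests.sh -v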

The bash-based tests below just scratch the surface as a complete set of integration tests. However, they demonstrate an effective multi-stage testing pattern for handling the complex nature of RESTful service request requirements. The tests build upon each other. After setting up some variables, the tests register a new API client. Then, they use the new client’s API key to obtain a JWT. The tests then use the JWT to authenticate themselves and create a new vehicle. Finally, they use the new vehicle’s id and the JWT to verify the existence of the new vehicle.

Although some may consider testing with bash somewhat primitive, the script demonstrates the effectiveness of curl, grep, sed, and awk, along with regular expressions, for testing our RESTful services.

#!/bin/sh

########################################################################
#
# title:       Virtual-Vehicles Project Integration Tests
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker  
# description: Performs integration tests on the Virtual-Vehicles
#              microservices
# to run:      sh tests.sh -v
#
########################################################################

echo --- Integration Tests ---

### VARIABLES ###
hostname="localhost"
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}


### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}:8581/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
--url "${url}" \
| grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"


echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}:8587/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"


echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}:8587/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
| grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| sed -e 's/^"//'  -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"


echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
--url "${url}" \
| grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
| sed -e 's/^"//'  -e 's/"$//')
echo jwt: ${jwt}


echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}:8581/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
-d '{
    "year": 2015,
    "make": "Test",
    "model": "Foo",
    "color": "White",
    "type": "Sedan",
    "mileage": 250
}' --url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"


echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}:8581/vehicles?filter=make::Test|model::Foo&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' \
| grep -o '[a-zA-Z0-9]\{24\}' \
| tail -1 \
| sed -e 's/^"//'  -e 's/"$//')
echo vehicle id: ${id}


echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}:8581/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
-H "Authorization: Bearer ${jwt}" \
--url "${url}" \
| grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

Since our tests are just a bash script, they can also be run separately from the command line, as in the screen grab below. The output, except for the colored text, is identical to what appears in the Jenkins console output.

Running Integration Tests

Tear Down

Once the integration tests have completed, we ‘tear down’ the project by removing the Virtual-Vehicles images and containers. We simply repeat the first commands we ran at the start of the Jenkins build phase. You could choose to remove the tear-down step, and use this job as a way to simply build and start your multi-container application.

# remove all images and containers from this build
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs --no-run-if-empty docker stop && \
docker ps -a --no-trunc | grep 'jenkins' \
| awk '{print $1}' | xargs --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
| awk '{print $3}' | xargs --no-run-if-empty docker rmi

The Complete Process

The diagram below shows the entire process, start to finish.

Full Process



Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 2 of 2)

Build an automated deployment pipeline for your Java EE applications using leading open-source technologies, including NetBeans, Git, Maven, JUnit, Jenkins, and GlassFish. All source code for this post is available on GitHub.

System Diagram 3a

Introduction

In part 1, Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 1 of 2), we built the first part of our basic deployment pipeline using leading open-source technologies. In part 2, we will use Jenkins CI Server and Oracle GlassFish Application Server to complete our deployment pipeline.

To review, the three main goals of our deployment pipeline are continuous integration, automated testing, and continuous deployment. Our objective is to automatically compile, test, assemble, and deploy our Java EE application to multiple environments, as the project progresses through the software development life cycle (SDLC).

Setting up Git Server

As I mentioned in part 1, as a part of a development team using Git, you would place your project on a remote Git Server. You and your team members would each clone the repository from the Git Server to your local development environments. You and your team would commit your code changes locally, then pull, merge, and push your changes back to the remote Git Server. Jenkins will pull the project’s source code from the Git Server.

In part 1 of this post, we just created a local Git repository. In part 2, we will properly set-up our project on a remote Git Server. First, we need to export our local repository into a new, bare repository on the Git Server. The Git term, ‘bare repository’, refers to a repository that does not contain a working directory. The repository has no working copies of your source files. You only use the bare repository to clone from, pull from, and push to. By convention, the bare repository’s directory name carries a .git extension (i.e. ssh://user@server:/git-repos/myproject.git).

From the root of your remote Git Server repository, execute the following command, substituting the path to your local project. If your Git Server is on a machine separate from your local project repository, you will need to copy the new bare repository to the remote Git Server. This involves a few simple steps, explained in this post, and at git-scm.com.
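
The export is essentially a bare clone of the local project. Below is a minimal sketch, assuming an example local project folder and a ‘/git-repos’ directory on the Git Server; substitute your own paths and repository name.

# create a new, bare repository from the existing local project
git clone --bare /home/user/NetBeansProjects/myproject /git-repos/myproject.git

# if the Git Server is a separate machine, copy the bare repository to it
scp -r /git-repos/myproject.git user@gitserver:/git-repos/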

Export Local Project to New Bare Repository

Once you have created the repository on the remote Git Server, I would recommend you clone the remote repository to your local machine and discard your original local repository from part 1 of the post. You don’t have to do this step, but cloning fresh from the server will make sure Git is working correctly. The screen grabs below illustrate an example of cloning a new repository to my local NetBeans Project folder.

Clone New Bare Server Repository – Screen 1

Clone New Bare Server Repository – Screen 2

Clone New Bare Server Repository – Screen 3

Configuring Jenkins

The diagram below illustrates the deployment pipeline from Git Server to Jenkins to GlassFish in finer detail. It begins with an initial commit to the local Git project repository and ends with the deployment of the project’s WAR file to the GlassFish domain. We will walk through it step-by-step.

System Diagram 3c

Jenkins Plugins

Before we create our new Jenkins Jobs, we need to configure Jenkins properly. You will need a recent version of Jenkins installed, along with the following plugins:

  1. Build With Parameters Plugin
  2. Copy Artifact Plugin
  3. Jenkins GIT plugin (includes Jenkins GIT client plugin)
  4. Jenkins Parameterized Trigger plugin
  5. Maven Integration plugin
  6. Credentials Plugin (optional for use with Git Server if security is enabled)
  7. ThinBackup (optional to install supplied Jenkins jobs configuration files)

Global Security

Jenkins can be configured with or without Global Security. For this post, I have enabled Global Security, as is typical of most development environments. I chose the ‘Jenkins’ own user database’ option for authentication. In larger development environments, authentication would normally be done against LDAP.

Jenkins' Configure Global Security

Configuring Global Security

The user I have set up, ‘jenkins’, will be the user that Git authenticates with when connecting to Jenkins (explained later). Set up your own user and note their API Token. Since Global Security has been enabled, we will need the token later to trigger the Jenkins build from Git. Your user’s unique api token will be different than in the example below.

Jenkins User API Token

Jenkins Jobs

We will set up two Jenkins ‘free-style software project’ jobs, ‘GitMavenGlassFish_Build’ and ‘GitMavenGlassFish_Deploy’. We won’t be using the obvious choice, a ‘maven2/3 project’. If you’re interested, here’s why. The first job, the build job, will be responsible for pulling the source code from the Git Server. The build job, with help from Maven, will compile, test, and assemble the application code. The second job, the deployment job, will pull the artifacts from the build job and deploy them to GlassFish. The build job will trigger the deployment job, once the build job completes successfully. This is explained in detail below.

Why Two Jobs?

Following good modular design and Separation of Concerns (SoC) principles, separating the build from the deployment gains us several advantages, including:

  1. Modularity – The ability to change the deployment methodology or deployment targets, without disrupting the build and test process. For example, we might move the application hosting from GlassFish to WebLogic, or decide to use Ant instead of Maven for deployment tasks. This can happen totally independently of the build and testing processes.
  2. Separation/Isolation – If for any reason we are unable to deploy the artifacts as part of the deployment job, we won’t impact the continuous integration and automated testing processes, which are part of the separate build job.
  3. Support – Support is easier when there are smaller pieces of functionality to troubleshoot and maintain.

In a larger enterprise environment, you would probably encounter further separation of concerns. Unit testing, performance testing, deployment validation, and documentation generation (javadocs) are often handled by separate jobs. Jenkins represents a smaller pipeline within our larger deployment pipeline.

I intentionally left out notification for brevity. At minimum, you would want to be notified when the build or deployment jobs failed. Additionally, with continuous deployment, the deployment would trigger a notification to the stakeholders of that environment, such as the Testers. This lets them know the new software is ready to be tested. Notifications often include a list of bug fixes and feature enhancements that need to be tested. This can easily be pulled from Git into Jenkins and out to the end user.

Both Jenkins job definitions are available as XML files on gist.github.com. Using Jenkins’ ThinBackup Plugin, you can save both gists locally, and then restore them to your Jenkins server. The build job gist is here and the deployment job gist is here. This may save you some configuration time.

Jenkins Build Job

Both the build job and the deployment job require an input parameter. This parameter represents the targeted environment (GlassFish domain) for deployment, such as ‘testing’. How this parameter is passed to Jenkins is discussed later in the Git Hooks section, below.

Reviewing the below screen grab of the build job’s configuration, you will observe the following steps:

  1. Build Request – A build request is received by the job (explained later). The request contains an input parameter indicating the ‘environment’. The parameter must be one of the choices listed in ‘Choices’.
  2. Maven Dependencies – Based on the pom file, Maven retrieves all the required dependencies from the remote Maven repository, if the dependencies are not already contained in the workspace’s local repository. Note the setting ‘Use private Maven repository’ is checked. This creates a local repository for project dependencies within the project’s workspace.
  3. Pull from Git – Jenkins pulls the code from the Git Server using the supplied repository configuration information. Note my Git Server does not require authentication. If it did, we would set-up and use the proper credentials.
  4. Build – Jenkins builds the project using the Maven command ‘clean install -e’. The pom file contains the necessary configuration information.
  5. Unit Test – The above Maven ‘install’ command also calls JUnit to execute the unit tests. The results of these tests are published and displayed as part of the build job’s details.
  6. Assemble WAR – The above Maven ‘install’ command also assembles the project’s WAR file.
  7. Archive Artifacts – Based on the success of the build and unit tests, Jenkins archives specific artifacts needed by the deployment job. Jenkins uses the input parameter in #1 to define which properties file and password file to archive.
  8. Trigger Deployment Job – Based on the success of the build and unit tests, Jenkins triggers the ‘downstream’ deployment job, passing it the same environment parameter.

Jenkins Build Job Configuration

Jenkins Deployment Job

Reviewing the below screen grab of the deployment job’s configuration, you will observe the following steps:

  1. Build Request – A build request is received from the upstream build job. The request contains the input parameter indicating ‘environment’.
  2. Copy Artifacts – Jenkins copies the artifacts from the build job that called the deploy job.
  3. Read Properties – Maven executes the command ‘mvn properties:read-project-properties glassfish:redeploy -e’. The first half of this command instructs Maven to read the appropriate properties file, as indicated by the environment parameter, ‘glassfish.properties.file.argument=${environment}’.
  4. POM – Maven substitutes the key ‘glassfish.properties.file.argument’ in the pom file with the environment value. This tells Maven the name of the properties file, which supplies all the remaining property values to the pom file.
  5. Maven Dependencies – If the dependencies are not already contained in the workspace’s local repository, Maven retrieves all the required dependencies from the remote Maven repositories, based on the pom. Note the setting ‘Use private Maven repository’ is checked in the screen grab below. This option instructs Jenkins to create a local repository for project dependencies within the project’s workspace.
  6. Deployment – The last half of the command in #3 deploys, or more accurately redeploys, the application’s WAR file to GlassFish. The ‘glassfish:redeploy’ goal works only if the WAR file has already been initially deployed to the GlassFish domain using the ‘glassfish:deploy’ goal. For this process, I am assuming the initial deployment was already done directly through the GlassFish Administration Console, NetBeans, or the command line.

Jenkins Deploy Job Configuration

Git Hooks

To achieve continuous integration, we want to automatically build and test our job after each change to our code. We have a number of choices to make this happen. The obvious choice is letting Jenkins poll the Git Server. Although polling would simplify configuration, polling is frowned upon in many environments. Even the creator of Jenkins, Kohsuke Kawaguchi, frowns upon polling in his post, ‘Polling Must Die‘.

Why is polling bad? It adds unnecessary activity and delay. Let’s say Jenkins’ polling frequency is set to every 2 minutes, but you only have an average of 5 pushes to your remote Git Server project repository per day. Based on these stats, in just one day, Jenkins will poll Git 720 times to discover only 5 pushes. That’s 144 times per push. Also, based on the polling frequency, when you do push, you could wait up to 2 minutes for Jenkins to queue the build job. The longer you wait for feedback on your changes, the greater chance your defects could be pulled down by other developers. You should expect immediate and continuous feedback.

A vastly more efficient and configurable method of continuous integration between Git and Jenkins is Git Hooks. Git Hooks allow us to execute scripts based on specific Git actions. In our case, when a developer completes a successful push to the remote Git Server project repository, we want to call Jenkins to build, test, and deploy the modified project code. Using hooks means we only call Jenkins when a successful push is completed. Furthermore, we can be assured Jenkins will immediately queue our request to build and deploy the job when a push occurs.

Post-Receive Hook

There are several types of Git Hooks. They include ‘post-commit’, ‘pre-push’, ‘update’, ‘pre-rebase’, and so forth. I recommend this post on kernel.org for a good explanation of the hook types and their purposes. Git also includes sample hook files inside the ‘hooks’ subdirectory of each new repository’s .git folder.

For our pipeline, we will employ the ‘post-receive’ hook. Whenever a successful push is received by the Git Server’s project repository, the ‘post-receive’ hook will be called, and the script commands contained in the post-receive hook file will be executed. Hooks are language agnostic; they can be written in almost any scripting language, such as Perl, Shell, Bash, or Ruby.

To create the hook, create a new file, ‘post-receive’, in the hooks sub-directory of the Git Server’s project repository. Add the below code to the file. Change the command to match your local file path. Also, change the API Token to match your user’s token from Jenkins. Note the command requires cURL to be installed on the Git Server. If installing cURL is not an option, there are other ways to execute the HTTP POST call from the hook’s script.
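
The hook script itself is only a few lines. Below is a minimal sketch of a ‘post-receive’ hook, assuming Jenkins runs on localhost:8080, the build job is named ‘GitMavenGlassFish_Build’, the build parameter is ‘environment’, and a placeholder API Token; substitute your own host, user, token, and target environment.

#!/bin/sh
# post-receive hook: trigger the parameterized Jenkins build job
# after a successful push to the Git Server's project repository
echo "Triggering Jenkins build from post-receive hook..."
curl -u jenkins:YOUR_API_TOKEN \
"http://localhost:8080/job/GitMavenGlassFish_Build/buildWithParameters?environment=testing"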

NetBeans and Git Hooks

Now some slightly bad news. As with any integration, there are always trade-offs; that is the case with NetBeans and Git. Although NetBeans works well with Git, there are a few features that have not been implemented. Unfortunately, this lack of complete integration affects NetBeans’ ability to make use of Git Hooks. Only after three hours of troubleshooting and research on the Internet did I realize this limitation. The hooks fire fine if a git push command is executed from a command prompt or from within a Git application like Git Gui or Git Bash. However, from NetBeans, Team -> Remote -> Push… does not cause the hooks to be called.

Example Post-Receive Hook - Works from Command Prompt

Post-Receive Hook Working from a Windows Command Prompt

Git Hooks do not work with NetBeans because NetBeans does not use a command-line client for Git. NetBeans uses a pure Java implementation of the Git client, known as JGit. I understand that other IDEs also share this limitation. There are several discussions on StackOverflow and on the NetBeans bug tracking site about the issue and workarounds.

So what does this mean? You can use NetBeans to perform all of your local tasks. However, when it comes time to push your code back to the remote Git Server repository, you must use a command prompt, Bash shell, or a command line based tool. I recommend Git Gui. Git ships with built-in GUI tools, including git-gui and gitk. It can be downloaded from git-scm.com.

Git Gui Graphical User Interface for Git

Push Files Using Git GUI Instead of NetBeans

Pushing changes to the remote Git Server using Git Gui instead of NetBeans may seem inconvenient at first. However, the more advanced your needs become with Git, the more you will find you need the additional functionality of Git Bash, Git Gui, and gitk. Tasks like resetting the branch to a previous revision, compressing the Git repository database, and visualizing repository history, can all be done with tools like Git Gui and gitk. I have Git Gui running when I am working in NetBeans or other IDEs; it becomes second nature.

Using Git Gui and gitk to Examine and Modify the Project Repository

Deploying to GlassFish

At this point, we have configured the Git Server, created the Jenkins build and deploy jobs, and configured our Git hook. We are ready to test our deployment pipeline. First, make sure your GlassFish domains are running. Also, recall we are assuming that an initial deployment of the application has occurred. This might be directly through the GlassFish Administration Console, through NetBeans, or via the command line. Recall, Jenkins will only be executing a re-deploy.

Check and Start GlassFish Domains

To test the system, make an innocuous change to the Project. Commit the change to your local Git repository. Following that, push the change back to the remote Git Server repository using Git Gui. If the hook fired, you will see output to the Git Gui terminal window, echoed from the post-receive hook as it executed its script.

Push with Git Gui Triggering Jenkins Build

The post-receive hook executes the cURL command, which posts an HTTP request to Jenkins via the Jenkins Remote API. You should then observe the Jenkins build job queued and running.

Jenkins Build Job Running

When the build completes, review the Parameters menu option in the left navigation menu. It shows that the environment parameter was passed from the post-receive hook to the build job. The build results window also provides test results, Git Build Data, and the changes pushed to Git that triggered the CI build.

Jenkins Build Job Results

The console output from the build provides a detailed view of the build process. Using the ‘-e’ switch with the Maven command increases the level of output detail. You see the details of Maven copying the required dependencies from the remote repository to the local workspace repository, prior to compilation. You see the unit tests being executed. Finally, you see the WAR file assembled and the required artifacts archived.

Regarding Maven Dependencies, you will only see the dependencies copied on the first build to an empty workspace. Maven does not re-pull dependencies if they already exist in the workspace’s local repository. To see the difference, empty your workspace and build the job, then immediately rebuild the job. Compare the console outputs of both jobs. You will see a significant difference in the Maven dependency activities.

Jenkins Build Job Console Results

Once the build job has completed successfully, you should notice the Jenkins deployment job running, triggered by the build job. When complete, note the detail that lists the exact build job that called the deployment job, and its build number. For example, the upstream build job #45 triggered the downstream deployment job #33. This linkage between upstream and downstream jobs is retained in the job’s history.

As before, review the Parameters menu option in the left navigation menu. It shows that the environment parameter was passed from the post-receive hook to the build job, and then on to the deployment job.

Jenkins Deployment Job Complete

A review of the console output will confirm that the artifacts were copied from the build job and the WAR file was deployed to the ‘testing’ GlassFish domain.

Jenkins Deployment Job Console Output

GlassFish

If the hook fired, and both the Jenkins build and deployment jobs ran successfully, you should observe that the project’s WAR file, containing your recent change, was deployed to the testing GlassFish domain.

Application Installed on GlassFish Server Testing Domain

You can verify this by calling the application’s RESTful ‘resources/helloWorld’ URI from your browser. Repeat the process by changing the output string, committing the change, and pushing. Verify that your change was deployed.
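
You could also verify the deployment from the command line with cURL. A sketch, using placeholder values for the ‘testing’ domain’s HTTP listener port and the application’s context root; substitute your own values.

# placeholders: substitute the 'testing' domain's HTTP port and the
# application's actual context root
curl -X GET "http://localhost:8082/HelloGlassFish/resources/helloWorld"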

Application Running on GlassFish Server Testing Domain

Jenkins Workflows

Using our deployment pipeline, we have two distinct workflow options:

  1. Continuous – Use Git hooks to build, test, and deploy the WAR file to the domain(s) of choice when changes are pushed. Any time a change is pushed, a build, test, and deploy should occur. This would be just for development at first. Once the project enters the testing phase of the SDLC, it would also include deployments to testing.
  2. Semi-Automated – Start the Jenkins build manually in the Jenkins browser-based Administration Console. This is more typical for a release to Production. Most teams are not comfortable extending continuous deployment into Production. Often, a deployment team will deploy the project artifacts in a controlled and staged approach. The Jenkins build and deployment jobs both allow this, along with the ability to provide the environment parameter both jobs need.

Conclusion

In part 1, we learned how to create a simple Java EE web application project in NetBeans using Maven. We learned how to integrate JUnit for unit testing, and how to use Git to manage our source code.

In part 2, we learned how to configure a remote Git Server, how to configure Jenkins CI Server to clone our project from the Git Server, build, test, and assemble it. If the build was successful, we learned how to configure Jenkins to deploy our project to a specific GlassFish domain, based on the project’s stage in the SDLC. We achieved our goals of continuous integration, automated testing, and continuous deployment.

Going Forward

To extend and enhance our deployment pipeline, you might consider adding the following features: 1) further separate the Jenkins jobs by function, 2) add build and deploy notifications, 3) add the ability to deploy to multiple environments simultaneously (i.e. development and testing), 4) add additional testing to confirm the deployment to GlassFish, 5) configure a versioning and naming scheme for the deployed artifacts, and 6) add error handling if a parameter is not received or is not one of the expected values.



Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 1 of 2)

Build an automated deployment pipeline for your Java EE applications using leading open-source technologies, including NetBeans, Git, Maven, JUnit, Jenkins, and GlassFish. All source code for this post is available on GitHub.

System Diagram 3a

Introduction

In my earlier post, Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit, I demonstrated a basic deployment pipeline using leading open-source technologies. In this post, we will demonstrate a similar pipeline, substituting Jenkins CI Server for Hudson, and Oracle’s GlassFish Application Server for WebLogic Server. We will use the same NetBeans Java EE ‘Hello World’ RESTful Web Service sample project.

The three main goals of our deployment pipeline will be continuous integration, automated testing, and continuous deployment. Our objective is to automatically compile, test, assemble, and deploy our Java EE application to multiple environments, as the project progresses through the software development life cycle (SDLC).

Building a reliable deployment pipeline is complex and time-consuming. To make it as easy as possible in this post, I chose NetBeans IDE for development, Git Distributed Version Control System (DVCS) for managing our source code, Jenkins Continuous Integration (CI) Server for build automation, JUnit for automated unit testing, GlassFish for application hosting, and Apache Maven to manage our project’s dependencies. Maven will also manage the build and deployment process to GlassFish, along with Jenkins. The beauty of NetBeans is its out-of-the-box, built-in integration with Git, Maven, JUnit, and GlassFish. Likewise, Jenkins has plugin-based integration with Git, Maven, JUnit, and GlassFish. Also, Maven has plugin-based integration with GlassFish.

Maven is a powerful tool for managing modern software development projects. This post will only draw upon a small part of Maven’s functionality and plug-in architecture extensibility. Specifically, we will use the Maven GlassFish Plugin. According to the Java.net website, which hosts the plug-in project, ‘the Maven GlassFish Plugin is a Maven2 plugin allowing management of GlassFish domains and component deployments from within the Maven build life cycle.’

Requirements

To follow along with this post, I will assume you have recent versions of the following software installed and configured on your Windows OS-based computer (the process is nearly identical for Linux):

  1. NetBeans IDE. Current version: 7.4
  2. JUnit. Current version: 4.11 (included with NetBeans 7.4)
  3. GlassFish Server. Current version: 4.0 (included  with NetBeans 7.4)
  4. Jenkins CI Server. Current version: 1.538
  5. Apache Maven. Current version: 3.1.1
  6. cURL. Current version: 7.33.0
  7. Git with Git Gui and gitk. Current version: 1.8.4.3
  8. Necessary system environment variables (an example of setting these follows this list):
    M2_HOME, M2, JAVA_HOME, GLASSFISH_HOME, and PATH
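
As a sketch, these variables can be set from a Windows command prompt using the setx command, assuming example installation paths; substitute the paths of your own installations, and also ensure the Maven, JDK, and GlassFish bin directories are on the PATH.

setx M2_HOME "C:\apache-maven-3.1.1"
setx M2 "C:\apache-maven-3.1.1\bin"
setx JAVA_HOME "C:\Program Files\Java\jdk1.7.0"
setx GLASSFISH_HOME "C:\glassfish-4.0\glassfish"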

GlassFish Domains

To simulate a simple deployment pipeline, we will create three GlassFish domains, simulating three common software environments, Development, Testing, and Production. A typical software project is promoted through these environments as it moves from development, to testing, and finally release to production. Each environment has distinct stakeholders with specific roles to play in the software development life cycle, including developers, testers, deployment teams, and end-users. Larger-scale, enterprise software development often includes other environments, such as Performance and Staging.

Create the domains from the command line using ‘asadmin’ commands such as the ones below. Note I have a ‘GLASSFISH_HOME’ system environment variable set up. The ports are your choice, but make sure they don’t conflict with existing installations of other applications, such as Jenkins, Tomcat, IIS, WebLogic, and so forth.
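
As a sketch, the three domains might be created as follows, assuming example port numbers; substitute ports that are free on your machine.

asadmin create-domain --adminport 7070 --instanceport 7071 production
asadmin create-domain --adminport 6060 --instanceport 6061 testing
asadmin create-domain --adminport 5050 --instanceport 5051 development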

As part of the creation process, you’re prompted for an admin account and a new password. I kept the ‘admin’ username, but added a new password for each domain created. This password is the same as one used in the separate password files (explained below).

Add the GlassFish domains to NetBeans’ Services -> Server tab, and start them.

Create New GlassFish 4.0 Production Domain – Screen 1

Create New GlassFish 4.0 Production Domain – Screen 2

Create New GlassFish 4.0 Production Domain – Screen 3

Create New GlassFish 4.0 Production Domain – Screen 4

Setting Up the Project

To set up our NetBeans project, you can clone the repository on GitHub or build your own project from scratch and copy the files into the project. I will not spend a lot of time explaining the code since we have used it in earlier posts. This post is about the deployment pipeline system, not the project’s code.

If you choose to create a new project, first, create a new Maven ‘Project from Archetype’. Select the Archetype for a ‘web application using Java EE 7’ (webapp-javaee7).

New Maven Project – Screen 1

New Maven Project – Screen 2

I recommend you create the project inside of your local Git repository folder.

New Maven Project – Screen 3

Maven will execute a series of commands to create the default NetBeans project with dependencies.

Git

As a part of a development team using Git, you place your project on a remote Git Server. You and your team members each clone the repository on the Git Server to your local development environments. You and your team commit your code changes locally, then pull, merge, and push your changes back to the Git Server. Jenkins will pull the project’s source code from the remote Git Server.

In part 2, we will properly set-up our project on the Git Server, exporting our existing repository into a new, bare repository on the Git Server. However, for brevity in part 1 of this post, we will just create a local Git repository. To start, create a new Git repository for the project. In NetBeans, select Team -> Git -> Initialize Repository… Choose the new Maven project folder.

Initialize New Git Repository

The initial view of the Maven project should look like the below screen grabs. Note the icons and the green files show that the project is part of the Git repository.

Initial Projects Tab View of New Maven Project

Initial Files Tab View of New Maven Project

Perform an initial commit of the project to Git to make sure everything is working.

Initial Commit of New Maven Project to Git

Next, copy the supplied HelloWorldResource.java and NameStorageBean.java classes into the project. The package classpath will be refactored by NetBeans. Copy all the remaining files and folders, including the (3) files in the WEB-INF folder, the properties folder with (3) properties files, and the passwords folder with (3) password files.

JUnit

Next, right-click on the NameStorageBean.java class and select Tools -> Create Tests. Replace the contents of the new NameStorageBeanTest.java file’s NameStorageBeanTest class with the contents of the supplied NameStorageBeanTest.java file. These are two very simple unit tests that will show how JUnit provides automated testing capabilities.

Create JUnit Tests – Screen 1

Create JUnit Tests – Screen 2

Project Object Model (POM)

Copy the contents of the supplied pom file into the new pom file. There is a lot of configuration in the supplied pom. It will be easier to copy the supplied pom file’s contents into your project than to try to configure it from scratch.

Basically, beyond the normal boilerplate pom configuration, we have defined (3) properties, (3) dependencies, and (5) build plugins. The three dependencies are junit, jersey-servlet, and javaee-web-api. The five plugins are maven-compiler-plugin, maven-war-plugin, maven-dependency-plugin, properties-maven-plugin, and the maven-glassfish-plugin. Each plugin contains individual plugin-specific configuration. The names of the plugins should be sufficient to explain their primary purposes.

When complete, right-click on the project and do a ‘Build with Dependencies…’. Make sure everything builds. The final view of the project, with all its Maven-managed dependencies should look like the two screen grabs shown below. Make sure to commit all your new code to Git.

Final Projects Tab View of Project

Final Files Tab View of Project

Maven and Properties Files

In part 2, we will be deploying our project to multiple GlassFish domains. Each domain’s configuration is different. We will use Java properties files to store each GlassFish domain’s configuration properties. The ability to use Java properties files with Maven is possible using the Mojo Project’s Properties Maven Plugin. I introduced this plugin in an earlier post, Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit.

Each environment (Development, Testing, Production), represented by a GlassFish domain, has a separate properties file in the project (see the Files Tab view above). The properties files contain configuration values the Maven GlassFish Plugin will need to deploy the project’s WAR file to each GlassFish domain. Since the build and deployment configurations are required by the project, including them in our Git repository and automating their use based on the environment are two best practices.

In our project’s particular workflow, Maven accepts a single argument (‘glassfish.properties.file.argument’), which represents the environment we want to deploy to, such as ‘development’. The property value tells Maven which properties file to read, such as ‘development.properties’. Maven replaces the keys in the pom file with the values from the ‘development.properties’ file.

The properties file also tells Maven the full path to the separate password file, containing the admin user password, such as ‘pwdfile_development’. In an actual production environment, we would store encrypted password files on a secured file path. For simplicity in our example, we have included them unencrypted, within the project’s main directory.

System Diagram 3b

There are other Maven capabilities that also would achieve our deployment goals. For example, you might consider the Maven Release Plugin, as well as look at using Maven Build Profiles.

Testing the Pipeline

Although we have not built the second half of our deployment pipeline yet, we can still test the system at this early stage. All the necessary foundational elements are in place. To test our system, right-click on the Maven Project icon in the Projects tab and select Custom -> Goals… Enter the following Maven Goals: ‘properties:read-project-properties clean install glassfish:redeploy -e’. In the Properties text box, enter the following: ‘glassfish.properties.file.argument=testing’ (see screen grab below). This will execute a number of Maven Goals and associated commands, visible in the Output tab.

With this one simple command, we are asking Maven to 1) read in our Java properties file and password file, 2) clean the project, 3) pull down all our project’s dependencies, 4) compile the project’s code, 5) execute the unit tests with JUnit, 6) assemble the WAR file, and 7) deploy it to the ‘testing’ GlassFish domain using asadmin. The terse nature of the command really demonstrates the power of Maven to manage our project and the deployment pipeline!
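
The same goals can also be run outside of NetBeans, from a command prompt in the project’s root directory. A sketch, assuming the ‘testing’ environment, that Maven is on the PATH, and that the pom references the property as described above:

mvn properties:read-project-properties clean install glassfish:redeploy -Dglassfish.properties.file.argument=testing -e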

Run Maven within NetBeans to Test Pipeline

If successful, you should see a message in the Output tab indicating as much. Reviewing the contents of the Output tab will give you complete insight into the Maven process under the NetBeans hood. We used the ‘-e’ switch with Maven and the ‘Show Debug Output’ option to provide further information about the process. The output contains all calls to Maven and subsequently to asadmin (GlassFish). You can learn a lot about using Maven and asadmin (GlassFish) by studying the Debug Output.

Conclusion

In the first part of this post, we learned how to create a simple Java EE web application project in NetBeans, using Maven. We learned how to integrate JUnit for automated testing, and how to use Git to manage our source code.

In the second half of this post, we will learn how to configure Jenkins CI Server to retrieve our project from the remote Git repository, build, test, and assemble it into a WAR file. If these steps are successful, Jenkins will deploy our project to a GlassFish domain or multiple domains, based on the project’s stage in the software development life cycle. We will demonstrate how to automate Jenkins to achieve true continuous integration and continuous deployment.



Convert VS 2010 Database Project to SSDT and Automate Publishing with Jenkins – Part 3/3

Objectives of 3-Part Series:

Part I: Setting up the Example Database and Visual Studio Projects

  • Setup and configure a new instance of SQL Server 2008 R2
  • Setup and configure a copy of Microsoft’s Adventure Works database
  • Create and configure both a Visual Studio 2010 server project and Visual Studio 2010 database project
  • Test the project’s ability to deploy changes to the database

Part II: Converting the Visual Studio 2010 Database and Server Projects to SSDT

  • Convert the Adventure Works Visual Studio 2010 database and server projects to SSDT projects
  • Create a second Solution configuration and SSDT publish profile for an additional database environment
  • Test the converted database project’s ability to publish changes to multiple database environments

Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins

  • Automate the build and delivery of a sql change script artifact, for any database environment, to a designated release location using a parameterized build.
  • Automate the build and publishing of the SSDT project’s changes directly to any database environment using a parameterized build.

Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins

In this last post, we will use Jenkins to automate the publishing of changes from the Adventure Works SSDT database project to the Adventure Works database. Jenkins, formerly Hudson, is the industry-standard, Java-based, open-source continuous integration server.

Jenkins

If you are unfamiliar with Jenkins, I recommend an earlier post, Automated Deployment to GlassFish Using Jenkins and Ant. That post goes into detail on Jenkins and its associated plug-in architecture. Jenkins’ website provides excellent resources for installing and configuring Jenkins on Windows. For this post, I’ll assume that you have Jenkins installed and running as a Windows Service.

The latest available version of Jenkins, at the time of this post, is 1.476. To follow along with the post, you will need to install and configure the following (4) plug-ins, each of which is used later in the post:

  1. Jenkins MSBuild Plug-in
  2. Jenkins Apache Ant Plug-in
  3. Jenkins Artifact Deployer Plug-in
  4. Jenkins Email Extension Plug-in

User Authentication

In the first two posts, we connected to the Adventure Works database with the ‘aw_dev’ SQL Server user account, using SQL Authentication. This account was used to perform schema comparisons and publish changes from the Visual Studio project. Although SQL Authentication is an acceptable means of accessing SQL Server, Windows Authentication is more common in corporate and enterprise software environments, especially where Microsoft’s Active Directory is used. Windows Authentication with Active Directory (AD) provides an easier, centralized user account security model. It is considered more secure.

With Windows Authentication, we associate a SQL Server Login with an existing Windows user account. The account may be local to the SQL Server or part of an Active Directory domain. For this post, instead of using SQL Authentication and passing the ‘aw_dev’ user’s credentials to SQL Server in the database project’s connection strings, we will switch to Windows Authentication. Using Windows Authentication will allow Jenkins to connect directly to SQL Server.

Setting up the Jenkins Windows User Account

Let’s outline the process of creating a Jenkins Windows user account and using Windows Authentication with our Adventure Works project:

  1. Create a new ‘jenkins’ Windows user account.
  2. Change the Jenkins Windows service Log On account to the ‘jenkins’ Windows account.
  3. Create a new ‘jenkins’ SQL Server Login, associated with the ‘jenkins’ Windows user account, using Windows Authentication.
  4. Provide privileges in SQL Server to the ‘jenkins’ user identical to the ‘aw_dev’ user.
  5. Change the connection strings in the publishing profiles to use Windows Authentication.

First, create the ‘jenkins’ Windows user account on the computer where you have SQL Server and Jenkins installed. If they are on separate computers, then you will need to install the account on both computers, or use Active Directory. For this demonstration, I have both SQL Server and Jenkins installed on the same computer. I gave the ‘jenkins’ user administrative-level rights on my machine, by assigning it to the Administrators group.

Create New Jenkins User

Next, change the ‘Log On’ Windows user account for the Jenkins Windows service to the ‘jenkins’ Windows user account. Restart the Jenkins Windows service to apply the change. If the service fails to restart, it is likely you did not give enough rights to the user. I suggest adding the user to the Administrators group, to check if the problem you are encountering is permissions-related.

Jenkins Windows Service

Set Log On Account for Jenkins Windows Service

Log On Account for Jenkins Windows Service

Log On Account for Jenkins Windows Service Granted

Setting up the Jenkins SQL Server Login

Next, to use Windows Authentication with SQL Server, create a new ‘jenkins’ Login for the Production instance of SQL Server and associate it with the ‘jenkins’ Windows user account. Replicate the ‘aw_dev’ SQL user’s various permissions for the ‘jenkins’ user. The ‘jenkins’ account will be performing similar tasks to ‘aw_dev’, but this time initiated by Jenkins, not Visual Studio. Repeat this process for the Development instance of SQL Server.
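
As a sketch, the Login and database user could also be created from a command prompt with sqlcmd, assuming a machine named ‘DEVOPS’, a SQL Server instance named ‘PRODUCTION’, and the AdventureWorks database; your server, instance, and the additional permissions needed to match ‘aw_dev’ will differ.

rem create the Windows-authenticated Login on the instance
sqlcmd -S DEVOPS\PRODUCTION -E -Q "CREATE LOGIN [DEVOPS\jenkins] FROM WINDOWS;"
rem create the corresponding database user; grant further permissions to match 'aw_dev'
sqlcmd -S DEVOPS\PRODUCTION -E -d AdventureWorks -Q "CREATE USER [DEVOPS\jenkins] FOR LOGIN [DEVOPS\jenkins];"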

Jenkins Login Added to Development Instance

Jenkins User Any Definition on Production Instance

Jenkins User View Definition on Production Instance Database

SSMS View of Jenkins User

Windows Authentication with the Publishing Profile

In Visual Studio, switch the connection strings in the Development and Production publishing profiles in both the server project and database projects to Windows Authentication with Integrated Security. They should look similar to the code below. Substitute your server name and SQL instance for each profile.

Data Source=[SERVER NAME]\[INSTANCE NAME];Integrated Security=True;Pooling=False

An important note here: once you switch the profile’s connection string to Windows Authentication, the Windows user account that you logged into your computer with is the account that Visual Studio will now use to connect to the database. Make sure your Windows user account has at least the same level of permissions as the ‘aw_dev’ and ‘jenkins’ accounts. As a developer, you would likely have greater permissions than these two accounts.

Configuring Jenkins for Delivery of Script to Release

In many production environments, delivering or ‘turning over’ release-ready code to another team for deployment, as opposed to deploying the code directly, is common practice. A developer starts or ‘kicks off a build’ of the job in Jenkins, which generates artifact(s). Artifacts are usually logical collections of deployable code and other associated components and files, constituting the application being built. Artifacts are often separated by type, such as database, web, Windows services, web services, configuration files, and so forth. Each type may be deployed by a different team or to a different location. Our project will only have one artifact to deliver, the sql change script.

This first Jenkins job we create will just generate the change script, which will then be delivered to a specific remote location for later release. We start by creating what Jenkins refers to as a parameterized build job. It allows us to pass parameters to each build of our job. We pass the name of the configuration (same as our environment name) we want our build to target. With this single parameter, ‘TARGET_ENVIRONMENT’, we can use a single Jenkins job to target any environment we have configured by simply passing its name to the build; a very powerful, time-saving feature of Jenkins.

Step 1 – Parameterized Build Parameter

Let’s outline the steps we will configure our Jenkins job with, to deliver a change script for release:

  1. Copy the Solution from its current location to the Jenkins job’s workspace.
  2. Accept the target environment as a parameterized build parameter (ex. ‘Production’ or ‘Development’).
  3. Build the database project and its dependencies based on the environment parameter.
  4. Generate the sql change script based on the environment parameter.
  5. Compress and name the sql change script based on the environment parameter and build id.
  6. Deliver the compressed script artifact to a designated release location for deployment.
  7. Notify release team that the artifact is ready for release.
  8. Archive the build’s artifact(s).

Copy the Solution to Jenkins

I am not using a revision control system, such as TFS or Subversion, for our example. The Adventure Works Solution resides in a file directory on my development machine. To copy the entire Solution from its current location into the job’s workspace, we add a step in the Jenkins job to execute a simple xcopy command. With source control, you would replace the xcopy step with a similar step to retrieve the project from a specific branch within the revision control system, using one of many Jenkins revision control plug-ins.

Step 2 – Copy Solution to Jenkins Workspace

echo 'Copying Adventure Works Solution to Jenkins workspace...'
xcopy "[Path to your Project]\AdventureWorks2008" "%WORKSPACE%" /S /E /H /Y /R /EXCLUDE:[Path to exclude file]\[name of exclude file].txt

echo 'Deleting artifacts from previous builds...'
del "%WORKSPACE%\*_publish.zip" /F /Q

Excluding Solution files from the Jenkins job’s workspace that are unnecessary for the job to succeed is good practice. Excluding files saves time during the xcopy and can make troubleshooting build problems easier. To exclude unneeded Solution files, use the xcopy command’s ‘exclude’ parameter. To use exclude, we must first create an exclude text file, listing the directories we don’t need copied, and call it with the exclude parameter of the xcopy command. Make sure to change the path shown above to reflect the location and name of your exclude file. Here is a list of the directories I chose to exclude. They are either unused by the build, or created as part of the build, for example the sql directories and their subdirectories.

\AdventureWorks2008\sql\
\AdventureWorks2008\Sandbox\
\AdventureWorks2008\_ConversionReport_Files\
\Development\sql\
\Development\Sandbox\
\Development\_ConversionReport_Files\

Build the Solution with Jenkins

Once the Solution’s files are copied into the Jenkins job’s workspace, we perform a build of the database project with an MSBuild build step, using the Jenkins MSBuild Plug-in. Jenkins executes the same MSBuild command Visual Studio would execute to build the project. Jenkins calls MSBuild, which in turn calls the MSBuild ‘Build’ target with parameters that specify the Solution configuration and platform to target.

Generate the Script with Jenkins

After Building the database project, in the same step as the build, we perform a publish of the database project. MSBuild calls the new SSDT’s ‘Publish’ target with parameters that specify the Solution configuration, target platform, publishing profile to use, and whether to only generate a sql change script, or publish the project’s changes directly to the database. In this first example, we are only generating a script. Note the use of the build parameter (%TARGET_ENVIRONMENT%) and environmental variables (%WORKSPACE%) in the MSBuild command. Again, a very powerful feature of Jenkins.

Step 3 – Build and Publish Project

"%WORKSPACE%\AdventureWorks2008\AdventureWorks2008.sqlproj"
/p:Configuration=%TARGET_ENVIRONMENT%
/p:Platform=AnyCPU
/t:Build;Publish
/p:SqlPublishProfilePath="%WORKSPACE%\AdventureWorks2008\%TARGET_ENVIRONMENT%.publish.xml"
/p:UpdateDatabase=False

Compressing Artifacts with Apache Ant

To streamline the delivery, we will add a step to compress the change script using Jenkins Apache Ant Plug-in. Many consider Ant strictly a build tool for Java development. To the contrary, there are many tasks that can be automated for .NET developers with Ant. One particularly nice feature of Ant is its built-in support for zip compression.

Step 4 – Invoke Ant to Compress Artifact

configuration=$TARGET_ENVIRONMENT
buildNo=$BUILD_NUMBER

The Ant plug-in calls Ant, which in turn calls an Ant buildfile, passing it the properties we give it. First, create an Ant buildfile with a single task to zip the change script. To avoid confusion during release, Ant will also append the configuration name and unique Jenkins job build number to the filename. For example, ‘AdventureWorks.publish.sql’ becomes ‘AdventureWorks_Production_123_publish.zip’. This is accomplished by passing the configuration name (Jenkins parameterized build parameter) and the build number (Jenkins environmental variable) as parameters to the buildfile (shown above). The parameters, in the form of key-value pairs, are treated as properties within the buildfile. Using Ant to zip and name the script literally took us one line of Ant code. The contents of the build.xml buildfile are shown below.

<?xml version="1.0" encoding="utf-8"?>
<project name="AdventureWorks2008" basedir="." default="default">
    <description>SSDT Database Project Type ZIP Example</description>
    <!-- Example configuration ant call with parameters:
         ant -Dconfiguration=Development -DbuildNo=123 -->
    <target name="default" description="ZIP sql deployment script">
        <echo>$${basedir}=${basedir}</echo>
        <echo>$${configuration}=${configuration}</echo>
        <echo>$${buildNo}=${buildNo}</echo>
        <zip basedir="AdventureWorks2008/sql/${configuration}"
             destfile="AdventureWorks_${configuration}_${buildNo}_publish.zip"
             includes="*.publish.sql" />
    </target>
</project>

Delivery of Artifacts

Lastly, we add a step to deliver the zipped script artifact to a ‘release’ location. Ideally, another team would retrieve and execute the change script against the database. Delivering the artifact to a remote location is easily accomplished using the Jenkins Artifact Deployer Plug-in. First, if it doesn’t already exist, create the location where you will deliver the scripts. Then, ensure Jenkins has permission to manage the location’s contents. In this example, the ‘release’ location is a shared folder I created. In order for Jenkins to access the ‘release’ location, give the ‘jenkins’ Windows user Read/Write (Change) permissions to the shared folder. With the deployment plug-in, you also have the option to delete the previous artifact(s) each time there is a new deployment, or leave them to accumulate.

Sharing Folder for Released Artifacts

Jenkins User Permissions for Shared Folder

Permissions for Shared Folder

Step 5 – Deploy Artifact to Release Location

Multiple Zipped Artifacts in Release Folder

Email Notification

Lastly, we want to alert the right team that artifacts have been turned over for release. There are many Jenkins plug-ins available to communicate with end-users or other systems. We will use the Jenkins Email Extension Plug-in to email the release team. Configuring dynamic messages to include the parameterized build parameters and Jenkins’ environmental variables is easy with this plug-in. My sample message includes several variables in the body of the message, including target environment, target database, artifact name, and Jenkins build URL.

I had some trouble passing the Jenkins’ parameterized build parameter (‘TARGET_ENVIRONMENT’) to the email plug-in, until I found this post. The format required by the plug-in for the type of variable is a bit obscure as compared to Ant, MSBuild, or other plug-ins.

Step 6 - Email Notification

Artifact: AdventureWorks_${ENV,var="TARGET_ENVIRONMENT"}_${BUILD_NUMBER}_publish.zip
Environment: ${ENV,var="TARGET_ENVIRONMENT"}
Database: AdventureWorks
Jenkins Build URL: ${BUILD_URL}
Please contact Development for questions or problems regarding this release.

Release Request Notification Email Message

Publishing Directly to the Database

As the last demonstration in this series of posts, we will publish the project changes directly to the database. Good news: we have done 95% of the work already. We merely need to copy the Jenkins job we already created, change one step, remove three other steps, and we’re publishing! Start by creating a new Jenkins job by copying the existing script delivery job. Next, drop the Invoke Ant, Artifact Deployer, and Archive Artifacts steps from the job’s configuration. Lastly, set the last parameter of the MSBuild task, ‘UpdateDatabase’, from False to True. That’s it! Instead of creating the script, compressing it, and sending it to a location to be executed later, the changes are generated and applied to the database in a single step.
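
For reference, the MSBuild arguments for the direct-publish job are identical to those of the earlier script-only job, except for the final parameter:

"%WORKSPACE%\AdventureWorks2008\AdventureWorks2008.sqlproj"
/p:Configuration=%TARGET_ENVIRONMENT%
/p:Platform=AnyCPU
/t:Build;Publish
/p:SqlPublishProfilePath="%WORKSPACE%\AdventureWorks2008\%TARGET_ENVIRONMENT%.publish.xml"
/p:UpdateDatabase=True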

Hybrid Solution

If you are not comfortable with the direct approach, there is a middle ground between only generating a script and publishing directly to the database. You can keep a record of the changes made to the database as part of publishing. To do so, change the ‘UpdateDatabase’ parameter to True, and only drop the Artifact Deployer step; leave the Invoke Ant and Archive Artifacts steps. The resulting job generates the change script, publishes the changes to the database, and compresses and archives the script. You now have a record of the changes made to the database.

Conclusion

In this last of three posts, we demonstrated the use of Jenkins and its plug-ins to create three jobs, representing three possible SSDT publishing workflows. Using the parameterized build feature of Jenkins, each job is capable of being executed against any database environment for which we have a configuration and publishing profile defined. Hopefully, one of these three workflows will fit your particular release methodology.

Jenkins SSDT Jobs View


Automated Deployment to GlassFish Using Jenkins CI Server and Apache Ant

Use Jenkins and Apache Ant to compile, assemble, test, and deploy a RESTful web service to GlassFish. All source code for this post is available on GitHub. Note that the GitHub repo reflects updates to the project on 10/31/2013.

Jenkins, formerly Hudson, is the industry-standard, Java-based, open-source continuous integration server. According to its website, Jenkins provides over 400 plug-ins to support building and testing almost any type of project. According to Apache, Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets. This post demonstrates the use of Jenkins and Apache Ant to compile, assemble, unit test, and deploy a Java EE 6 RESTful web service to Oracle’s GlassFish open-source application server.

For the sake of brevity, I have chosen to use the HelloWorld RESTful web service example included with NetBeans. I will use NetBeans to create the project, write the unit tests, and produce an Ant target in the build file. I will not delve deeply into the inner workings of the web service itself since the focus of this post is automation.

System Configuration

This post assumes that you have current versions of NetBeans, JUnit, Jenkins, GlassFish, Ant and Java installed and configured on your Microsoft Windows-based computer. A full installation of NetBeans comes with JUnit, Ant, and GlassFish. At the time of the original post, I was using NetBeans 7.1.2, GlassFish 3.1.2, Jenkins 1.4.6.3, Ant 1.8.3, and JDK 1.7.0_02.

For simplicity, I am using a single development machine for this demonstration, on which all applications are installed. In a true production environment, you would most likely have a distributed configuration, with GlassFish installed on an application server, Jenkins on a build server, and NetBeans on your development machine. For this post, I am also not using a source-code management (SCM) system, also called a version control system (VCS), such as Subversion or Mercurial, to house the project’s source code. Again, in a production environment, your source code would be placed on an SCM/VCS server.

Both GlassFish and Jenkins are configured by default to run on server port 8080. Since I have both applications installed on the same machine, I have changed Jenkins’ default port to another unused port, 9090. Changing Jenkins’ port is easy to do. If you don’t know how, consult this post or similar.
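
On Windows, the port is typically set in the <arguments> element of Jenkins’ service configuration file (jenkins.xml in the Jenkins installation directory). The fragment below is only a sketch of the relevant line; the exact arguments vary by installation, so edit your own file rather than copying this verbatim.

<!-- Change the --httpPort value and restart the Jenkins service;
     other arguments in your jenkins.xml should be left as-is. -->
<arguments>-jar "%BASE%\jenkins.war" --httpPort=9090</arguments>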

NetBeans

First, create a new project in NetBeans by selecting New Project -> Samples -> Java Web Services -> REST: Hello World (Java EE 6), as shown below. Rename the project to HelloGlassFish. When complete, the project, in the Projects tab, should look like the screen-grab below.

New Project View in NetBeans

JUnit

Next, create a unit test using JUnit, the open-source unit-testing framework. Jenkins will eventually run this test each time the project is built. Creating unit tests is easy in NetBeans. Select the ‘NameStorageBean.java’ class object, right-click, and select Tools -> Create JUnit Tests… This will create a default ‘NameStorageBeanTest.java’ class object in a new ‘Test Packages’ directory. Overwrite the contents of NameStorageBeanTest.java with the following code, which creates a single unit test we can use to demonstrate JUnit’s integration with Jenkins. You will also notice new test objects in the Projects tab.
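
A minimal sketch of such a test is shown below. The package name and the getName()/setName() accessors are assumptions based on the NetBeans ‘REST: Hello World’ sample, so adjust them to match the test class NetBeans generates for you.

package helloworld;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class NameStorageBeanTest {

    // Assumes NameStorageBean exposes getName()/setName() and defaults to
    // "World", per the NetBeans sample; adjust if your bean differs.
    @Test
    public void testGetName() {
        NameStorageBean instance = new NameStorageBean();
        assertEquals("World", instance.getName());

        instance.setName("GlassFish");
        assertEquals("GlassFish", instance.getName());
    }
}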

Build the project and run the ‘testGetName’ unit-test to make sure it works correctly and the test passes.

Apache Ant

Next, change to the Files tab. Open the ‘build.xml’ file, as shown below. Also, for later reference, note the contents of the ‘HelloGlassFish.war’ and the location of the ‘pwdfile_domain1’ password file.

New Project File View in NetBeans

Place the following Ant target, entitled ‘jenkins-glassfish-deploy’, into the build.xml file, between the end of the commented section and the closing </project> tag, as shown below.
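
The target is sketched below; the GlassFish host, admin port, admin user, password-file path, and .war location are assumptions based on the defaults discussed in this post, so adjust them to match your domain and project layout.

<target name="jenkins-glassfish-deploy">
    <antcall target="clean"/>
    <antcall target="default"/>
    <antcall target="test"/>
    <!-- Deploy the .war with asadmin; assumes asadmin(.bat) is on the PATH,
         pwdfile_domain1 is in the project root, and the .war is built to
         the project's dist folder. Adjust for your environment. -->
    <exec executable="cmd" failonerror="true">
        <arg value="/c"/>
        <arg value="asadmin"/>
        <arg value="--host=localhost"/>
        <arg value="--port=4848"/>
        <arg value="--user=admin"/>
        <arg value="--passwordfile=pwdfile_domain1"/>
        <arg value="deploy"/>
        <arg value="--force=true"/>
        <arg value="--name=HelloGlassFish"/>
        <arg value="dist/HelloGlassFish.war"/>
    </exec>
</target>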

This is the Ant target Jenkins will use to build, test, and deploy the project. The primary ‘jenkins-glassfish-deploy’ target calls three other Ant targets using the antcall element: clean, default, and test. Each of these targets has dependencies on other Ant targets, which in turn depend on yet other targets, forming a dependency tree. For example, default depends on dist and javadoc, and test depends on the targets that build the .war file. This means that even if you are not using test to execute unit tests, calling the test target will still build the .war file.

The last part of the ‘jenkins-glassfish-deploy’ target is a little different. It is an exec (execute) element, which calls asadmin to deploy the project to GlassFish with a series of GlassFish domain-specific parameters. These parameters include the GlassFish domain’s URL and port, the domain’s administrative user and password account info (found in a password file), the location of the .war file to deploy, and the destination of the .war within GlassFish. Calling asadmin deploy gives you fine control over the details of how the project is deployed to GlassFish.

The password file referenced in the target is a simple text file that stores the password for the user account used to execute the asadmin deploy call. The contents of the file look like this:

AS_ADMIN_PASSWORD=Your_Password_Here

This target could be simplified with the depends attribute. Instead of the three antcall elements, you could simply add depends="clean, default, test" to the target element:
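
For example, the opening of the target would then look like the following, with the exec element from above left unchanged:

<target name="jenkins-glassfish-deploy" depends="clean, default, test">
    <!-- exec element calling asadmin deploy, as shown above -->
</target>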

According to Oracle, the asadmin utility is used to perform any administrative task for GlassFish from the command line. You can use asadmin in place of the GlassFish Administration Console. I am able to call asadmin directly because I have added the path to asadmin.bat to the Windows PATH environment variable. The asadmin.bat file is in the GlassFish bin directory, similar to ‘C:\Program Files\glassfish-3.1.2\glassfish\bin\’.

Jenkins

Switching to Jenkins, create a new Job named HelloGlassFish. In the HelloGlassFish configuration, we need to add two Build steps and one Post-build Action. For the first Build step, since we are not using SCM, we will copy the files from the project in the NetBeans workspace to the Jenkins workspace. To do this, add an ‘Execute Windows batch command’ step with code similar to the snippet below, substituting your own project’s file path. Note that you can use the %WORKSPACE% environment variable as the xcopy destination (see call-out 1 in the screen-grab below). According to Jenkins, this variable represents the absolute path of the directory assigned to the build as a workspace. Jenkins offers many useful variables, accessible to Windows batch scripts.

xcopy "C:\Users\gstaffor\Documents\NetBeansProjects\HelloGlassFish\HelloGlassFish" "%WORKSPACE%" /s /e /h /y

Next, add the second Build step, ‘Invoke Ant’. I assume you already have Ant configured for Jenkins. In the ‘Targets’ text box, enter the Ant target we created in the NetBeans build.xml file, ‘jenkins-glassfish-deploy’ (see call-out 2 in the screen-grab below). If the name of your build file is anything other than the default ‘build.xml’, you will also need to enter the build file name.

Lastly, add the single Post-build Action, ‘Publish JUnit test result report’. This will show us a visual representation of the results of our project’s unit-tests. Input the relative path to your reports from the workspace root. The path should be similar to call-out 3 in the screen-grab, below.

When complete, the HelloGlassFish Job’s configuration should resemble the screen-grab, below.

Jenkins HelloGlassFish Project Configuration

Save and close the configuration. Build the HelloGlassFish Job in Jenkins and make sure it succeeds without error.

GlassFish

Open GlassFish’s browser-based Domain Admin Console, which runs on server port 4848 by default. On the left-hand side of the main window, under ‘Common Tasks’, select the ‘Applications’ node. You should see that the HelloGlassFish application is now deployed to GlassFish. You don’t have to do anything in GlassFish; Jenkins and Ant have taken care of everything.
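
If you prefer the command line, you should also be able to confirm the deployment with asadmin, assuming the default admin port:

REM Lists applications deployed to the domain on the default admin port (4848).
asadmin --port 4848 list-applications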

GlassFish’s browser-based Domain Admin Console

To view the HelloGlassFish application, open a new browser window and direct it to ‘http://localhost:8080/HelloGlassFish/resources/helloWorld’. You should see a ‘Hello World!’ message displayed in your browser’s window. Note, since we only changed the name of the default HelloWorld NetBeans sample project to HelloGlassFish, not the web service’s URI, ‘helloWorld’ is still a required part of the URL path.

Redeploying the Project

Lastly, let’s demonstrate how easily changes to our project can be re-compiled, re-tested, and re-deployed to GlassFish by Jenkins and Ant. Return to the HelloGlassFish project in NetBeans and open the NameStorageBean.java class. Change the value of the ‘name’ field from ‘World’ to ‘GlassFish’ and save the changes. Don’t build or do anything else in NetBeans. Instead, return to Jenkins and build the HelloGlassFish Job again.

Change the NameStorageBean name Field

When the Job has finished building, re-direct your browser back to ‘http://localhost:8080/HelloGlassFish/resources/helloWorld’. You should now see a ‘Hello GlassFish!’ message displayed in your browser’s window instead of the earlier message, ‘Hello World!’. Jenkins has called the Ant target, which in turn re-compiled, re-tested, and re-deployed the modified HelloGlassFish application to GlassFish.

HelloGlassFish RESTful Web Service Demo

Helpful Links
