Posts Tagged Maven
Continuous Integration and Delivery of Microservices using Jenkins CI, Docker Machine, and Docker Compose
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development, Java Development on June 27, 2015
Continuously integrate and deploy and test a RestExpress microservices-based, multi-container, Java EE application to a virtual test environment, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, and VirtualBox.
Introduction
In the last post, we learned how to use Jenkins CI, Maven, and Docker Compose to take a set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated Docker containers. We built the microservices, Docker images, and Docker containers. We deployed the containers directly onto the Jenkins CI Server machine. Finally, we performed integration tests to ensure the services were functioning as expected, within the containers.
In a more mature continuous delivery model, we would have deployed the running containers to a fresh ‘production-like’ environment to be more accurately tested, not the Jenkins CI Server host machine. In this post, we will learn how to use the recently released Docker Machine to create a fresh test environment in which to build and host our project’s ten Docker containers. We will couple Docker Machine with Oracle’s VirtualBox, Jenkins CI, and Docker Compose to automatically build and test the services within their containers, within the virtual ‘test’ environment.
Update: All code for this post is available on GitHub, release version v2.1.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v2.1.0’ command).
Docker Machine
If you recall from the last post, after compiling and packaging the microservices, Jenkins was used to deploy the build artifacts to the Virtual-Vehicles Docker GitHub project, as shown below.
We then used Jenkins, with the Docker CLI and the Docker Compose CLI, to automatically build and test the images and containers. This step will not change; however, first we will use Docker Machine to automatically build a test environment, in which we will build the Docker images and containers.
I’ve copied and modified the second Jenkins job we used in the last post, as shown below. The new job is titled, ‘Virtual-Vehicles_Docker_Machine’. This will replace the previous job, ‘Virtual-Vehicles_Docker_Compose’.
The first step in the new Jenkins job is to clone the Virtual-Vehicles Docker GitHub repository.
Next, Jenkins runs a bash script to automatically build the test VM with Docker Machine, build the Docker images and containers with Docker Compose within the new VM, and finally test the services.
The bash script executed by Jenkins contains the following commands:
# optional: record current versions of docker apps with each build
docker -v && docker-compose -v && docker-machine -v

# set-up: clean up any previous machine failures
docker-machine stop test || echo "nothing to stop" && \
docker-machine rm test   || echo "nothing to remove"

# use docker-machine to create and configure 'test' environment
# add a -D (debug) if having issues
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"

# use docker-compose to pull and build new images and containers
docker-compose -p jenkins up -d

# optional: list machines, images, and containers
docker-machine ls && docker images && docker ps -a

# wait for containers to fully start before tests fire up
sleep 30

# test the services
sh tests.sh $(docker-machine ip test)

# tear down: stop and remove 'test' environment
docker-machine stop test && docker-machine rm test
As the above script shows, first Jenkins uses the Docker Machine CLI to build and activate the ‘test’ virtual machine, using the VirtualBox driver. As of docker-machine version 0.3.0, the VirtualBox driver requires at least VirtualBox 4.3.28 to be installed.
docker-machine create --driver virtualbox test
eval "$(docker-machine env test)"
Once this step is complete, you will have the following VirtualBox VM created, running, and active.
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
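Once the machine is active, the Docker client on the Jenkins host transparently targets the Docker daemon running inside the new VM. A quick, hedged sanity check (exact output will vary):

# confirm the Docker client is now pointed at the 'test' VM's daemon
docker-machine active   # should print: test
docker info             # details now come from the daemon inside the VM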
Next, Jenkins uses the Docker Compose CLI to execute the project’s Docker Compose YAML file.
docker-compose -p jenkins up -d
The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.
########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Pulls (5) images, builds (5) images, and builds (11) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p <your_project_name_here> up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
    - "8500:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  links:
    - graphite
    - mongoAuthentication
    - "ambassador:nginx"
  expose:
    - "8587"

valet:
  build: valet/
  links:
    - graphite
    - mongoValet
    - "ambassador:nginx"
  expose:
    - "8585"

maintenance:
  build: maintenance/
  links:
    - graphite
    - mongoMaintenance
    - "ambassador:nginx"
  expose:
    - "8583"

vehicle:
  build: vehicle/
  links:
    - graphite
    - mongoVehicle
    - "ambassador:nginx"
  expose:
    - "8581"

nginx:
  build: nginx/
  ports:
    - "80:80"
  links:
    - "ambassador:vehicle"
    - "ambassador:valet"
    - "ambassador:authentication"
    - "ambassador:maintenance"

ambassador:
  image: cpuguy83/docker-grand-ambassador
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name jenkins_nginx_1 -name jenkins_authentication_1 -name jenkins_maintenance_1 -name jenkins_valet_1 -name jenkins_vehicle_1"
Running the docker-compose.yaml file will pull these (5) Docker Hub images:

REPOSITORY                         TAG        IMAGE ID
==========                         ===        ========
java                               8u45-jdk   1f80eb0f8128
nginx                              latest     319d2015d149
mongo                              latest     66b43e3cae49
hopsoft/graphite-statsd            latest     b03e373279e8
cpuguy83/docker-grand-ambassador   latest     c635b1699f78

And build these (5) Docker images from Dockerfiles:

REPOSITORY               TAG      IMAGE ID
==========               ===      ========
jenkins_nginx            latest   0b53a9adb296
jenkins_vehicle          latest   d80f79e605f4
jenkins_valet            latest   cbe8bdf909b8
jenkins_maintenance      latest   15b8a94c00f4
jenkins_authentication   latest   ef0345369079

And build these (11) Docker containers from the corresponding images:

CONTAINER ID   IMAGE                              NAME
============   =====                              ====
17992acc6542   jenkins_nginx                      jenkins_nginx_1
bcbb2a4b1a7d   jenkins_vehicle                    jenkins_vehicle_1
4ac1ac69f230   mongo:latest                       jenkins_mongoVehicle_1
bcc8b9454103   jenkins_valet                      jenkins_valet_1
7c1794ca7b8c   jenkins_maintenance                jenkins_maintenance_1
2d0e117fa5fb   jenkins_authentication             jenkins_authentication_1
d9146a1b1d89   hopsoft/graphite-statsd:latest     jenkins_graphite_1
56b34cee9cf3   cpuguy83/docker-grand-ambassador   jenkins_ambassador_1
a72199d51851   mongo:latest                       jenkins_mongoAuthentication_1
307cb2c01cc4   mongo:latest                       jenkins_mongoMaintenance_1
4e0807431479   mongo:latest                       jenkins_mongoValet_1
Since we are connected to the brand new Docker Machine ‘test’ VM, there are no locally cached Docker images. All images required to build the containers must be pulled from Docker Hub. The build time will be 3-4x as long as the last post’s build, which used the cached Docker images on the Jenkins CI machine.
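Since image pulls dominate the build time on a fresh VM, it can be useful to time the compose step when tuning the job; a trivial sketch:

# quantify the cold-cache penalty of pulling all images into the new VM
time docker-compose -p jenkins up -d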
Integration Testing
As in the last post, once the containers are built and configured, we run a series of expanded integration tests to confirm the containers and services are working. One difference this time: we pass a parameter to the test bash script file:
sh tests.sh $(docker-machine ip test)
The parameter is the hostname used in the test's RESTful service calls. The parameter, $(docker-machine ip test), is translated to the IP address of the 'test' VM, 192.168.99.100 in our example. If a parameter is not provided, the test script's hostname variable will use the default value of localhost: hostname=${1-'localhost'}.
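The ${1-'localhost'} syntax is standard shell default-parameter expansion; here is a minimal, self-contained illustration:

#!/bin/sh
# ${1-'localhost'} expands to the first positional parameter, if one was
# passed to the script, otherwise to the literal default 'localhost'
hostname=${1-'localhost'}
echo "testing against: ${hostname}"

# sh demo.sh                 -> testing against: localhost
# sh demo.sh 192.168.99.100  -> testing against: 192.168.99.100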
Another change since the last post: the project now uses the open-source version of Nginx, the free, high-performance HTTP server and reverse proxy, as a pseudo-API gateway. Instead of calling each microservice directly, using its individual port (e.g., port 8581 for the Vehicle microservice), all traffic is sent through Nginx on the default HTTP port 80, for example:
http://192.168.99.100/vehicles/utils/ping.json
http://192.168.99.100/jwts?apiKey=Z1nXG8JGKwvGlzQgPLwQdndW&secret=ODc4OGNiNjE5ZmI
http://192.168.99.100/vehicles/558f3042e4b0e562c03329ad
Internal traffic between the microservices and MongoDB, and between the microservices and Graphite, is still direct, using Docker container linking. Traffic between the microservices and Nginx, in both directions, is handled by an ambassador container, a common pattern. Nginx acts as a reverse proxy for the microservices. Using Nginx brings us closer to a truer production-like experience for testing the services.
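Once the 'test' VM is up, a quick way to verify the gateway wiring from the Jenkins host is to reuse the ping endpoint shown above (a sketch, not part of the project's test script):

# ping the Vehicle service through the Nginx gateway on port 80
curl -s "http://$(docker-machine ip test)/vehicles/utils/ping.json"
# the individual service ports (8581, 8583, 8585, 8587) are only 'expose'd
# to linked containers in the compose file, so they are not published to the host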
#!/bin/sh

########################################################################
#
# title:          Virtual-Vehicles Project Integration Tests
# author:         Gary A. Stafford (https://programmaticponderings.com)
# url:            https://github.com/garystafford/virtual-vehicles-docker
# description:    Performs integration tests on the Virtual-Vehicles
#                 microservices
# to run:         sh tests.sh
# docker-machine: sh tests.sh $(docker-machine ip test)
#
########################################################################

echo --- Integration Tests ---
echo

### VARIABLES ###
hostname=${1-'localhost'} # use input param or default to localhost
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized
make="Test"
model="Foo"

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}
echo make: ${make}
echo model: ${model}
echo

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
  --url "${url}" \
  | grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
  | grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}
echo

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  -d "{
    \"year\": 2015,
    \"make\": \"${make}\",
    \"model\": \"${model}\",
    \"color\": \"White\",
    \"type\": \"Sedan\",
    \"mileage\": 250
}" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}/vehicles?filter=make::${make}|model::${model}&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | tail -1 \
  | sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}
echo

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new maintenance record in the response body with an 'id'"
url="http://${hostname}/maintenances"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  -d "{
    \"vehicleId\": \"${id}\",
    \"serviceDateTime\": \"2015-06-27T15:00:00.400Z\",
    \"mileage\": 1000,
    \"type\": \"Test Maintenance\",
    \"notes\": \"This is a test note.\"
}" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo

echo "TEST: POST request should return a new valet transaction in the response body with an 'id'"
url="http://${hostname}/valets"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  -d "{
    \"vehicleId\": \"${id}\",
    \"dateTimeIn\": \"2015-06-27T15:00:00.400Z\",
    \"parkingLot\": \"Test Parking Ramp\",
    \"parkingSpot\": 10,
    \"notes\": \"This is a test note.\"
}" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
echo
Tear Down
In true continuous integration fashion, once the integration tests have completed, we tear down the project by removing the VirtualBox 'test' VM. This also removes all images and containers.
docker-machine stop test && \
docker-machine rm test
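One caveat with this approach: because Jenkins runs the script with sh -xe (visible in the console output below), a failing test aborts the script before the teardown commands run, leaving the 'test' VM behind. A hedged refinement is to trap EXIT at the top of the build script, so teardown always runs:

# sketch: guarantee VM teardown even when a test step fails mid-script
cleanup() {
    docker-machine stop test || echo "nothing to stop"
    docker-machine rm test   || echo "nothing to remove"
}
trap cleanup EXIT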
Jenkins CI Console Output
Below is an abridged sample of what the Jenkins CI console output will look like from a successful ‘build’.
Started by user anonymous
Building in workspace /var/lib/jenkins/jobs/Virtual-Vehicles_Docker_Machine/workspace
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/garystafford/virtual-vehicles-docker.git # timeout=10
Fetching upstream changes from https://github.com/garystafford/virtual-vehicles-docker.git
> git --version # timeout=10
using GIT_SSH to set credentials
using .gitcredentials to set credentials
> git config --local credential.helper store --file=/tmp/git7588068314920923143.credentials # timeout=10
> git -c core.askpass=true fetch --tags --progress https://github.com/garystafford/virtual-vehicles-docker.git +refs/heads/*:refs/remotes/origin/*
> git config --local --remove-section credential # timeout=10
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision f473249f0f70290b75cb320909af1f57cdaf2aa5 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f f473249f0f70290b75cb320909af1f57cdaf2aa5
> git rev-list f473249f0f70290b75cb320909af1f57cdaf2aa5 # timeout=10
[workspace] $ /bin/sh -xe /tmp/hudson8587699987350884629.sh
+ docker -v
Docker version 1.7.0, build 0baf609
+ docker-compose -v
docker-compose version: 1.3.1
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
+ docker-machine -v
docker-machine version 0.3.0 (0a251fe)
+ docker-machine stop test
+ docker-machine rm test
Successfully removed test
+ docker-machine create --driver virtualbox test
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env test
+ docker-machine env test
+ eval export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/var/lib/jenkins/.docker/machine/machines/test"
export DOCKER_MACHINE_NAME="test"
# Run this command to configure your shell:
# eval "$(docker-machine env test)"
+ export DOCKER_TLS_VERIFY=1
+ export DOCKER_HOST=tcp://192.168.99.100:2376
+ export DOCKER_CERT_PATH=/var/lib/jenkins/.docker/machine/machines/test
+ export DOCKER_MACHINE_NAME=test
+ docker-compose -p jenkins up -d
Pulling mongoValet (mongo:latest)...
latest: Pulling from mongo

...Abridged output...

+ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
test   *        virtualbox   Running   tcp://192.168.99.100:2376
+ docker images
REPOSITORY                         TAG        IMAGE ID       CREATED          VIRTUAL SIZE
jenkins_vehicle                    latest     fdd7f9d02ff7   2 seconds ago    837.1 MB
jenkins_valet                      latest     8a592e0fe69a   4 seconds ago    837.1 MB
jenkins_maintenance                latest     5a4a44e136e5   5 seconds ago    837.1 MB
jenkins_authentication             latest     e521e067a701   7 seconds ago    838.7 MB
jenkins_nginx                      latest     085d183df8b4   25 minutes ago   132.8 MB
java                               8u45-jdk   1f80eb0f8128   12 days ago      816.4 MB
nginx                              latest     319d2015d149   12 days ago      132.8 MB
mongo                              latest     66b43e3cae49   12 days ago      260.8 MB
hopsoft/graphite-statsd            latest     b03e373279e8   4 weeks ago      740 MB
cpuguy83/docker-grand-ambassador   latest     c635b1699f78   5 months ago     525.7 MB
+ docker ps -a
CONTAINER ID   IMAGE                              COMMAND                CREATED          STATUS          PORTS                                      NAMES
4ea39fa187bf   jenkins_vehicle                    "java -classpath .:c   2 seconds ago    Up 1 seconds    8581/tcp                                   jenkins_vehicle_1
b248a836546b   mongo:latest                       "/entrypoint.sh mong   3 seconds ago    Up 3 seconds    27017/tcp                                  jenkins_mongoVehicle_1
0c94e6409afc   jenkins_valet                      "java -classpath .:c   4 seconds ago    Up 3 seconds    8585/tcp                                   jenkins_valet_1
657f8432004b   jenkins_maintenance                "java -classpath .:c   5 seconds ago    Up 5 seconds    8583/tcp                                   jenkins_maintenance_1
8ff6de1208e3   jenkins_authentication             "java -classpath .:c   7 seconds ago    Up 6 seconds    8587/tcp                                   jenkins_authentication_1
c799d5f34a1c   hopsoft/graphite-statsd:latest     "/sbin/my_init"        12 minutes ago   Up 12 minutes   2003/tcp, 8125/udp, 0.0.0.0:8500->80/tcp   jenkins_graphite_1
040872881b25   jenkins_nginx                      "nginx -g 'daemon of   25 minutes ago   Up 25 minutes   0.0.0.0:80->80/tcp, 443/tcp                jenkins_nginx_1
c6a2dc726abc   mongo:latest                       "/entrypoint.sh mong   26 minutes ago   Up 26 minutes   27017/tcp                                  jenkins_mongoAuthentication_1
db22a44239f4   mongo:latest                       "/entrypoint.sh mong   26 minutes ago   Up 26 minutes   27017/tcp                                  jenkins_mongoMaintenance_1
d5fd655474ba   cpuguy83/docker-grand-ambassador   "/usr/bin/grand-amba   26 minutes ago   Up 26 minutes                                              jenkins_ambassador_1
2b46bd6f8cfb   mongo:latest                       "/entrypoint.sh mong   31 minutes ago   Up 31 minutes   27017/tcp                                  jenkins_mongoValet_1
+ sleep 30
+ docker-machine ip test
+ sh tests.sh 192.168.99.100

--- Integration Tests ---

hostname: 192.168.99.100
application: Test API Client 1435585062
secret: NGM5OTI5ODAxMTZ
make: Test
model: Foo

TEST: GET request should return 'true' in the response body
http://192.168.99.100/vehicles/utils/ping.json
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
100     4    0     4    0     0     26      0 --:--:-- --:--:-- --:--:--    25
RESULT: pass

TEST: POST request should return a new client in the response body with an 'id'
http://192.168.99.100/clients
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84    847    225 --:--:-- --:--:-- --:--:--   849
RESULT: pass

SETUP: Get the new client's apiKey for next test
http://192.168.99.100/clients
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   399    0   315  100    84  20482   5461 --:--:-- --:--:-- --:--:-- 21000
apiKey: sv1CA9NdhmXh72NrGKBN3Abb

TEST: GET request should return a new jwt in the response body
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0    686      0 --:--:-- --:--:-- --:--:--   687
RESULT: pass

SETUP: Get a new jwt using the new client for the next test
http://192.168.99.100/jwts?apiKey=sv1CA9NdhmXh72NrGKBN3Abb&secret=NGM5OTI5ODAxMTZ
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0   222    0     0  16843      0 --:--:-- --:--:-- --:--:-- 17076
jwt: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhcGkudmlydHVhbC12ZWhpY2xlcy5jb20iLCJhcGlLZXkiOiJzdjFDQTlOZGhtWGg3Mk5yR0tCTjNBYmIiLCJleHAiOjE0MzU2MjEwNjMsImFpdCI6MTQzNTU4NTA2M30.WVlhIhUcTz6bt3iMVr6MWCPIDd6P0aDZHl_iUd6AgrM

TEST: POST request should return a new vehicle in the response body with an 'id'
http://192.168.99.100/vehicles
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   123    0     0  100   123      0    612 --:--:-- --:--:-- --:--:--   611
100   419    0   296  100   123    649    270 --:--:-- --:--:-- --:--:--   649
RESULT: pass

SETUP: Get id from new vehicle for the next test
http://192.168.99.100/vehicles?filter=make::Test|model::Foo&limit=1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   377    0   377    0     0   5564      0 --:--:-- --:--:-- --:--:--  5626
vehicle id: 55914a28e4b04658471dc03a

TEST: GET request should return a vehicle in the response body with the requested 'id'
http://192.168.99.100/vehicles/55914a28e4b04658471dc03a
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   296    0   296    0     0   7051      0 --:--:-- --:--:-- --:--:--  7219
RESULT: pass

TEST: POST request should return a new maintenance record in the response body with an 'id'
http://192.168.99.100/maintenances
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
100   565    0   376  100   189    506    254 --:--:-- --:--:-- --:--:--   506
RESULT: pass

TEST: POST request should return a new valet transaction in the response body with an 'id'
http://192.168.99.100/valets
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   561    0   368  100   193    514    269 --:--:-- --:--:-- --:--:--   514
RESULT: pass

+ docker-machine stop test
+ docker-machine rm test
Successfully removed test
Finished: SUCCESS
Graphite and Statsd
If you've chosen to build the Virtual-Vehicles Docker project outside of Jenkins CI, then in addition to running the test script and using applications like Postman to test the Virtual-Vehicles RESTful API, you may also use Graphite and StatsD. RestExpress comes fully configured out of the box with Graphite integration, through the Metrics plugin. The Virtual-Vehicles RESTful API example is configured to use port 8500 to access the Graphite UI, and uses the hopsoft/graphite-statsd Docker image to build the Graphite/StatsD Docker container.
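Since the compose file maps the Graphite container's port 80 to host port 8500, the Graphite UI is reachable from the Docker host once the containers are up; a quick, hedged check:

# confirm the Graphite web UI is responding (or open the URL in a browser);
# substitute localhost for the docker-machine IP if not using Docker Machine
curl -I "http://$(docker-machine ip test):8500"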
The Complete Process
The diagram below shows the entire Virtual-Vehicles continuous integration and delivery process, start to finish, using Docker, Docker Hub, Docker Machine, Docker Compose, Jenkins CI, Maven, RestExpress, and VirtualBox.
Continuous Integration and Delivery of Microservices using Jenkins CI, Maven, and Docker Compose
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps, Enterprise Software Development on June 22, 2015
Continuously build, test, package and deploy a microservices-based, multi-container, Java EE application using Jenkins CI, Maven, Docker, and Docker Compose
Previous Posts
In the previous 3-part series, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB, we developed a set of Java EE-based microservices, which formed the Virtual-Vehicles REST API. In Part One of that series, we introduced the concepts of a RESTful API and microservices, using the vehicle-themed Virtual-Vehicles REST API example. In Part Two, we gained a basic understanding of how RestExpress works to build microservices, and discovered how to get the microservices example up and running. Lastly, in Part Three, we explored how to use tools such as Postman, along with the API documentation, to test our microservices.
Introduction
In this post, we will demonstrate how to use Jenkins CI, Maven, and Docker Compose to take our set of microservices all the way from source control on GitHub, to a fully tested and running set of integrated and orchestrated Docker containers. We will build and test the microservices, Docker images, and Docker containers. We will deploy the containers and perform integration tests to ensure the services are functioning as expected, within the containers. The milestones in our process will be:
- Continuous Integration: Using Jenkins CI and Maven, automatically compile, test, and package the individual microservices
- Deployment: Using Jenkins, automatically deploy the build artifacts to the new Virtual-Vehicles Docker project
- Containerization: Using Jenkins and Docker Compose, automatically build the Docker images and containers from the build artifacts and a set of Dockerfiles
- Integration Testing: Using Jenkins, perform automated integration tests on the containerized services
- Tear Down: Using Jenkins, automatically stop and remove the containers and images
For brevity, we will deploy the containers directly to the Jenkins CI Server, where they were built. In an upcoming post, I will demonstrate how to use the recently released Docker Machine to host the containers within an isolated VM.
Note: All code for this post is available on GitHub, release version v1.0.0 on the ‘master’ branch (after running git clone …, run a ‘git checkout tags/v1.0.0’ command).
Build the Microservices
In order to host the Virtual-Vehicles microservices, we must first compile the source code and produce build artifacts. In the case of the Virtual-Vehicles example, the build artifacts are a JAR file and at least one environment-specific properties file. In Part Two of our previous series, we compiled and produced JAR files for our microservices from the command line using Maven.
To automatically build our Maven-based microservices project in this post, we will use Jenkins CI and the Jenkins Maven Project Plugin. The Virtual-Vehicles microservices are bundled together into what Maven considers a multi-module project, which is defined by a parent POM referring to one or more sub-modules. Using the concept of project inheritance, Jenkins will compile each of the four microservices from the project's single parent POM file. Note the four modules at the end of the pom.xml below, corresponding to each microservice.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <name>Virtual-Vehicles API</name>
    <description>Virtual-Vehicles API
        https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Example_3
    </description>
    <url>https://github.com/garystafford/virtual-vehicle-demo</url>
    <groupId>com.example</groupId>
    <artifactId>Virtual-Vehicles-API</artifactId>
    <version>1</version>
    <packaging>pom</packaging>

    <modules>
        <module>Maintenance</module>
        <module>Valet</module>
        <module>Vehicle</module>
        <module>Authentication</module>
    </modules>
</project>
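Outside of Jenkins, the same multi-module build can be run from a terminal against the parent POM; Maven's reactor then builds each of the four modules. A sketch, using the repository URL from the POM above and the same life-cycle goals the Jenkins job runs:

# clone the project and build all four microservice modules from the parent POM
git clone https://github.com/garystafford/virtual-vehicle-demo.git
cd virtual-vehicle-demo
mvn clean install package validate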
Below is the view of the four individual Maven modules, within the single Jenkins Maven job.
Each microservice module contains a Maven POM file. The POM files use the Apache Maven Compiler Plugin to compile code, and the Apache Maven Shade Plugin to create 'uber-jars' from the compiled code. The Shade plugin provides the capability to package the artifact in an uber-jar, including its dependencies. This will allow us to independently host the service in its own container, without external dependencies. Lastly, using the Apache Maven Resources Plugin, Maven will copy the environment properties files from the source directory to the 'target' directory, which contains the JAR file. To accomplish these Maven tasks, all Jenkins needs to do is execute a series of Maven life-cycle goals: 'clean install package validate'.
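A quick, hedged way to confirm the Shade plugin really produced self-contained artifacts is to peek inside one module's JAR; the paths below are illustrative, since the actual artifact names depend on the module POMs:

# list a sample of the classes and dependencies bundled into the Vehicle uber-jar
unzip -l Vehicle/target/*.jar | head -25
# confirm the environment properties files were copied beside the JAR
ls Vehicle/target/*.properties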
Once the code is compiled and packaged into uber-jars, Jenkins uses the Artifact Deployer Plugin to deploy the build artifacts from Jenkins’ workspace to a remote location. In our example, we will copy the artifacts to a second GitHub project, from which we will containerize our microservices.
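Conceptually, the plugin is doing little more than copying files from the Jenkins workspace into the Docker project's per-service folders; a hand-rolled equivalent might look like this (the paths are illustrative, not taken from the job configuration):

# e.g., for the Vehicle service; repeated for each of the four services
cp Vehicle/target/*.jar        ../virtual-vehicles-docker/vehicle/
cp Vehicle/target/*.properties ../virtual-vehicles-docker/vehicle/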
Shown below are the two Jenkins jobs. The first one compiles, packages, and deploys the build artifacts. The second job containerizes the services, databases, and monitoring application.
Shown below are two screen grabs showing how we clone the Virtual-Vehicles GitHub repository and build the project using the main parent pom.xml file. Building the parent POM, in turn, builds all the microservice modules, using their POM files.
Deploy Build Artifacts
Once we have successfully compiled, tested (if we had unit tests with RestExpress), and packaged the build artifacts as uber-jars, we deploy each set of build artifacts to a subfolder within the Virtual-Vehicles Docker GitHub project, using Jenkins' Artifact Deployer Plugin. Shown below is the deployment configuration for just the Vehicles microservice. This deployment pattern is repeated for each service, within the Jenkins job configuration.
The Artifact Deployer Plugin also provides the convenient ability to view and redeploy the artifacts. Below, you see a list of the microservice artifacts deployed to the Docker project by Jenkins.
Build and Compose the Containers
The second Jenkins job clones the Virtual-Vehicles Docker GitHub repository.
This job executes commands from the shell prompt. The first commands use the Docker CLI to remove any existing images and containers which might have been left over from previous job failures. The second commands use the Docker Compose CLI to execute the project's Docker Compose YAML file. The YAML file directs Docker Compose to pull and build the required Docker images, and to build and configure the Docker containers.
# remove all images and containers from this build
docker ps -a --no-trunc  | grep 'jenkins' \
  | awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc  | grep 'jenkins' \
  | awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
  | awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
# set DOCKER_HOST environment variable
export DOCKER_HOST=tcp://localhost:4243

# record installed version of Docker and Maven with each build
mvn --version && \
docker --version && \
docker-compose --version

# use docker-compose to build new images and containers
docker-compose -p jenkins up -d

# list virtual-vehicles related images
docker images | grep 'jenkins' | awk '{print $0}'

# list all containers
docker ps -a | grep 'jenkins\|mongo_\|graphite' | awk '{print $0}'
########################################################################
#
# title:       Docker Compose YAML file for Virtual-Vehicles Project
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Builds (4) images, pulls (2) images, and builds (9) containers,
#              for the Virtual-Vehicles Java microservices example REST API
# to run:      docker-compose -p virtualvehicles up -d
#
########################################################################

graphite:
  image: hopsoft/graphite-statsd:latest
  ports:
    - "8481:80"

mongoAuthentication:
  image: mongo:latest

mongoValet:
  image: mongo:latest

mongoMaintenance:
  image: mongo:latest

mongoVehicle:
  image: mongo:latest

authentication:
  build: authentication/
  ports:
    - "8587:8587"
  links:
    - graphite
    - mongoAuthentication

valet:
  build: valet/
  ports:
    - "8585:8585"
  links:
    - graphite
    - mongoValet
    - authentication

maintenance:
  build: maintenance/
  ports:
    - "8583:8583"
  links:
    - graphite
    - mongoMaintenance
    - authentication

vehicle:
  build: vehicle/
  ports:
    - "8581:8581"
  links:
    - graphite
    - mongoVehicle
    - authentication
Running the docker-compose.yaml file produces the following images:

REPOSITORY               TAG      IMAGE ID
==========               ===      ========
jenkins_vehicle          latest   a6ea4dfe7cf5
jenkins_valet            latest   162d3102d43c
jenkins_maintenance      latest   0b6f530cc968
jenkins_authentication   latest   45b50487155e

And these (9) containers:

CONTAINER ID   IMAGE                            NAME
============   =====                            ====
2b4d5a918f1f   jenkins_vehicle                  jenkins_vehicle_1
492fbd88d267   mongo:latest                     jenkins_mongoVehicle_1
01f410bb1133   jenkins_valet                    jenkins_valet_1
6a63a664c335   jenkins_maintenance              jenkins_maintenance_1
00babf484cf7   jenkins_authentication           jenkins_authentication_1
548a31034c1e   hopsoft/graphite-statsd:latest   jenkins_graphite_1
cdc18bbb51b4   mongo:latest                     jenkins_mongoAuthentication_1
6be5c0558e92   mongo:latest                     jenkins_mongoMaintenance_1
8b71d50a4b4d   mongo:latest                     jenkins_mongoValet_1
Integration Testing
Once the containers have been successfully built and configured, we run a series of integration tests to confirm the services are up and running. We refer to these tests as integration tests because they test the interaction of multiple components. Integration tests were covered in the last post, Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 3.
Note the short pause I have inserted before running the tests. Docker Compose does an excellent job of accounting for the required start-up order of the containers to avoid race conditions (see my previous post). However, depending on the speed of the host box, there is still a start-up period for the containers' processes to be up, running, and ready to receive traffic. Apache Log4j 2 and MongoDB startup, in particular, take extra time. I've seen the containers take as long as 1-2 minutes on a slow box to fully start. Without the pause, the tests fail with various errors, since the containers' processes are not all running.
sleep 15
sh tests.sh -v
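A fixed sleep either wastes time on a fast box or remains too short on a slow one. A more robust, hedged alternative is to poll one service's ping endpoint, with a bounded number of retries, before firing the tests:

# sketch: wait up to ~2 minutes for the Vehicle service to answer,
# instead of guessing at a fixed sleep
for i in $(seq 1 60); do
    curl -s "http://localhost:8581/vehicles/utils/ping.json" | grep -q true && break
    sleep 2
done
sh tests.sh -v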
The bash-based tests below just scratch the surface of a complete set of integration tests. However, they demonstrate an effective multi-stage testing pattern for handling the complex nature of RESTful service request requirements. The tests build upon each other. After setting up some variables, the tests register a new API client. Then, they use the new client's API key to obtain a JWT. The tests then use the JWT to authenticate themselves and create a new vehicle. Finally, they use the new vehicle's id and the JWT to verify the existence of the new vehicle.
Although some may consider using bash to test somewhat primitive, the script demonstrates the effectiveness of bash's curl, grep, sed, and awk, along with regular expressions, to test our RESTful services.
#!/bin/sh

########################################################################
#
# title:       Virtual-Vehicles Project Integration Tests
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Performs integration tests on the Virtual-Vehicles
#              microservices
# to run:      sh tests.sh -v
#
########################################################################

echo --- Integration Tests ---

### VARIABLES ###
hostname="localhost"
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}:8581/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
  --url "${url}" \
  | grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}:8587/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}:8587/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
  | grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}:8581/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  -d '{
    "year": 2015,
    "make": "Test",
    "model": "Foo",
    "color": "White",
    "type": "Sedan",
    "mileage": 250
}' --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}:8581/vehicles?filter=make::Test|model::Foo&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | tail -1 \
  | sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}:8581/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
Since our tests are just a bash script, they can also be run separately from the command line, as in the screen grab below. The output, except for the colored text, is identical to what appears in the Jenkins console output.
Tear Down
Once the integration tests have completed, we ‘tear down’ the project by removing the Virtual-Vehicle images and containers. We simply repeat the first commands we ran at the start of the Jenkins build phase. You could choose to remove the tear down step, and use this job as a way to simply build and start your multi-container application.
# remove all images and containers from this build
docker ps -a --no-trunc  | grep 'jenkins' \
  | awk '{print $1}' | xargs -r --no-run-if-empty docker stop && \
docker ps -a --no-trunc  | grep 'jenkins' \
  | awk '{print $1}' | xargs -r --no-run-if-empty docker rm && \
docker images --no-trunc | grep 'jenkins' \
  | awk '{print $3}' | xargs -r --no-run-if-empty docker rmi
The Complete Process
The diagram below shows the entire process, start to finish.
Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 3
Posted by Gary A. Stafford in Enterprise Software Development, Java Development, Software Development on June 5, 2015
Develop a well-architected and well-documented REST API, built on a tightly integrated collection of Java EE-based microservices.
Note: All code is available on GitHub. For the version of the code that matches the details in this blog post, check out the master branch, v1.0.0 tag (after running git clone …, run a git checkout tags/v1.0.0 command).
Previous Posts
In Part One of this series, we introduced the microservices-based Virtual-Vehicles REST API example. The vehicle-themed Virtual-Vehicles microservices offer a comprehensive set of functionality, through a REST API, to application developers. In Part Two, we installed a copy of the Virtual-Vehicles project from GitHub, gained a basic understanding of how RestExpress works, and discovered how to get the Virtual-Vehicles microservices up and running.
Part Three
In part three of this series, we will take the Virtual-Vehicles for a test drive (get it? maybe it was funnier the first time…). There are several tools we can use to test the Virtual-Vehicles API. One of my favorite tools is Postman. We will explore how to use Postman, along with the Virtual-Vehicles API documentation, to test the Virtual-Vehicles microservices' endpoints, which compose the Virtual-Vehicles API.
Testing the API
There are three categories of tools available to test RESTful APIs: GUI-based applications, command-line tools, and testing frameworks. Postman, Advanced REST Client, REST Console, and SmartBear's SoapUI and SoapUI NG Pro are examples of GUI-based applications designed specifically to test RESTful APIs. cURL and GNU Wget are two examples of command-line tools which, among other capabilities, can test APIs. Lastly, JUnit is an example of a testing framework that can be used to test a RESTful API; perhaps surprisingly, JUnit is not limited to managing unit tests. Each category of testing tools has its pros and cons, depending on your testing needs. We will explore all of these categories in this post as we test the Virtual-Vehicles REST API.
JUnit
JUnit is probably the best known of all Java unit testing frameworks. JUnit’s website describes JUnit as ‘a simple, open source framework to write and run repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks.’ Most Java developers turn to JUnit for unit testing. However, JUnit is capable of other forms of testing, including integration testing. In his post, ‘Unit Testing with JUnit – Tutorial’, Lars Vogel states ‘an integration test has the target to test the behavior of a component or the integration between a set of components. The term functional test is sometimes used as a synonym for integration test. This kind of tests allow you to translate your user stories into a test suite, i.e., the test would resemble an expected user interaction with the application.’
Testing the Virtual-Vehicles RESTful API's operations with JUnit would be considered integration (functional) testing. At a minimum, to complete a request, we call one microservice, which in turn authenticates the JWT by calling another microservice. If authenticated, the first microservice makes a request to its MongoDB database. As Vogel stated, whereas a unit test targets a small unit of code, such as a method, the request/response operation is an integration between a set of components; testing an API call requires several dependencies.
The simplest example of testing the Virtual-Vehicles API with JUnit would be to test an HTTP GET request to return a single instance of a vehicle. The code below demonstrates how this might be done. Notice the request depends on helper methods (not included, for brevity). To request the vehicle, assuming we already have a registered client, we need a valid JWT. We also need a valid vehicle ObjectId. To obtain these two pieces of data, we call helper methods, which in turn make the necessary requests to retrieve a JWT and a vehicle ObjectId.
// imports required by the test method below
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

/**
 * Test of HTTP GET to read a single vehicle.
 */
@Test
public void testVehicleRead() {
    String responseBody = "";
    String output;
    Boolean result = true;
    Boolean expResult = true;
    try {
        URL url = new URL(getBaseUrlAndPort() + "/vehicles/" + getVehicleObjectId());
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Bearer " + getJwt());
        conn.setRequestProperty("Accept", "application/json");
        if (conn.getResponseCode() != 200) {
            // if not 200 response code then fail test
            result = false;
        }
        BufferedReader br = new BufferedReader(new InputStreamReader(
                (conn.getInputStream())));
        while ((output = br.readLine()) != null) {
            responseBody = output;
        }
        if (responseBody.length() < 1) {
            // if response body is empty then fail test
            result = false;
        }
        conn.disconnect();
    } catch (IOException e) {
        // if MalformedURLException, ConnectException, etc. then fail test
        result = false;
    }
    assertEquals(expResult, result);
}
Below are the results of the above test, run in NetBeans IDE, using the built-in support for JUnit.
JUnit can also be run from the command line, using the Maven surefire:test goal:

mvn -q -Dtest=com.example.vehicle.objectid.VehicleControllerIT surefire:test
cURL
One of the best-known command-line tools for all types of operations centered around calling a URL is cURL. According to its website, 'curl is a command line tool and library for transferring data with URL syntax, supporting…HTTP, HTTPS…curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, HTTP/2, cookies, user+password authentication (Basic, Plain, Digest, CRAM-MD5, NTLM, Negotiate, and Kerberos), file transfer resume, proxy tunneling and more.' I prefer the website's briefer description: cURL 'groks those URLs'.
Using cURL, we could make an HTTP PUT request to the Vehicle microservice's /vehicles/{oid}.{format} endpoint. With cURL, we have the ability to add the JWT-based Authorization header and the raw request body, containing the modified vehicle object. Below is an example of that cURL command, which can be run from a terminal prompt.
curl --url 'http://virtual-vehicles.com:8581/vehicles/557310cfec7291b25cd3a1c2' \
  -X PUT \
  -H 'Pragma: no-cache' \
  -H 'Cache-Control: no-cache' \
  -H 'Accept: application/json; charset=UTF-8' \
  -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJ2aXJ0dWFsLXZlaGljbGVzLmNvbSIsImFwaUtleSI6IlJncjg0YzF6VkdtMFd1N25kWjd5UGNURSIsImV4cCI6MTQzMzY2ODEwNywiYWl0IjoxNDMzNjMyMTA3fQ.xglaKWufcj4TZtMXW3DLa9uy5JB_FJHxxtk_iF1WT6U' \
  --data-binary $'{ "year": 2015, "make": "Chevrolet", "model": "Corvette Stingray", "color": "White", "type": "Coupe", "mileage": 902, "createdAt": "2015-05-09T22:36:04.808Z" }' \
  --compressed
The response body contains the expected modified vehicle object in JSON format, along with a 201 Created response status.
The cURL commands may be incorporated into many types of automated testing processes. These might be as simple as a bash script. The script could run a series of automated tests, including the following: register an API client, use the API key to create a JWT, use the JWT to create a new vehicle, use the new vehicle's ObjectId to modify that same vehicle, delete that vehicle, confirm the vehicle is removed, and return a test results report to the user; a sketch of the final two steps follows.
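As a hedged sketch of the delete-and-confirm steps described above, reusing the jwt and id variables from the test script, and the 404 expectation that appears in the Postman collection later in this post:

# delete the test vehicle, then confirm a subsequent read returns 404 Not Found
curl -X DELETE -H "Authorization: Bearer ${jwt}" \
  --url "http://${hostname}:8581/vehicles/${id}"
status=$(curl -s -o /dev/null -w '%{http_code}' \
  -H "Authorization: Bearer ${jwt}" \
  --url "http://${hostname}:8581/vehicles/${id}")
[ "${status}" -eq 404 ] && echo "RESULT: pass" || echo "RESULT: fail"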
cURL Commands from Chrome
Quick tip: instead of hand-coding complex cURL commands containing form data, URL parameters, and headers, use Chrome. First, open the Chrome Developer Tools (F12). Next, using the Postman – REST Client for Chrome, available in the Chrome Web Store, execute your HTTP request. Finally, in the 'Network' tab of the Developer Tools, find and right-click on the request and select 'Copy as cURL'. You now have a complete cURL command equivalent of your Postman request, which you can paste directly into the command line or insert into a script. Below is an example of using the Postman – REST Client for Chrome to generate a cURL command.
curl --url 'http://virtual-vehicles.com:8581/vehicles/554e8bd4c830093007d8b949' \
  -X PUT \
  -H 'Pragma: no-cache' \
  -H 'Origin: chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm' \
  -H 'Accept-Encoding: gzip, deflate, sdch' \
  -H 'Accept-Language: en-US,en;q=0.8' \
  -H 'CSP: active' \
  -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJ2aXJ0dWFsLXZlaGljbGVzLmNvbSIsImFwaUtleSI6IlBUMklPSWRaRzZoU0VEZGR1c2h6U04xRyIsImV4cCI6MTQzMzU2MDg5NiwiYWl0IjoxNDMzNTI0ODk2fQ.4q6EMuxE0vS43zILjE6e1tYrb5ulCe69-1QTFLYGbFU' \
  -H 'Content-Type: text/plain;charset=UTF-8' \
  -H 'Accept: */*' \
  -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36' \
  -H 'Cache-Control: no-cache' \
  -H 'Connection: keep-alive' \
  -H 'X-FirePHP-Version: 0.0.6' \
  --data-binary $'{ "year": 2015, "make": "Chevrolet", "model": "Corvette Stingray", "color": "White", "type": "Coupe", "mileage": 902, "createdAt": "2015-05-09T22:36:04.808Z" }' \
  --compressed
The generated command is a bit verbose. Compare this command to the cURL command, earlier.
Wget
Similar to cURL, GNU Wget provides the ability to call the Virtual-Vehicles API’s endpoints. According to their website, ‘GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive command line tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc.’ Again, like cURL, we can run Wget commands from the command line or incorporate them into scripted testing processes. The Wget website contains excellent documentation.
Using Wget, we could make the same HTTP PUT request to the Vehicle microservice's /vehicles/{oid}.{format} endpoint. Like cURL, we have the ability to add the JWT-based Authorization header and the raw request body, containing the modified vehicle object.
wget -O - 'http://virtual-vehicles.com:8581/vehicles/557310cfec7291b25cd3a1c2' \
  --method=PUT \
  --header='Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJ2aXJ0dWFsLXZlaGljbGVzLmNvbSIsImFwaUtleSI6IlJncjg0YzF6VkdtMFd1N25kWjd5UGNURSIsImV4cCI6MTQzMzY2ODEwNywiYWl0IjoxNDMzNjMyMTA3fQ.xglaKWufcj4TZtMXW3DLa9uy5JB_FJHxxtk_iF1WT6U' \
  --header='Content-Type: text/plain;charset=UTF-8' \
  --header='Accept: application/json' \
  --body-data=$'{ "year": 2015, "make": "Chevrolet", "model": "Corvette Stingray", "color": "White", "type": "Coupe", "mileage": 902, "createdAt": "2015-05-09T22:36:04.808Z" }'
The response body contains the expected modified vehicle object in JSON format, along with a 201 Created response status.
cURL Bash Testing
We can combine cURL and Wget with several of the tools bash provides to develop fairly complex integration tests. The bash-based script below just scratches the surface of a complete set of integration tests. However, the tests demonstrate an efficient multi-stage approach to handling the complex nature of RESTful service request requirements. The tests build upon each other.
After setting up some variables and doing a quick health check on one service, the tests register a new API client by calling the Authentication service. Next, they use the new client's API key to obtain a JWT. The tests then use the JWT to authenticate themselves and create a new vehicle. Finally, they use the new vehicle's id and the JWT to verify the existence of the new vehicle.
Although some may consider using bash to test somewhat primitive, the following script demonstrates the effectiveness of bash's curl, grep, sed, and awk, along with regular expressions, to test our RESTful services. Note how we grep certain values from the response, such as the new client's API key, and then use that value as a parameter in the following test request, such as to obtain a JWT.
#!/bin/sh

########################################################################
#
# title:       Virtual-Vehicles Project Integration Tests
# author:      Gary A. Stafford (https://programmaticponderings.com)
# url:         https://github.com/garystafford/virtual-vehicles-docker
# description: Performs integration tests on the Virtual-Vehicles
#              microservices
# to run:      sh tests_color.sh -v
#
########################################################################

echo --- Integration Tests ---

### VARIABLES ###
hostname="localhost"
application="Test API Client $(date +%s)" # randomized
secret="$(date +%s | sha256sum | base64 | head -c 15)" # randomized

echo hostname: ${hostname}
echo application: ${application}
echo secret: ${secret}

### TESTS ###
echo "TEST: GET request should return 'true' in the response body"
url="http://${hostname}:8581/vehicles/utils/ping.json"
echo ${url}
curl -X GET -H 'Accept: application/json; charset=UTF-8' \
  --url "${url}" \
  | grep true > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "TEST: POST request should return a new client in the response body with an 'id'"
url="http://${hostname}:8587/clients"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get the new client's apiKey for next test"
url="http://${hostname}:8587/clients"
echo ${url}
apiKey=$(curl -X POST -H "Cache-Control: no-cache" -d "{
    \"application\": \"${application}\",
    \"secret\": \"${secret}\"
}" --url "${url}" \
  | grep -o '"apiKey":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo apiKey: ${apiKey}
echo

echo "TEST: GET request should return a new jwt in the response body"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get a new jwt using the new client for the next test"
url="http://${hostname}:8587/jwts?apiKey=${apiKey}&secret=${secret}"
echo ${url}
jwt=$(curl -X GET -H "Cache-Control: no-cache" \
  --url "${url}" \
  | grep '[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}\.[a-zA-Z0-9_-]\{1,\}' \
  | sed -e 's/^"//' -e 's/"$//')
echo jwt: ${jwt}

echo "TEST: POST request should return a new vehicle in the response body with an 'id'"
url="http://${hostname}:8581/vehicles"
echo ${url}
curl -X POST -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  -d '{
    "year": 2015,
    "make": "Test",
    "model": "Foo",
    "color": "White",
    "type": "Sedan",
    "mileage": 250
}' --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"

echo "SETUP: Get id from new vehicle for the next test"
url="http://${hostname}:8581/vehicles?filter=make::Test|model::Foo&limit=1"
echo ${url}
id=$(curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' \
  | grep -o '[a-zA-Z0-9]\{24\}' \
  | tail -1 \
  | sed -e 's/^"//' -e 's/"$//')
echo vehicle id: ${id}

echo "TEST: GET request should return a vehicle in the response body with the requested 'id'"
url="http://${hostname}:8581/vehicles/${id}"
echo ${url}
curl -X GET -H "Cache-Control: no-cache" \
  -H "Authorization: Bearer ${jwt}" \
  --url "${url}" \
  | grep '"id":"[a-zA-Z0-9]\{24\}"' > /dev/null
[ "$?" -ne 0 ] && echo "RESULT: fail" && exit 1
echo "RESULT: pass"
Since these tests are just a bash script, they can be run from the command line, or easily called from a continuous integration tool, such as Jenkins CI or Hudson.
Postman
Postman, like several similar tools, is an application designed specifically for testing API endpoints. The Postman website describes Postman as a tool that allows you to 'build, test, and document your APIs faster.' There are two versions of Postman in the Chrome Web Store. They are Postman – REST Client, the in-browser extension, which we mentioned above, and Postman, the standalone application. There is also Postman Interceptor, which helps you send requests that use browser cookies through the Postman application.
Postman and similar applications have add-ons and extensions to extend their features. In particular, Postman, which is free, offers the paid Jetpacks extension. Jetpacks adds the ability to 'write and run tests inside Postman, extract data from responses, chain requests together and test requests with thousands of variations'. Jetpacks allows you to move beyond basic, one-off, API request-based testing, to automated regression and performance testing.
Using Postman
Let's use the same HTTP PUT example we used with cURL and Wget, and see how we would perform the same task with Postman. In the first screen grab below, you can see all the elements of the HTTP request, including the RESTful API's URL, the URI including the vehicle's ObjectId (/vehicles/{ObjectId}.{format}), the HTTP method (PUT), the Authorization Header with JWT (Bearer), and the raw request body. The raw request body contains a JSON representation of the vehicle we want to update. Note how Postman saves the request in history so we can easily replay it later.
In the next screen grab, we see the response to the HTTP PUT request. Note the response body, response status, timing, and response headers.
Looking at the response body in Postman, you can easily see how RestExpress demonstrates the RESTful principle we discussed in Part Two of the series, HATEOAS (Hypermedia as the Engine of Application State). Note the link to this vehicle ('self' href) and to the entire vehicles collection ('up' href).
Postman Collections
A great feature of Postman with Jetpacks is Collections. Collections are sets of requests that can be saved, recalled, and shared. The Collection Runner runs the requests in a collection, in the order in which you set them. Ordered collections are ideal for the Virtual-Vehicles API. The screen grab below shows a collection of requests, arranged in the order we would execute them to test the Virtual-Vehicles API, as it applies specifically to vehicle CRUD operations:
- Execute HTTP POST request to register the new API client, passing the application name and a shared secret in the request; receive the new client's API key in response
- Execute HTTP GET request, passing the new client's API key and the shared secret in the request; receive a new JWT in response
- Execute HTTP POST request to create a new vehicle, passing the JWT in the header for authentication (used for all following requests); receive the new vehicle object in response
- Execute HTTP PUT request to modify the new vehicle, using the vehicle's ObjectId; receive the modified vehicle object in response
- Execute HTTP GET request for the modified vehicle, to confirm it exists in the expected state; receive the vehicle object in response
- Execute HTTP DELETE request to delete the new vehicle, using the vehicle's ObjectId
- Execute HTTP GET request for the deleted vehicle, to confirm it has been removed; receive a 404 Not Found status response, as expected
Using saved collections for testing the Virtual-Vehicles API is a real time-saver. However, the collections cannot easily be re-run without hand-editing or some advanced scripting. In the simple example above, we hard-coded a JWT and a vehicle ObjectId in the requests. Unfortunately, the JWT expires after only 10 hours by default. More immediately, the ObjectId is unique; the earlier collection test run created, then deleted, the vehicle with that ObjectId.
Negative Testing
You may also perform negative testing with Postman. For example, do you receive the expected response when you don't include the Authorization Header with JWT in a request (401 Unauthorized status)? When you include a JWT which has expired (401 Unauthorized status)? When you request a vehicle whose ObjectId is incorrect or is not found in the database (400 Bad Request status)? Do you receive the expected response when you call an actual service, but an endpoint that doesn't exist (405 Method Not Allowed)?
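As a command-line sketch of the first negative test, the request below omits the Authorization header entirely, and should draw a 401 Unauthorized. The hostname, port, and placeholder ObjectId follow this post's 'dev' conventions:

# negative test: no Authorization header supplied
curl -i -X GET \
--url "http://localhost:8581/vehicles/{ObjectId}"
# expect: HTTP/1.1 401 Unauthorized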
Postman Test Automation
In addition to manually viewing the HTTP response to verify the results of a request, Postman allows you to write and run automated tests for each request. According to their website, a 'Postman test is essentially JavaScript code which sets values for the special tests object. You can set a descriptive key for an element in the object and then say if it's true or false'. This allows you to write a set of response validation tests for each request.
Below is a quick example of testing the same HTTP POST request used to create the new API client, above. In this example (see the sketch following the list), we:
- Test that the Content-Type response header is present
- Test that the HTTP POST successfully returned a 201 status code
- Test that the new client's API key was returned in the response body
- Test that the response time was less than 200ms
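For a sense of what these look like, here is a sketch of the four tests in Postman's (circa 2015) JavaScript test syntax; the descriptive keys are arbitrary names of my own choosing:

tests["Content-Type header is present"] = postman.getResponseHeader("Content-Type") !== null;
tests["Status code is 201"] = responseCode.code === 201;
tests["Response body contains an apiKey"] = responseBody.has("apiKey");
tests["Response time is less than 200ms"] = responseTime < 200;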
Reviewing Postman's 'Tests' tab, above, observe the four tests have run successfully. Using Postman's testing feature, you can create even more advanced tests, eliminating the need to manually validate responses.
This post demonstrates a small subset of the features Postman and other similar applications provide for testing RESTful APIs. The tools and processes you use to test your RESTful API will depend on the stage of development and testing you are in, as well as the technology stacks with which you build, and on which you host, your services.
Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 2
Posted by Gary A. Stafford in Enterprise Software Development, Java Development, Software Development on May 31, 2015
Develop a well-architected and well-documented REST API, built on a tightly integrated collection of Java EE-based microservices.
Note: All code available on GitHub. For the version of the code that matches the details in this blog post, check out the master branch, v1.0.0 tag (after running git clone …, run a ‘git checkout tags/v1.0.0’ command).
Previous Post
In Part One of this series, we introduced the microservices-based Virtual-Vehicles REST API example. The vehicle-themed Virtual-Vehicles microservices offers a comprehensive set of functionality, through a REST API, to application developers. The developers, in turn, will use the Virtual-Vehicles REST API’s functionality to build applications and games for their end-users.
In Part One, we also decided on the proper number of microservices and the responsibility of each. We determined the functionality each microservice must provide to meet the hypothetical functional and nonfunctional requirements of Virtual-Vehicles. To review, the four microservices we are building are as follows:
Virtual-Vehicles REST API Resources
Microservice | Purpose (Business Capability)
Authentication | Manage API clients and JWT authentication
Vehicle | Manage virtual vehicles
Maintenance | Manage maintenance on vehicles
Valet Parking | Manage a valet service to park for vehicles
To review, the first five functions for each service are all basic CRUD operations: create (POST), read (GET), readAll (GET), update (PUT), and delete (DELETE). The readAll function also has find, count, and pagination functionality using query parameters. Unfortunately, RestExpress does not support PATCH for updates. However, I have updated RestExpress' PUT HTTP methods to return the modified object in the response body instead of nothing (status of 201 Created vs. 200 OK). See StackOverflow for an explanation.
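To make the readAll query functionality concrete, here are two hypothetical calls against the Vehicle service. The filter syntax mirrors the filter used in this series' test script; the offset parameter name is an assumption on my part:

# readAll with pagination (offset is assumed)
curl -H "Authorization: Bearer ${jwt}" \
"http://localhost:8581/vehicles?limit=10&offset=0"

# readAll with a find filter
curl -H "Authorization: Bearer ${jwt}" \
"http://localhost:8581/vehicles?filter=make::Test|model::Foo&limit=1"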
All services also have an internal authenticateJwt function, to authenticate the JWT passed in the HTTP request header, before performing any operation. Additionally, all services have a basic health-check function, ping (GET). There are only a few other functions required for our Virtual-Vehicles example, such as for creating JWTs.
Part Two Introduction
In Part Two, we will build our four Virtual-Vehicles microservices. Recall from our first post, we will be using RestExpress. RestExpress composes best-of-breed open-source tools to enable quickly creating RESTful microservices that embrace industry best practices. Those best-of-breed tools include Java EE, Maven, MongoDB, and Netty, among others.
In this post, we will accomplish the following:
- Create a default microservice project in NetBeans using RestExpress MongoDB Maven Archetype
- Understand the basic structure of a default RestExpress microservice project
- Review the changes made to the default RestExpress microservice project to create the Virtual-Vehicles example
- Compile and run the individual microservices directly from NetBeans
I used NetBeans IDE 8.0.2 on Linux Ubuntu 14.10 to build the microservices. You may also follow along in other IDEs, such as Eclipse or IntelliJ, on Mac or Windows. We won't cover installing MongoDB, Maven, and Java. I'll assume if you're building enterprise applications, you have the basics covered.
Using the RestExpress MongoDB Maven Archetype
All the code for this project is available on GitHub. However, to understand RestExpress, you should go through the exercise of scaffolding a new microservice using the RestExpress MongoDB Maven Archetype. You will also be able to use this default microservice project to compare and contrast to the modified versions, used in the Virtual-Vehicles example. The screen grabs below demonstrate how to create a new microservice project using the RestExpress MongoDB Maven Archetype. At the time of this post, the archetype version was restexpress-mongodb version 1.15.
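For reference, scaffolding the same project from a terminal would look something like the command below. The archetype's Maven coordinates shown here are an assumption on my part; confirm the current groupId and version on the RestExpress website on GitHub. NetBeans drives the same goal through its New Project wizard:

mvn archetype:generate \
-DarchetypeGroupId=com.strategicgains.archetype \
-DarchetypeArtifactId=restexpress-mongodb \
-DarchetypeVersion=1.15 \
-DgroupId=com.example \
-DartifactId=vehicle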
Default Project Architecture
Reviewing the two screen grabs below (Project tab), note the key components of the RestExpress MongoDB Maven project, which we just created:
- Base Package (com.example.vehicle)
- Configuration class reads in environment properties (see Files tab) and instantiates controllers
- Constants class contains project constants
- Relationships class defines the linking resources, which aid service discoverability (HATEOAS)
- Main executable class
- Routes class defines the routes (endpoints) exposed by the service and the corresponding controller class
- Model/Controllers Packages (com.example.vehicle.objectid and .uuid)
- Entity class defines the data entity – a Vehicle in this case
- Controller class contains the methods executed when the route (endpoint) is called
- Repository class defines the connection details to MongoDB
- Service class contains the calls to the persistence layer, validation, and business logic
- Serialization Package (com.example.vehicle.serialization)
- XML and JSON serialization classes
- Object ID and UUID serialization and deserialization classes
Again, I strongly recommend reviewing each of these packages' classes. To understand the core functionality of RestExpress, you must understand the relationships between a RestExpress microservice's Route, Controller, Service, Repository, Relationships, and Entity classes. Beyond reviewing the default Maven project, there are limited materials available on the Internet. I would recommend the RestExpress website on GitHub, the RestExpress Google Group Forum, and the YouTube 3-part video series, Instant REST Services with RESTExpress.
Unit Tests?
Disappointingly, the current RestExpress MongoDB Maven Archetype sample project does not come with sample JUnit unit tests. I am tempted to start writing my own if I decide to continue using the RestExpress microservices framework for future projects.
Properties Files
Also included in the default RestExpress MongoDB Maven project is a Java properties file (environment.properties). This properties file is displayed in the Files tab, as shown below. The default properties file is located in the 'dev' environment config folder. Later, we will create an additional properties file for our production environment.
Ports
Within the 'dev' environment, each microservice is configured to start on a separate port (i.e. port = 8581). Feel free to change the services' port mappings if they conflict with previously configured components running on your system. Be careful when changing the Authentication service's port, 8587, since this port is also mapped in all other microservices, using the authentication.port property (authentication.port = 8587). Make sure you change both properties if you change the Authentication service's port mapping.
Base URL
Also in the properties files is the base.url property. This property defines the base URL from which the microservice's endpoints will expect calls, and which the services use when making internal calls between one another. In our post's example, this property in the 'dev' environment is set to localhost (base.url = http://localhost). You could map an alternate hostname from your hosts file (/etc/hosts). We will do this in a later post, in our 'prod' environment, mapping the base.url property to Virtual-Vehicles (base.url = http://virtual-vehicles.com). In the 'dev' environment properties file, MongoDB is also mapped to localhost (i.e. mongodb.uri = mongodb://localhost:27017/virtual_vehicle).
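If you do map an alternate hostname in the 'dev' environment, the hosts file entry is a single line; the loopback address and hostname below are illustrative:

127.0.0.1    virtual-vehicles.com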
Metrics Plugin and Graphite
RestExpress also uses the properties file to hold configuration properties for the Metrics Plugin and Graphite. The Metrics Plugin and Graphite are both first-class citizens of RestExpress. Below is a copy of the Vehicle service's environment.properties file for the 'dev' environment. Note the Metrics Plugin and Graphite are both disabled in the 'dev' environment.
# Default is 8081
port = 8581
# Port used to call Authentication service endpoints
authentication.port = 8587
# The size of the executor thread pool
# (that can handle blocking back-end processing).
executor.threadPool.size = 20
# A MongoDB URI/Connection string
# see: http://docs.mongodb.org/manual/reference/connection-string/
mongodb.uri = mongodb://localhost:27017/virtual_vehicle
# The base URL, used as a prefix for links returned in data
base.url = http://localhost
# Configuration for the MetricsPlugin/Graphite
metrics.isEnabled = false
#metrics.machineName =
metrics.prefix = web1.example.com
metrics.graphite.isEnabled = false
metrics.graphite.host = graphite.example.com
metrics.graphite.port = 2003
metrics.graphite.publishSeconds = 60
Choosing a Data Identification Method
RestExpress offers two identification models for managing data: the MongoDB ObjectId and the Universally Unique Identifier (UUID). MongoDB uses an ObjectId to uniquely identify a document within a collection. The ObjectId is a special 12-byte BSON type that guarantees the uniqueness of the document within the collection. Alternately, you can use the UUID identification model, which uses a UUID instead of a MongoDB ObjectId. The UUID model also adds createdAt and updatedAt properties that are automatically maintained by the persistence layer. You may stick with ObjectId, as we will in the Virtual-Vehicles example, or choose the UUID. If you use multiple database engines across your projects, the UUID will give you a universal identification method.
Project Modifications
Many small code changes differentiate our Virtual-Vehicles microservices from the default RestExpress Maven Archetype project. Most changes are superficial; nothing changed about how RestExpress functions. Changes between the screen grabs above, showing the default project, and the screen grabs below, showing the final Virtual-Vehicles microservices, include:
- Remove all packages, classes, and code references to the UUID identification methods (example uses ObjectId)
- Rename several classes for convenience (dropped use of word ‘Entity’)
- Add the Utilities (com.example.utilities) and Authentication (com.example.authenticate) packages
MongoDB
Following a key principle of microservices mentioned in the first post, Decentralized Data Management, each microservice will have its own instance of a MongoDB database associated with it. The below diagram shows each service and its corresponding database, collection, and fields.
From the MongoDB shell, we can observe the individual instances of the four microservice’s databases.
In the case of the Vehicle microservice, the associated MongoDB database is virtual_vehicle. This database contains a single collection, vehicles. While the properties file defines the database name, the Vehicle entity class defines the collection name, using the org.mongodb.morphia.annotations classes' annotation functionality.
@Entity("vehicles") | |
public class Vehicle | |
extends AbstractMongodbEntity | |
implements Linkable { | |
private int year; | |
private String make; | |
private String model; | |
private String color; | |
private String type; | |
private int mileage; | |
// abridged... |
Looking at the virtual_vehicle database in the MongoDB shell, we see that the sample document's fields correspond to the Vehicle entity class's properties.
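You can make the same comparison yourself from the mongo shell; the commands below assume MongoDB is running locally on its default port:

mongo
> use virtual_vehicle
> db.vehicles.findOne()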
Each of the microservice's MongoDB databases is configured in the environment.properties file, using the mongodb.uri property. In the 'dev' environment, we use localhost as our host URL (i.e. mongodb.uri = mongodb://localhost:27017/virtual_vehicle).
Authentication and JSON Web Tokens
The three microservices, Vehicle, Valet, and Maintenance, are almost identical. However, the Authentication microservice is unique. This service is called by each of the other three services, as well as being called directly. The Authentication service provides a very basic level of authentication using JSON Web Tokens (JWT), pronounced 'jot'.
Why do we want authentication? We want to confirm that the requester using the Virtual-Vehicles REST API is the registered API client they claim to be. JWT allows us to achieve this requirement with minimal effort.
According to jwt.io, ‘a JSON Web Token is a compact URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JSON object that is digitally signed using JSON Web Signature (JWS).‘ I recommend reviewing the JWT draft standard to fully understand the structure, creation, and use of JWTs.
Virtual-Vehicles Authentication Process
There are different approaches to implementing JWT. In our Virtual-Vehicles REST API example, we use the following process for JWT authentication:
- Register the new API client by supplying the application name and a shared secret (one time only)
- Receive an API key in response (one time only)
- Obtain a JWT using the API key and the shared secret (each user session or renew when the previous JWT expires)
- Include the JWT in each API call
In our example, we are passing four JSON fields in our set of claims. Those fields are the issuer ('iss'), API key, expiration ('exp'), and the time the JWT was issued ('iat'). Both the 'iss' and the 'exp' claims are defined in the Authentication service's environment.properties file (jwt.issuer and jwt.expire.length).
Expiration and issued date/time use the JWT standard's recommended 'Seconds Since the Epoch'. The default expiration for a Virtual-Vehicles JWT is set to an arbitrary 10 hours from the time the JWT was issued (jwt.expire.length = 36000). That amount, 36,000 seconds, is equivalent to 10 hours x 60 minutes/hour x 60 seconds/minute.
Decoding a JWT
Using the jwt.io site's JWT Debugger tool, I have decoded a sample JWT issued by the Virtual-Vehicles REST API, generated by the Authentication service. Observe the three parts of the JWT: the JOSE Header, the Claims Set, and the JSON Web Signature (JWS).
The JWT’s header indicates that our JWT is a JWS that is MACed using the HMAC SHA-256 algorithm. The shared secret, passed by the API client, represents the HMAC secret cryptographic key. The secret is used in combination with the cryptographic hash function to calculate the message authentication code (MAC). In the example below, note how the API client’s shared secret is used to validate our JWT’s JWS.
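For illustration, a decoded Virtual-Vehicles JWT would look something like the following. The claim names match those described above, while the values are placeholders of my own; note the exp value is exactly 36,000 seconds after iat:

JOSE Header:
{
"alg": "HS256",
"typ": "JWT"
}

Claims Set:
{
"iss": "{the jwt.issuer property value}",
"apiKey": "{the client's 24-character API key}",
"exp": 1432414800,
"iat": 1432378800
}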
Sequence Diagrams of Authentication Process
Below are three sequence diagrams, which detail the following processes: API client registration process, obtaining a new JWT, and a REST call being authenticated using the JWT. The end-user of the API self-registers their application using the Authentication service and receives back an API key. The API key is unique to that client.
The end-user application then uses the API key and the shared secret to receive a JWT from the Authentication service.
After receiving the JWT, the end-user application passes the JWT in the header of each API request. The JWT is validated by calling the Authentication service. If the JWT is valid, the request is fulfilled. If not valid, a ‘401 Unauthorized’ status is returned.
JWT Validation
The JWT draft standard recommends how to validate a JWT. Our Virtual-Vehicles Authentication microservice uses many of those criteria to validate the JWT, which include:
- API Key – retrieve the API client's shared secret from MongoDB, using the API key contained in the JWT's claims set (the secret is returned; therefore, the API key is valid)
- Algorithm – confirm the algorithm ('alg'), found in the JWT header, which was used to encode the JWT, is 'HS256' (HMAC SHA-256)
- Valid JWS – using the client's shared secret from #1 above, validate the HMAC SHA-256 MACed JWS
- Expiration – confirm the JWT is not expired ('exp')
Inter-Service Communications
By default, the RestExpress Archetype project does not offer an example of communications between microservices. Service-to-service communication for microservices is most often done using the same HTTP-based REST calls used by our Virtual-Vehicles REST API. Alternately, a message broker, such as RabbitMQ, Kafka, ActiveMQ, or Kestrel, is often used. Both methods of communicating between microservices, referred to as 'inter-service communication mechanisms' by InfoQ, have their pros and cons. The InfoQ website has an excellent microservices post, which discusses the topic of service-to-service communication.
For the Virtual-Vehicles microservices example, we will use HTTP-based REST calls for inter-service communications. The primary service-to-service communication in our example is between the three microservices, Vehicle, Valet, and Maintenance, and the Authentication microservice. The three services validate the JWT passed in each request to a CRUD operation, by calling the Authentication service and passing the JWT, as shown in the sequence diagram, above. Validation is done using an HTTP GET request to the Authentication service's .../jwts/{jwt} endpoint. The Authentication service's method, called by this endpoint, minus some logging and error handling, looks like the following:
public boolean authenticateJwt(Request request, String baseUrlAndAuthPort) {
    String jwt, output, valid = "";

    // extract the JWT from the 'Authorization: Bearer {jwt}' request header
    try {
        jwt = (request.getHeader("Authorization").split(" "))[1];
    } catch (NullPointerException | ArrayIndexOutOfBoundsException e) {
        return false; // header missing or malformed
    }

    // call the Authentication service's /jwts/{jwt} endpoint to validate the JWT
    try {
        URL url = new URL(baseUrlAndAuthPort + "/jwts/" + jwt);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        if (conn.getResponseCode() != 200) {
            return false;
        }
        // the response body is simply 'true' or 'false'
        BufferedReader br = new BufferedReader(new InputStreamReader(
                (conn.getInputStream())));
        while ((output = br.readLine()) != null) {
            valid = output;
        }
        conn.disconnect();
    } catch (IOException e) {
        return false;
    }
    return Boolean.parseBoolean(valid);
}
Primarily, we are using the java.net and java.io packages, along with the org.restexpress.Request class, to build and send our HTTP request to the Authentication service. Alternately, you could use just the org.restexpress package to construct the request and handle the response. This same basic method structure shown above can be used to create unit tests for your service's endpoints.
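As a minimal sketch of that idea, here is a hypothetical JUnit 4 test (JUnit is already part of our stack) that exercises the Vehicle service's ping endpoint, described in the next section. The class and test names are my own, and the test assumes the service is already running locally on its 'dev' port:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;

public class PingEndpointTest {

    // assumes the Vehicle service is running locally on its 'dev' port
    private static final String PING_URL =
            "http://localhost:8581/vehicles/utils/ping";

    @Test
    public void pingShouldReturn200AndTrue() throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(PING_URL).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        // expect a 200 OK status and a response body of 'true'
        assertEquals(200, conn.getResponseCode());
        BufferedReader br = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        assertTrue(Boolean.parseBoolean(br.readLine()));
        conn.disconnect();
    }
}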
Health Ping
Each of the Virtual-Vehicles microservices contain a DiagnosticController
in the .utilities
package. In our example, we have created a ping()
method. This simple method, called through the .../utils/ping
endpoint, should return a 200 OK
status and a boolean value of ‘true’, indicating the microservice is running and reachable. This route’s associated method could not be simpler:
public void ping(Request request, Response response) {
    response.setResponseStatus(HttpResponseStatus.OK);
    response.setResponseCode(200);
    response.setBody(true);
}
The ping health check can even be accessed with a simple curl command, curl localhost:8581/vehicles/utils/ping.
In a real-world application, we would add additional health checks to all services, providing additional insight into the health and performance of each microservice, as well as the service’s dependencies.
API Documentation
A well-written RESTful API will not require extensive documentation for consumers to understand the API's operations. Endpoints will be discoverable through linking (see the Response Body links section in the example below). API documentation should provide the HTTP method, required headers and URL parameters, the expected response body and response status, and so forth.
An API should be documented before any code is written, similar to TDD. The API documentation is the result of a clear understanding of requirements. The API documentation should make the coding even easier since the documentation serves as a further refinement of the requirements. The requirements are an architectural plan for the microservice’s code structure.
Sample Documentation
Below, is a sample of the Virtual-Vehicles REST API documentation. It details the function responsible for creating a new API client. The documentation provides a brief description of the function, the operation’s endpoint (URI), HTTP method, request parameters, expected response body, expected response status, and even a view of the MongoDB collection’s document for a new API client.
You can download a PDF version of the Virtual-Vehicles RESTful API documentation on GitHub or review the source document on Google Docs. It contains general notes about the API, and details about every one of the API’s operations.
Running the Individual Microservices
For development and testing purposes, the easiest way to start the microservices is in NetBeans using the Run command. The Run command executes the Maven exec goal. Based on the DEFAULT_ENVIRONMENT constant in the org.restexpress.util Environment class, RestExpress will use the 'dev' environment's environment.properties file, in the project's /config/dev directory.
Alternately, you can use the RestExpress project's recommended command from a terminal prompt to start each microservice from its root directory (mvn exec:java -Dexec.mainClass=test.Main -Dexec.args="dev"). You can also use this command to switch from the 'dev' to the 'prod' environment properties (-Dexec.args="prod").
You may use a variety of commands to confirm all the microservices are running. I prefer something basic, like sudo netstat -tulpn | grep 858[0-9]. This will find all the ports within the 'dev' port range. For more in-depth information, you can use a command like ps -aux | grep com.example | grep -v grep.
Part Three: Testing our Services
We now have a copy of the Virtual-Vehicles project pulled from GitHub, a basic understanding of how RestExpress works, and our four microservices running on different ports. In Part Three of this series, we will take them for a drive (get it?). There are many ways to test our services' endpoints. One of my favorite tools is Postman. We will explore how to use several tools, including Postman, and our API documentation, to test our microservices' endpoints.
Building a Microservices-based REST API with RestExpress, Java EE, and MongoDB: Part 1
Posted by Gary A. Stafford in Enterprise Software Development, Java Development, Software Development on May 18, 2015
Develop a well-architected and well-documented REST API, built on a tightly integrated collection of Java EE-based microservices.
Microservices
Microservices are a popular and growing trend in software development. According to Wikipedia, microservices are “a software architecture style, in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled and focus on doing a small task.”
Martin Fowler and James Lewis (ThoughtWorks) have done an exemplary job capturing the essence of microservice architecture in their March 2014 post, microservices. Fowler has also discussed these principles in several presentations, including the January 2015 goto; Conference, Keynote: Microservices by Martin Fowler.
Additionally, noted technical consultant and speaker, Adrian Cockcroft (Battery Ventures), has made significant contributions to the definition of microservices, such as in his December 2014 dockercon14 | eu presentation, State of the Art in Microservices.
Lastly, Zhamak Dehghani (ThoughtWorks), delivered an in-depth discussion of microservices, including customer perspectives, in her October 2014 presentation, Real-World Microservices: Lessons from the Frontline.
Some of the major characteristics of microservices and REST cited by these experts, include:
- Organized Around Business Capabilities
- Single Responsibility
- Loose Coupling / High Cohesion
- Smart Endpoints and Dumb Pipes
- Decentralized Data Management
- Hypermedia as the Engine of Application State (HATEOAS)
As we develop this post’s example, I will demonstrate how all of the above characteristics are implemented.
REST API
A REST API is the mash-up of two common software concepts, Representational State Transfer (REST) and an application programming interface (API). Although even Wikipedia doesn't have an exact definition of a REST API, they come close in their discussion of REST. According to Wikipedia, "Web service APIs that adhere to the REST architectural constraints are called RESTful APIs. HTTP-based RESTful APIs are defined with these aspects: base URI, an Internet media type for the data, standard HTTP methods, hypertext links to a reference state, and hypertext links to reference related resources."
An important nuance, and a differentiator from SOA-based APIs: RESTful APIs do not require XML-based Web service protocols (SOAP and WSDL) to support their interfaces (Wikipedia).
The author of the WebConcepts channel does an excellent job capturing the essence of REST APIs in REST API concepts and examples. Two additional presentations I strongly recommend are REST+JSON API Design – Best Practices for Developers and Designing a Beautiful REST+JSON API, both by Les Hazlewood, CTO of Stormpath. Stormpath is a leader in the commercial REST API space.
Microservices-Based REST API
A microservices-based REST API is a REST API whose HTTP requests call an orchestrated collection or collections of language-agnostic and platform-agnostic microservices. The combination of these two trends, microservices and a REST API, offers a simple, reliable, and scalable solution for providing flexible functionality to an end-user, in a technology-agnostic manner.
REST API Example
There is a fast-growing volume of reference materials describing the characteristics, benefits, and general architecture of microservices and REST APIs. However, in researching these topics, I have found a shortage of practical examples or tutorials on building microservices-based REST API solutions.
Undoubtedly, the complexity of even the simplest microservices-based solution limits the number of available examples. A minimally viable solution requires planning, coding, testing, and documentation. The addition of cross-cutting features such as security, logging, monitoring, and orchestration creates an enormous task when building a practical microservices-based example.
In the following series of posts, we will use many of the characteristics of a modern microservice architecture as described by Fowler, Lewis, Cockcroft, and Dehghani. We will combine these microservice characteristics with the best practices of good REST API design, as described by Hazlewood and WebConcepts, to build a minimally viable microservices-based REST API.
In a future post, we will create an application, which leverages the microservices-based solution, through the REST API. Additionally, we will demonstrate how to ensure high-availability of the individual microservices and data sources.
Vehicle for Learning
In a similar vein to the publicly available Twitter, Facebook, and Google REST APIs, we will build the Virtual-Vehicles REST API. The Virtual-Vehicles REST API will constitute a collection of vehicle-themed microservices. Collectively, the microservices will offer a comprehensive set of functionality to the end-user, an application developer. They, in turn, will use the functionality of the Virtual-Vehicles REST API to build applications and games for their end-users.
Technology Choices
There are a seemingly infinite number of technology choices for building microservices and REST APIs. Your choice of development languages, databases, application servers, third-party libraries, API gateway, logging, monitoring, automated testing, ORM or ODM, and even the IDE, all define your technology stack.
For the Virtual-Vehicles solution, we will use the following key technologies:
- RestExpress – Create performant, stand-alone microservices-based REST APIs
- Java EE – Primary development language
- Netty – Async, event-driven Java network application framework
- Apache Maven – Dependency management and more
- MongoDB – Data source
- JSON Web Tokens (JWT) – Claims-based authentication
- Apache Log4j 2 – Logging
- JUnit – Unit testing
- HAProxy – Service gateway and load-balancing
- NetBeans – IDE
- Git and GitHub – SCM
- Postman – REST API testing
What is RestExpress?
According to their website, RestExpress composes best-of-breed open-source tools to enable quickly creating RESTful microservices that embrace industry best practices. Built from the ground up for container-less microservice architectures, RestExpress is the easiest way to create RESTful APIs in Java. RestExpress is an extremely lightweight, fast REST engine and API for Java. RestExpress is a thin wrapper on Netty IO HTTP handling. RestExpress lets you create performant, stand-alone REST APIs rapidly. RestExpress provides several Maven archetypes, which we will use as a basis for our microservices.
RestExpress will also drive our technology decisions to use Java EE, Maven, MongoDB, and Netty.
Virtual-Vehicles Microservices
Adhering to the first few microservice architectural principles listed above, organized around business capabilities, single responsibility, and high cohesion, we first must determine the proper number of microservices, and their individual responsibilities. In the case of our solution, we will break down Virtual-Vehicles’ business capabilities into the following microservices:
Virtual-Vehicles Services
Microservice | Purpose (Business Capability)
Authentication Service | Manage API clients and JWT authentication |
Vehicle Service | Manage virtual vehicles |
Maintenance Service | Manage maintenance on vehicles |
Valet Parking Service | Manage a valet service to park for vehicles |
Sales Service | Manage the buying and selling of vehicles |
Registration Service | Manage registration of vehicles to owners |
Auction Service | Manage a virtual car auction |
Car Show Service | Manage a virtual car show |
Interaction Service | Manage interaction of users with vehicles |
For simplicity in this post's example, we will only be exploring the four services shown above in bold (Authentication, Vehicle, Maintenance, and Valet Parking).
This segmentation of service functionality is unlike what we might encounter in traditional monolithic, n-tier applications, and SOA-based architecture. Traditional applications were built around application-centric functionality or business’ organizational structure. Microservices, however, are client-centric and built around business capabilities.
REST API Functionality
The next decision we need to make is required functionality. What are the operational requirements of each business segment, represented by the microservices? Additionally, what are the nonfunctional requirements, such as monitoring, logging, and authentication? Requirements are translated into functionality, which is translated into the available resources exposed via the service's RESTful endpoints.
For the Virtual-Vehicles microservices solution, based on a hypothetical set of business and non-functional requirements, we will expose the following resources. Collectively, they will compose the REST API:
Virtual-Vehicles REST API Resources
Microservice | Purpose (Business Capability)
Authentication Service | Manage API clients and JWT authentication
Vehicle Service | Manage virtual vehicles
Maintenance Service | Manage maintenance on vehicles
Valet Parking Service | Manage a valet service to park for vehicles
Reviewing the table above, note the first five functions for each service are all basic CRUD operations: create (POST), read (GET), readAll (GET), update (PUT), and delete (DELETE). The readAll function also has find, count, and pagination functionality using query parameters.
All services also have an internal authenticateJwt function, to authenticate the JWT passed in the HTTP request header, before performing any operation. Additionally, all services have a basic health-check function, ping (GET). There are only a few other functions required for our Virtual-Vehicles example, such as for creating JWTs.
I've labeled each function with a suggested user scope. Scopes include public, admin, and internal. As a consumer of the REST API, you may only want to expose certain functionality to your general end-user (public). Additional functionality may be reserved for an administrative user (admin) or only yourself as a developer (internal). Creating a new vehicle might be a common end-user feature. However, the ability to permanently delete one or more vehicles may be reserved for an admin-level user, or not exposed at all.
REST API Patterns
We will not spend a lot of time discussing patterns for building REST APIs. There are many useful materials available on the Internet regarding industry-standard patterns for REST API resource URI construction. The two presentations I recommend above by Les Hazlewood, CTO of Stormpath, are excellent. Also, Microservices.io, RestApiTutorial.com, swagger.io, and raml.org websites offer solid overviews of REST patterns and RESTful standards.
A common RESTful anti-pattern, which is hard to avoid as an OOP developer, is the temptation to use verbs versus nouns and method-like names in resource URIs. Remember, we are not designing an end-user application. We are building an API, used by API consumers (application developers), to build a variety of platform- and language-agnostic applications. Functions like paintCar, changeOil, or parkVehicle are not something the API should define. The Vehicle microservice exposes the update operation, which allows an application developer to change the car's paint color in their paintCar method. Similarly, the valet service exposes the create operation, which allows the application developer to create a function to park the vehicle (or car, or truck, in a garage, or parking lot, etc.). A good REST API allows for maximum end-user flexibility.
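To make the noun-versus-verb distinction concrete, compare the two hypothetical requests below; the URIs and JSON body are illustrative only:

# anti-pattern: verb, method-like resource URI
POST /vehicles/{ObjectId}/paintCar

# RESTful: noun-based resource; behavior expressed through the update operation
PUT /vehicles/{ObjectId}
{ "color": "Red" }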
Part Two
In Part Two, we will install a copy of the Virtual-Vehicles project from GitHub, gain a basic understanding of how RestExpress works, and discover how to get the Virtual-Vehicles microservices up and running.
Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 2 of 2)
Posted by Gary A. Stafford in Build Automation, DevOps, Enterprise Software Development, Software Development on November 13, 2013
Build an automated deployment pipeline for your Java EE applications using leading open-source technologies, including NetBeans, Git, Maven, JUnit, Jenkins, and GlassFish. All source code for this post is available on GitHub.
Introduction
In part 1, Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 1 of 2), we built the first part of our basic deployment pipeline using leading open-source technologies. In part 2, we will use Jenkins CI Server and Oracle GlassFish Application Server to complete our deployment pipeline.
To review, the three main goals of our deployment pipeline are continuous integration, automated testing, and continuous deployment. Our objective is to automatically compile, test, assemble, and deploy our Java EE application to multiple environments, as the project progresses through the software development life cycle (SDLC).
Setting up Git Server
As I mentioned in part 1, as a part of a development team using Git, you would place your project on a remote Git Server. You and your team members would each clone the repository from the Git Server to your local development environments. You and your team would commit your code changes locally, then pull, merge, and push your changes back to the remote Git Server. Jenkins will pull the project’s source code from the Git Server.
In part 1 of this post, we just created a local Git repository. In part 2, we will properly set up our project on a remote Git Server. First, we need to export our local repository into a new, bare repository on the Git Server. The Git term, 'bare repository', refers to a repository that does not contain a working directory. The repository has no working copies of your source files. You only use the bare repository to clone, pull from, and push to. The bare repository's name carries a .git extension (i.e. ssh://user@server:/git-repos/myproject.git).
From the root of your remote Git Server repository, execute the following command, substituting the path to your local project. If your Git Server is on a separate machine than your local project repository, you will need to copy the new bare repository to the remote Git Server. This involves a few simple steps, explained in this post, and at git-scm.com.
git clone --bare {path-to-existing-local-repository}\{name-of-repository} {name-of-repository}.git
Once you have created the repository on the remote Git Server, I would recommend you clone the remote repository to your local machine and discard your original local repository from part 1 of the post. You don’t have to do this step, but cloning fresh from the server will make sure Git is working correctly. The screen grabs below illustrate an example of cloning a new repository to my local NetBeans Project folder.
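With the bare repository in place on the Git Server, cloning it back to your local machine is a single command; the SSH URL below follows the example pattern mentioned above and is illustrative only:

git clone ssh://user@server:/git-repos/myproject.git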
Configuring Jenkins
The diagram below illustrates the deployment pipeline from Git Server to Jenkins to GlassFish in finer detail. It begins with an initial commit to the local Git project repository and ends with the deployment of the project’s WAR file to the GlassFish domain. We will walk through it step-by-step.
Jenkins Plugins
Before we create our new Jenkins Jobs, we need to configure Jenkins properly. You will need a recent version of Jenkins installed, along with the following plugins:
- Build With Parameters Plugin
- Copy Artifact Plugin
- Jenkins GIT plugin (includes Jenkins GIT client plugin)
- Jenkins Parameterized Trigger plugin
- Maven Integration plugin
- Credentials Plugin (optional for use with Git Server if security is enabled)
- ThinBackup (optional to install supplied Jenkins jobs configuration files)
Global Security
Jenkins can be configured with or without Global Security. For this post, I have enabled Global Security, as is typical of most development environments. I chose the 'Jenkins's own user database' option for authentication. In larger development environments, authentication would normally be done against LDAP.
The user I have set up, 'jenkins', will be the user that Git authenticates with when connecting to Jenkins (explained later). Set up your own user and note the user's API Token. Since Global Security has been enabled, we will need the token later to trigger the Jenkins build from Git. Your user's unique API token will be different than in the example below.
Jenkins Jobs
We will set up two Jenkins ‘free-style software project’ jobs, ‘GitMavenGlassFish_Build’ and ‘GitMavenGlassFish_Deploy’. We won’t be using the obvious choice, a ‘maven2/3 project’. If you’re interested, here’s why. The first job, the build job, will be responsible for pulling the source code from the Git Server. The build job, with help from Maven, will compile, test, and assemble the application code. The second job, the deployment job, will pull the artifacts from the build job and deploy them to GlassFish. The build job will trigger the deployment job, once the build job completes successfully. This is explained in detail, to follow.
Why Two Jobs?
Following good modular design and Separation of Concerns (SoC) principles, separating the build from the deployment gains us several advantages, including:
- Modularity– Ability to change deployment methodology or deployment targets, without disrupting the build and test process. For example, we might move the application hosting from GlassFish to WebLogic, or decide to use Ant instead of Maven for deployment tasks. This can happen totally independent of the build and testing processes.
- Separation/Isolation – For any reason we are unable to deploy the artifacts as part of the deployment job, we won’t impact the continuous integration and automated testing processes, which are part of the separate build job.
- Support – Support is easier by having smaller pieces of functionality to troubleshoot and maintain.
In a larger enterprise environment, you would probably encounter further separation of concerns. Unit testing, performance testing, deployment validation, and documentation generation (javadocs) are often handled by separate jobs. Jenkins represents a smaller pipeline within our larger deployment pipeline.
I intentionally left out notification for brevity. At minimum, you would want to be notified when the build or deployment jobs failed. Additionally, with continuous deployment, the deployment would trigger a notification to the stakeholders of that environment, such as the Testers. This lets them know the new software is ready to be tested. Notifications often include a list of bug fixes and feature enhancements that need to be tested. This can easily be pulled from Git into Jenkins and out to the end user.
Both Jenkins jobs definitions are available as xml files on gist.github.com. Using Jenkins’ ThinBackup Plugin, you can save both gists locally, and then restore them to your Jenkins server. The build job gist is here and the deployment job gist is here. This may save you some configuration time.
Jenkins Build Job
Both the build job and the deployment job require an input parameter. This parameter represents the targeted environment (GlassFish domain) for deployment, such as 'testing'. How this parameter is passed to Jenkins is discussed later in the Git Hooks section, below.
Reviewing the below screen grab of the build job’s configuration, you will observe the following steps:
- Build Request – A build request is received by the job (explained later). The request contains an input parameter indicating the ‘environment’. The parameter must be one of the choices listed in ‘Choices’.
- Maven Dependencies – Based on the pom file, Maven retrieves all the required dependencies from the remote Maven repository, if the dependencies are not already contained in the workspace's local repository. Note the setting 'Use private Maven repository'. This creates a local repository for project dependencies within the project's workspace.
- Pull from Git – Jenkins pulls the code from the Git Server using the supplied repository configuration information. Note my Git Server does not require authentication. If it did, we would set-up and use the proper credentials.
- Build – Jenkins builds the project using the Maven command ‘clean install -e’. The pom file contains the necessary configuration information.
- Unit Test – The above Maven ‘install’ command also calls JUnit to execute the unit tests. The results of these tests are published and displayed as part of the build job’s details.
- Assemble WAR – The above Maven ‘install’ command also assembles the project’s WAR file.
- Archive Artifacts – Based on the success of the build and unit tests, Jenkins archives specific artifacts needed by the deployment job. Jenkins uses the input parameter in #1 to define which properties file and password file to archive.
- Trigger Deployment Job – Based on the success of the build and unit tests, Jenkins triggers the ‘downstream’ deployment job, passing it the same environment parameter.
Jenkins Deployment Job
Reviewing the below screen grab of the deployment job’s configuration, you will observe the following steps:
- Build Request – A build request is received from the upstream build job. The request contains the input parameter indicating ‘environment’.
- Copy Artifacts – Jenkins copies the artifacts from the build job that called the deploy job.
- Read Properties – Maven executes the command ‘mvn properties:read-project-properties glassfish:redeploy -e’. The first half of this command instructs Maven to read the appropriate properties file, as indicated by the environment parameter, ‘glassfish.properties.file.argument=${environment}’.
- POM – Maven substitutes the key ‘glassfish.properties.file.argument’ in the pom file with the environment value. This tells Maven the name of the properties file, which supplies all the remaining property values to the pom file.
- Maven Dependencies – If the dependencies are not already contained in the workspace's local repository, Maven retrieves all the required dependencies from the remote Maven repositories, based on the pom. Note the setting 'Use private Maven repository', checked in the screen grab below. This option instructs Jenkins to create a local repository for project dependencies within the project's workspace.
- Deployment – The last half of the command in #3 deploys, or more accurately redeploys the application’s WAR file to GlassFish. The ‘glassfish:redeploy’ works only if the WAR file has already been initially deployed to the GlassFish domain using the ‘glassfish:deploy’ command. For this process, I am assuming the initial deployment was already done directly through the GlassFish Administration Console, NetBeans, or command line.
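Put together, the deployment job's Maven invocation looks something like the line below, with 'testing' standing in for whatever environment parameter the job received:

mvn properties:read-project-properties glassfish:redeploy -e \
-Dglassfish.properties.file.argument=testing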
Git Hooks
To achieve continuous integration, we want to automatically build and test our job after each change to our code. We have a number of choices to make this happen. The obvious choice is letting Jenkins poll the Git Server. Although polling would simplify configuration, polling is frowned upon in many environments. Even the creator of Jenkins, Kohsuke Kawaguchi, frowns upon polling in his post, ‘Polling Must Die‘.
Why is polling bad? It adds unnecessary activity and delay. Let’s say Jenkins’ polling frequency is set to every 2 minutes, but you only have an average of 5 pushes to your remote Git Server project repository per day. Based on these stats, in just one day, Jenkins will poll Git 720 times to discover only 5 pushes. That’s 144 times per push. Also, based on the polling frequency, when you do push, you could wait up to 2 minutes for Jenkins to queue the build job. The longer you wait for feedback on your changes, the greater chance your defects could be pulled down by other developers. You should expect immediate and continuous feedback.
A vastly more efficient and configurable method of continuous integration between Git and Jenkins is Git Hooks. Git Hooks allow us to execute scripts based on specific Git actions. In our case, when a developer completes a successful push to the remote Git Server project repository, we want to call Jenkins to build, test, and deploy the modified project code. Using hooks means we only call Jenkins when a successful push is completed. Furthermore, we can be assured Jenkins will immediately queue our request to build and deploy the job when a push occurs.
Post-Receive Hook
There are several types of Git Hooks. They include 'post-commit', 'pre-push', 'update', 'pre-rebase', and so forth. I recommend this post on kernel.org for a good explanation of the hook types and their purposes. Git also includes sample hook files inside the 'hooks' subdirectory of each new repository's .git folder.
For our pipeline, we will employ the 'post-receive' hook. Whenever a successful push is received by the Git Server's project repository, the 'post-receive' hook will be called, and the script commands contained in the post-receive hook file will be executed. Hooks are language agnostic; they can be written in almost any scripting language, such as Perl, Shell, Bash, or Ruby.
To create the hook, create a new file, ‘post-receive’, in the hooks sub-directory of the Git Server’s project repository. Add the below code to the file. Change the command to match your local file path. Also, change the API Token to match your user’s token from Jenkins. Note the command requires cURL to be installed on the Git Server. If installing cURL is not an option, there are other options available to execute the http post call from the hook’s script.
#!/bin/sh
# Call Jenkins to start build and pass environment parameter
#
echo "executing post-receive hook"
echo "environment=testing"
echo "user=jenkins"
# cURL POST request using jenkins user with API token
curl -u jenkins:{your-api-token-here} \
--data "delay=0sec&environment=testing" \
"{your-jenkins-server-url:port}/job/GitMavenGlassFish_Build/buildWithParameters"
NetBeans and Git Hooks
Now some slightly bad news. As with any integration, there are always trade-offs; that is the case with NetBeans and Git. Although NetBeans works well with Git, there are a few features that have not been implemented. Unfortunately, this lack of complete integration affects NetBeans' ability to make use of Git Hooks. Only after three hours of troubleshooting and research on the Internet did I realize this limitation. The hooks fire fine if a git push command is executed from a command prompt or from within a Git application like Git Gui or Git Bash. However, from NetBeans, the Team -> Remote -> Push… command does not cause the hooks to be called.
Git Hooks do not work with NetBeans because NetBeans does not use a command line client for Git. NetBeans uses a pure Java implementation of the Git client, known as JGit. I understand that other IDEs also share this limitation. There are several discussions on StackOverflow and on the NetBeans bug tracking site about the issue and workarounds.
So what does this mean? You can use NetBeans to perform all of your local tasks. However, when it comes time to push your code back to the remote Git Server repository, you must use a command prompt, Bash shell, or a command line based tool. I recommend Git Gui. Git ships with built-in GUI tools, including git-gui and gitk. It can be downloaded from git-scm.com.
Pushing changes to the remote Git Server using Git Gui instead of NetBeans may seem inconvenient at first. However, the more advanced your needs become with Git, the more you will find you need the additional functionality of Git Bash, Git Gui, and gitk. Tasks like resetting the branch to a previous revision, compressing the Git repository database, and visualizing repository history, can all be done with tools like Git Gui and gitk. I have Git Gui running when I am working in NetBeans or other IDEs; it becomes second nature.
Deploying to GlassFish
At this point we have configured the Git Server, created the Jenkins build and deploy jobs, and configured our Git hook. We are ready to test our deployment pipeline. First, make sure your GlassFish domains are running. Also, recall we are assuming that an initial deployment of the application has occurred. This might be directly through the GlassFish Administration Console, through NetBeans, or via the command line. Recall, Jenkins will be only be executing a re-deploy.
To test the system, make an innocuous change to the project. Commit the change to your local Git repository. Then, push the change back to the remote Git Server repository using Git Gui. If the hook fired, you will see output in the Git Gui terminal window, echoed from the post-receive hook as it executed its script.
The post-receive hook executes the cURL command, which posts an HTTP request to Jenkins via the Jenkins Remote API. You should then observe the Jenkins build job queued and running.
When the build completes, review the Parameters menu option in the left navigation menu. It shows that the environment parameter was passed from the post-receive hook to the build job. The build results window also provides test results, Git Build Data, and the changes pushed to Git that triggered the CI build.
The console output from the build provides a detailed view of the build process. Using the ‘-e’ (errors) switch with the Maven command increases the level of output detail. You see the details of Maven copying the required dependencies from the remote repository to the local workspace repository, prior to compilation. You see the unit tests being executed. Finally, you see the WAR file assembled and the required artifacts archived.
Regarding Maven dependencies, you will only see the dependencies copied on the first build to an empty workspace. Maven does not re-pull dependencies if they already exist in the workspace’s local repository. To see the difference, empty your workspace and build the job, then immediately rebuild the job. Compare the console outputs of both builds; you will see a significant difference in the Maven dependency activity.
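If you would rather not empty the workspace by hand, one way to force Maven to re-resolve the project’s dependencies is the maven-dependency-plugin’s purge goal; a sketch, run from the project’s root directory in the workspace:
# remove this project's dependencies from the local repository, then re-resolve them
mvn dependency:purge-local-repository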
Once the build job has completed successfully, you should notice the Jenkins deployment job running, triggered by the build job. When complete, note the detail that lists the exact build job that called the deployment job, and its build number. For example, the upstream build job #45 triggered the downstream deployment job #33. This linkage between upstream and downstream jobs is retained in the job’s history.
As before, review the Parameters menu option in the left navigation menu. It shows that the environment parameter was passed from the post-receive hook to the build job, and then on to the deployment job.
A review of the console output will confirm that the artifacts were copied from the build job and the WAR file was deployed to the ‘testing’ GlassFish domain.
GlassFish
If the hook fired, and both the Jenkins build and deployment jobs ran successfully, you should observe that the project’s WAR file, containing your recent change, was deployed to the testing GlassFish domain.
You can verify this by calling the application’s RESTful ‘resources/helloWorld’ URI from your browser. Repeat the process: change the output string, commit, and push. Then verify that your change was deployed.
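You can also check the service from a command prompt with cURL. This example is a sketch only: it assumes the ‘testing’ domain’s HTTP instance port (6061, from the asadmin commands shown in part 1) and uses a placeholder context root; substitute the actual host, port, and context root of your deployment.
curl http://{your-glassfish-host}:6061/{your-context-root}/resources/helloWorld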
Jenkins Workflows
Using our deployment pipeline, we have two distinct workflow options:
- Continuous – Use Git hooks to build, test, and deploy the WAR file to the domain(s) of choice when changes are pushed. Any time a change is pushed, a build, test, and deploy should occur. This would be just for development at first. Once the project enters the testing phase of the SDLC, it would include deployments to testing.
- Semi-Automated – Start the Jenkins build manually in the Jenkins browser-based Administration Console. This is more typical for a release to Production. Most teams are not comfortable extending continuous deployment into Production. Often, a deployment team will deploy the project artifacts in a controlled and staged approach. The Jenkins build and deployment jobs both allow this workflow, along with the ability to provide the environment parameter both jobs need.
Conclusion
In part 1, we learned how to create a simple Java EE web application project in NetBeans using Maven. We learned how to integrate JUnit for unit testing, and how to use Git to manage our source code.
In part 2, we learned how to configure a remote Git Server, how to configure Jenkins CI Server to clone our project from the Git Server, build, test, and assemble it. If the build was successful, we learned how to configure Jenkins to deploy our project to a specific GlassFish domain, based on the project’s stage in the SDLC. We achieved our goals of continuous integration, automated testing, and continuous deployment.
Going Forward
To extend and enhance our deployment pipeline, you might consider adding the following features: 1) further separate the Jenkins jobs by function, 2) add build and deploy notifications, 3) add the ability to deploy to multiple environments simultaneously (e.g., development and testing), 4) add additional testing to confirm the deployment to GlassFish, 5) configure a versioning and naming scheme for the deployed artifacts, and 6) add error handling if a parameter is not received or is not one of the expected values.
Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 1 of 2)
Posted by Gary A. Stafford in DevOps, Enterprise Software Development, Java Development, Software Development on November 4, 2013
Build an automated deployment pipeline for your Java EE applications using leading open-source technologies, including NetBeans, Git, Maven, JUnit, Jenkins, and GlassFish. All source code for this post is available on GitHub.
Introduction
In my earlier post, Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit, I demonstrated a basic deployment pipeline using leading open-source technologies. In this post, we will demonstrate a similar pipeline, substituting Jenkins CI Server for Hudson, and Oracle’s GlassFish Application Server for WebLogic Server. We will use the same NetBeans Java EE ‘Hello World’ RESTful Web Service sample project.
The three main goals of our deployment pipeline will be continuous integration, automated testing, and continuous deployment. Our objective is to automatically compile, test, assemble, and deploy our Java EE application to multiple environments, as the project progresses through the software development life cycle (SDLC).
Building a reliable deployment pipeline is complex and time-consuming. To make it as easy as possible in this post, I chose NetBeans IDE for development, Git Distributed Version Control System (DVCS) for managing our source code, Jenkins Continuous Integration (CI) Server for build automation, JUnit for automated unit testing, GlassFish for application hosting, and Apache Maven to manage our project’s dependencies. Maven will also manage the build and deployment process to GlassFish, along with Jenkins. The beauty of NetBeans is its out-of-the-box, built-in integration with Git, Maven, JUnit, and GlassFish. Likewise, Jenkins has plugin-based integration with Git, Maven, JUnit, and GlassFish. Also, Maven has plugin-based integration with GlassFish.
Maven is a powerful tool for managing modern software development projects. This post will only draw upon a small part of Maven’s functionality and its extensible plug-in architecture. Specifically, we will use the Maven GlassFish Plugin. According to the Java.net website, which hosts the plug-in project, ‘the Maven GlassFish Plugin is a Maven2 plugin allowing management of GlassFish domains and component deployments from within the Maven build life cycle.’
Requirements
To follow along with this post, I will assume you have recent versions of the following software installed and configured on your Windows OS-based computer (the process is nearly identical for Linux):
- NetBeans IDE. Current version: 7.4
- JUnit. Current version: 4.11 (included with NetBeans 7.4)
- GlassFish Server. Current version: 4.0 (included with NetBeans 7.4)
- Jenkins CI Server. Current version: 1.538
- Apache Maven. Current version: 3.1.1
- cURL. Current version: 7.33.0
- Git with Git Gui and gitk. Current version: 1.8.4.3
- Necessary system environment variables (an example of setting them on Windows follows this list):
M2_HOME, M2, JAVA_HOME, GLASSFISH_HOME, and PATH
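For example, on Windows, the variables might be set from a command prompt as shown below. All paths are examples only; substitute your own install locations, and make sure Maven’s bin directory and the JDK’s bin directory are appended to PATH.
rem example values only; substitute your own install locations
setx M2_HOME "C:\Program Files\apache-maven-3.1.1"
setx M2 "C:\Program Files\apache-maven-3.1.1\bin"
setx JAVA_HOME "C:\Program Files\Java\jdk1.7.0_45"
setx GLASSFISH_HOME "C:\Program Files\glassfish-4.0\glassfish"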
GlassFish Domains
To simulate a simple deployment pipeline, we will create three GlassFish domains, simulating three common software environments, Development, Testing, and Production. A typical software project is promoted through these environments as it moves from development, to testing, and finally release to production. Each environment has distinct stakeholders with specific roles to play in the software development life cycle, including developers, testers, deployment teams, and end-users. Larger-scale, enterprise software development often includes other environments, such as Performance and Staging.
Create the domains from the command line using ‘asadmin’ commands such as the ones below. Note I have a ‘GLASSFISH_HOME’ system environment variable set up. The ports are your choice, but make sure they don’t conflict with existing installations of other applications, such as Jenkins, Tomcat, IIS, WebLogic, and so forth.
asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 7070 --instanceport 7071 production
asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 6060 --instanceport 6061 testing
asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 5050 --instanceport 5051 development
As part of the creation process, you’re prompted for an admin account and a new password. I kept the ‘admin’ username, but added a new password for each domain created. This password is the same as the one used in the separate password files (explained below).
C:\Users\gstaffor>asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 7070 --instanceport 7071 production
Enter admin user name [Enter to accept default "admin" / no password]>admin
Enter the admin password [Enter to accept default of no password]>
Enter the admin password again>
Using port 7070 for Admin.
Using port 7071 for HTTP Instance.
Using default port 7676 for JMS.
Using default port 3700 for IIOP.
Using default port 8181 for HTTP_SSL.
Using default port 3820 for IIOP_SSL.
Using default port 3920 for IIOP_MUTUALAUTH.
Using default port 8686 for JMX_ADMIN.
Using default port 6666 for OSGI_SHELL.
Using default port 9009 for JAVA_DEBUGGER.
Distinguished Name of the self-signed X.509 Server Certificate is:
[CN={my_computer_name},OU=GlassFish,O=Oracle Corporation,L=Santa Clara,ST=California,C=US]
Distinguished Name of the self-signed X.509 Server Certificate is:
[CN={my_computer_name}-instance,OU=GlassFish,O=Oracle Corporation,L=Santa Clara,ST=California,C=US]
Domain production created.
Domain production admin port is 7070.
Domain production admin user is "admin".
Command create-domain executed successfully.
Add the GlassFish domains to NetBeans’ Services -> Servers tab, and start them.
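Alternatively, you can start the domains from the command line with asadmin:
asadmin start-domain --domaindir "%GLASSFISH_HOME%\domains" development
asadmin start-domain --domaindir "%GLASSFISH_HOME%\domains" testing
asadmin start-domain --domaindir "%GLASSFISH_HOME%\domains" production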
Setting Up the Project
To set up our NetBeans project, you can clone the repository on GitHub or build your own project from scratch and copy the files into the project. I will not spend a lot of time explaining the code since we have used it in earlier posts. This post is about the deployment pipeline system, not the project’s code.
If you choose to create a new project, first, create a new Maven ‘Project from Archetype’. Select the Archetype for a ‘web application using Java EE 7’ (webapp-javaee7).
I recommend you create the project inside of your local Git repository folder.
Maven will execute a series of commands to create the default NetBeans project with dependencies.
Git
As a part of a development team using Git, you place your project on a remote Git Server. You and your team members each clone the repository on the Git Server to your local development environments. You and your team commit your code changes locally, then pull, merge, and push your changes back to the Git Server. Jenkins will pull the project’s source code from the remote Git Server.
In part 2, we will properly set up our project on the Git Server, exporting our existing repository into a new, bare repository on the Git Server. However, for brevity in part 1 of this post, we will just create a local Git repository. To start, create a new Git repository for the project. In NetBeans, select Team -> Git -> Initialize Repository… Choose the new Maven project folder.
The initial view of the Maven project should look like the below screen grabs. Note the icons and the green files show that the project is part of the Git repository.
Perform an initial commit of the project to Git to make sure everything is working.
Next, copy the supplied HelloWorldResource.java and NameStorageBean.java classes into the project. The package classpath will be refactored by NetBeans. Copy all the remaining files and folders, including the (3) files in the WEB-INF folder, the properties folder with (3) properties files, and the passwords folder with (3) password files.
JUnit
Next, right-click on the NameStorageBean.java class and select Tools -> Create Tests. Replace the contents of the new NameStorageBeanTest.java file’s NameStorageBeanTest class with the contents of the supplied NameStorageBeanTest.java file. These are two very simple unit tests that will show how JUnit provides automated testing capabilities.
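The supplied NameStorageBeanTest.java is not reproduced here. Purely as a hypothetical illustration of the kind of simple test involved, assuming NameStorageBean exposes setName and getName accessors, a JUnit 4 test might resemble the following:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class NameStorageBeanTest {

    @Test
    public void testSetAndGetName() {
        // hypothetical: assumes NameStorageBean has setName/getName accessors
        NameStorageBean instance = new NameStorageBean();
        instance.setName("World");
        assertEquals("World", instance.getName());
    }
}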
Project Object Model (POM)
Copy the contents of the supplied pom file into the new pom file. There is a lot of configuration in the supplied pom; it will be easier to copy its contents into your project than to try to configure the pom from scratch.
Basically, beyond the normal boilerplate pom configuration, we have defined (3) properties, (3) dependencies, and (5) build plugins. The three dependencies are junit, jersey-servlet, and javaee-web-api. The five plugins are maven-compiler-plugin, maven-war-plugin, maven-dependency-plugin, properties-maven-plugin, and the maven-glassfish-plugin. Each plugin contains plugin-specific configuration. The names of the plugins should be sufficient to explain their primary purposes.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.blogpost</groupId>
    <artifactId>HelloGlassFishMaven</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>
    <name>HelloGlassFishMaven</name>
    <properties>
        <!-- Input Parameter - GlassFish properties file -->
        <glassfish.properties.file.argument></glassfish.properties.file.argument>
        <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>
        <dependency>
            <groupId>com.sun.jersey</groupId>
            <artifactId>jersey-servlet</artifactId>
            <version>1.13</version>
        </dependency>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-web-api</artifactId>
            <version>7.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <compilerArguments>
                        <endorseddirs>${endorsed.dir}</endorseddirs>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.3</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                    <filteringDeploymentDescriptors>true</filteringDeploymentDescriptors>
                    <webResources>
                        <resource>
                            <directory>${basedir}/src/main/webapp/WEB-INF</directory>
                            <filtering>true</filtering>
                            <targetPath>WEB-INF</targetPath>
                            <includes>
                                <include>**/glassfish-web.xml</include>
                            </includes>
                        </resource>
                    </webResources>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.6</version>
                <executions>
                    <execution>
                        <phase>validate</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${endorsed.dir}</outputDirectory>
                            <silent>true</silent>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>javax</groupId>
                                    <artifactId>javaee-endorsed-api</artifactId>
                                    <version>7.0</version>
                                    <type>jar</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>properties-maven-plugin</artifactId>
                <version>1.0-alpha-2</version>
                <configuration>
                    <files>
                        <file>${basedir}/properties/${glassfish.properties.file.argument}.properties</file>
                    </files>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.glassfish.maven.plugin</groupId>
                <artifactId>maven-glassfish-plugin</artifactId>
                <version>2.1</version>
                <configuration>
                    <glassfishDirectory>${GLASSFISH_HOME}</glassfishDirectory>
                    <user>${glassfish.user}</user>
                    <passwordFile>${basedir}/passwords/${glassfish.pwdfile}</passwordFile>
                    <echo>true</echo>
                    <debug>true</debug>
                    <terse>true</terse>
                    <domain>
                        <name>${glassfish.domain}</name>
                        <host>${glassfish.host}</host>
                        <adminPort>${glassfish.adminport}</adminPort>
                    </domain>
                    <components>
                        <component>
                            <name>${project.artifactId}</name>
                            <artifact>${project.build.directory}/${project.build.finalName}.war</artifact>
                        </component>
                    </components>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
When complete, right-click on the project and do a ‘Build with Dependencies…’. Make sure everything builds. The final view of the project, with all its Maven-managed dependencies should look like the two screen grabs shown below. Make sure to commit all your new code to Git.
Maven and Properties Files
In part 2, we will be deploying our project to multiple GlassFish domains. Each domain’s configuration is different. We will use Java properties files to store each GlassFish domain’s configuration properties. The ability to use Java properties files with Maven comes from the Mojo Project’s Properties Maven Plugin. I introduced this plugin in an earlier post, Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit.
Each environment (Development, Testing, Production), represented by a GlassFish domain, has a separate properties file in the project (see the Files Tab view above). The properties files contain the configuration values the Maven GlassFish Plugin will need to deploy the project’s WAR file to each GlassFish domain. Since the build and deployment configurations are required by the project, including them in our Git repository and automating their use based on the environment are two best practices.
# contents of all three files shown here

# development domain properties file
glassfish.domain=development
glassfish.host=glassfish4-app-server
glassfish.adminport=5050
glassfish.user=admin
glassfish.pwdfile=pwdfile_development

# testing domain properties file
glassfish.domain=testing
glassfish.host=glassfish4-app-server
glassfish.adminport=6060
glassfish.user=admin
glassfish.pwdfile=pwdfile_testing

# production domain properties file
glassfish.domain=production
glassfish.host=glassfish4-app-server
glassfish.adminport=7070
glassfish.user=admin
glassfish.pwdfile=pwdfile_production
In our project’s particular workflow, Maven accepts a single argument (‘glassfish.properties.file.argument’), which represents the environment we want to deploy to, such as ‘development’. The property value tells Maven which properties file to read, such as ‘development.properties’. Maven replaces the keys in the pom file with the values from the ‘development.properties’ file.
The properties file also tells Maven the name of the separate password file containing the admin user password, such as ‘pwdfile_development’. In an actual production environment, we would store encrypted password files on a secured file path. For simplicity in our example, we have included them, unencrypted, within the project’s main directory.
There are other Maven capabilities that also would achieve our deployment goals. For example, you might consider the Maven Release Plugin, as well as look at using Maven Build Profiles.
Testing the Pipeline
Although we have not built the second half of our deployment pipeline yet, we can still test the system at this early stage. All the necessary foundational elements are in place. To test our system, right-click on the Maven Project icon in the Projects tab and select Custom -> Goals… Enter the following Maven Goals: ‘properties:read-project-properties clean install glassfish:redeploy -e’. In the Properties text box, enter the following: ‘glassfish.properties.file.argument=testing’ (see screen grab below). This will execute a number of Maven Goals and associated commands, visible in the Output tab.
With this one simple command, we are asking Maven to 1) read in our Java properties file and password file, 2) clean the project, 3) pull down all our project’s dependencies, 4) compile the project’s code, 5) execute the unit tests with JUnit, 6) assemble the WAR file, and 7) deploy it to the ‘testing’ GlassFish domain using asadmin. The terse nature of the command really demonstrates the power of Maven to manage our project and the deployment pipeline!
If successful, you should see a message in the Output tab indicating as much. Reviewing the contents of the Output tab will give you complete insight into the Maven process running under the NetBeans hood. We used the ‘-e’ (errors) argument with Maven, and the ‘Show Debug Output’ option, to provide further detail about the process. The output contains all calls to Maven and, subsequently, to asadmin (GlassFish). You can learn a lot about Maven and asadmin by studying the debug output.
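As an aside, the same goals can be run outside of NetBeans, from a command prompt at the project’s root directory. This is an equivalent sketch, passing the property with Maven’s standard ‘-D’ flag in place of NetBeans’ Properties text box:
mvn properties:read-project-properties clean install glassfish:redeploy -e -Dglassfish.properties.file.argument=testing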
Conclusion
In the first part of this post, we learned how to create a simple Java EE web application project in NetBeans, using Maven. We learned how to integrate JUnit for automated testing, and how to use Git to manage our source code.
In the second half of this post, we will learn how to configure Jenkins CI Server to retrieve our project from the remote Git repository, build, test, and assemble it into a WAR file. If these steps are successful, Jenkins will deploy our project to a GlassFish domain or multiple domains, based on the project’s stage in the software development life cycle. We will demonstrate how to automate Jenkins to achieve true continuous integration and continuous deployment.
Spring Integration with Eclipse Using Maven
Posted by Gary A. Stafford in Enterprise Software Development, Java Development, Software Development on October 21, 2013
Integrate the Spring Framework into your next Eclipse-based project using Apache Maven. Learn how to install, configure, and integrate these three leading Java development tools. All source code for this post is available on GitHub.
Introduction
Although there is a growing adoption of Java EE 6 and CDI in recent years, Spring is still a well-entrenched, open-source framework for professional Java development. According to GoPivotal’s website, “The Spring Framework provides a comprehensive programming and configuration model for modern Java-based enterprise applications. Spring focuses on the ‘plumbing’ of enterprise applications so that teams can focus on application-level business logic, without unnecessary ties to specific deployment environments.”
Similar to Spring in terms of widespread adoption, Eclipse is a leading Java IDE, competing with Oracle’s NetBeans and JetBrains’ IntelliJ IDEA. The use of Spring within Eclipse is very common. In the following post, I will demonstrate the ease of integrating Spring with Eclipse, using Maven.
Maven is marketed as a project management tool, centralizing a project’s build, reporting, and documentation. Conveniently, Maven is tightly integrated with Eclipse. We will use Maven for one of its best-known features, dependency management. Maven will take care of downloading and managing the required Spring artifacts in our Eclipse-based project.
Note there are alternatives to integrating Spring into Eclipse, using Maven. You can download and add the Spring artifacts yourself, or go full-bore with GoPivotal’s Spring Tool Suite (STS). According to their website, STS is an Eclipse-based development environment, customized for developing Spring applications.
The steps covered in this post are as follows:
- Download and install Maven
- Download and install the Eclipse IDE
- Linking the installed version of Maven to Eclipse
- Creating a new sample Maven Project
- Adding Spring dependencies to the project
- Demonstrate a simple example of Spring Beans and ApplicationContext
- Modify the project to allow execution from an external command prompt
Installing Maven
Installing Maven is a simple process, requiring minimal configuration:
- Download the latest version of Maven from the Apache Maven Project website. At the time of this post, Maven is at version 3.1.1.
- Assuming you are using Windows, unzip the ‘apache-maven-3.1.1’ folder and place it in your ‘Program Files’ directory.
- Add the path to Maven’s bin directory to your system’s ‘PATH’ Environmental Variable.
We can test our Maven installation by opening a new Command Prompt and issuing the ‘mvn -version’ command. The command should display the installed version of Maven, Maven home directory, and other required variables, like your machine’s current version of Java and its location. To learn other Maven commands, try ‘mvn -help’.
Installing Eclipse IDE
Installing Eclipse is even easier:
- Download the latest version of Eclipse from The Eclipse Foundation website. There are several versions of Eclipse available. I chose ‘Eclipse IDE for Java EE Developers’, currently Kepler Service Release 1.
- Similar to Maven, unzip the ‘eclipse’ folder and place it in your ‘Program Files’ directory.
- For ease of access, I recommend pinning the main eclipse.exe file to your Start Menu.
Linking Maven to Eclipse
The latest version of Eclipse comes pre-loaded with the ‘M2E – Maven Integration for Eclipse’ plug-in. No additional software installs are required to use Maven from within Eclipse. Eclipse also includes an embedded runtime version of Maven (currently 3.0.4). According to the Eclipse wiki, the M2E plug-in uses the embedded runtime version of Maven when running the Maven builder, importing projects, and updating project configuration.
Although Eclipse contains an embedded version of Maven, we can configure M2E to use our own external Maven installation when launching Maven using the Run as… -> M2 Maven actions. To configure M2E to use the version of Maven we just installed:
- Go to Windows -> Preferences -> Maven -> Installations window. Note the embedded version of Maven is the only one listed and active.
- Click Add… and select the Maven folder we installed in your Program Files directory. Click OK.
- Check the box for the new installation we just added, instead of the embedded version. Click OK.
Sample Maven Project
To show how to integrate Spring into a project using Maven, we will create a Maven Project in Eclipse using the Maven Quickstart Archetype template. The basic project will show the use of Spring Beans and an ApplicationContext IoC container. On a scale of 1 to 10, with 10 being the most complex Spring example, this project is barely a 1! However, it will demonstrate that Spring is working in Eclipse, with minimal effort thanks to Maven.
To create the project:
- File -> New Project -> Other…
- Search on ‘maven’ in the Wizards text box and select ‘Maven Project’.
- Select the Maven Quickstart Archetype.
- Specify the Archetype parameters.
Spring Dependencies
Once the Maven Quickstart project is created, we will add the required Spring dependencies using Maven:
- Open the Maven Project Object Model (POM) file and select the Dependencies tab.
- Use The Central Repository website to find the dependency information for the spring-core and spring-context artifacts (jar files).
- Add… both Spring Dependencies to the pom.xml file.
- Right-click on the project and click Maven -> Update Project…
We now have a Maven-managed Eclipse project with our Spring dependencies included. Note the root of the file paths to the jar files in the Maven Dependencies project folder is the location of our Maven repository. This is where all the dependent artifacts (jar files) are stored. In my case, the root is ‘C:\Users\{user}\.m2\repository’. The repository location is stored in Eclipse’s Maven User Settings preferences (see below).
Project Object Model File (pom.xml):
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.blogpost.maven</groupId>
    <artifactId>maven-spring</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>maven-spring</name>
    <url>http://maven.apache.org</url>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>3.2.4.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>3.2.4.RELEASE</version>
        </dependency>
    </dependencies>
    <description>Project for blog post about the use of Spring with Eclipse and Maven.</description>
</project>
Sample Code
Next, add the supplied code to the project. We will add two new Java classes and a Spring configuration file, and replace the contents of the main App class with our sample code. The steps are as follows:
- Add the supplied Vehicle.java and MaintainVehicle.java class files to the project, in the same classpath as the App.java class.
- Add the supplied Beans.xml Spring configuration file to the project at the ‘src/main/java’ folder.
- Open the App.java class file and replace the contents with the supplied App.java class file.
The sample Spring application is based on vehicles. There are three Spring Beans defined in the XML-based Spring configuration file, representing three different vehicles. The main App class uses an ApplicationContext IoC container to instantiate three Vehicle POJOs from the Spring Beans defined in the Beans.xml Spring configuration. The main class then instantiates an instance of the MaintainVehicle class, passes in the Vehicle objects, and calls MaintainVehicle’s two methods.
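The supplied Vehicle.java and MaintainVehicle.java classes are not listed in this post. Purely as a hypothetical sketch, consistent with how the App class below uses them, they might resemble the following (all fields and method bodies are assumptions):
// Vehicle.java (hypothetical sketch; each public class belongs in its own file)
public class Vehicle {
    private String make, model, year, color, type;
    private boolean serviced, washed;

    // setters called by Spring when wiring the beans defined in Beans.xml
    public void setMake(String make) { this.make = make; }
    public void setModel(String model) { this.model = model; }
    public void setYear(String year) { this.year = year; }
    public void setColor(String color) { this.color = color; }
    public void setType(String type) { this.type = type; }

    public String getMake() { return make; }
    public String getType() { return type; }
    public boolean getServiced() { return serviced; }
    public void setServiced(boolean serviced) { this.serviced = serviced; }
    public boolean getWashed() { return washed; }
    public void setWashed(boolean washed) { this.washed = washed; }
    public String getShortDescription() { return make + " " + model; }
    public String getLongDescription() {
        return year + " " + color + " " + make + " " + model + " " + type;
    }
}

// MaintainVehicle.java (hypothetical sketch)
public class MaintainVehicle {
    public void serviceVehicle(Vehicle vehicle) { vehicle.setServiced(true); }
    public void washVehicle(Vehicle vehicle) { vehicle.setWashed(true); }
}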
Spring Configuration File (Beans.xml):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
    <bean id="vehicle1" class="com.blogpost.maven.maven_spring.Vehicle">
        <property name="make" value="Mercedes-Benz" />
        <property name="model" value="ML550" />
        <property name="year" value="2010" />
        <property name="color" value="Silver" />
        <property name="type" value="SUV" />
    </bean>
    <bean id="vehicle2" class="com.blogpost.maven.maven_spring.Vehicle">
        <property name="make" value="Jaguar" />
        <property name="model" value="F-Type" />
        <property name="year" value="2013" />
        <property name="color" value="Red" />
        <property name="type" value="Convertible" />
    </bean>
    <bean id="vehicle3" class="com.blogpost.maven.maven_spring.Vehicle">
        <property name="make" value="Suzuki" />
        <property name="model" value="SVF 650" />
        <property name="year" value="2012" />
        <property name="color" value="Black" />
        <property name="type" value="Motorcycle" />
    </bean>
</beans>
Main Method Class (App.java)
package com.blogpost.maven.maven_spring;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class App {
    public static void main(String[] args) {
        @SuppressWarnings("resource")
        ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");
        MaintainVehicle maintain = new MaintainVehicle();

        // vehicle1 bean
        Vehicle obj1 = (Vehicle) context.getBean("vehicle1");
        System.out.printf("I drive a %s.\n", obj1.getLongDescription());
        System.out.printf("Is my %s tuned up? %s\n",
                obj1.getShortDescription(), obj1.getServiced());
        maintain.serviceVehicle(obj1);
        System.out.printf("Is my %s tuned up, yet? %s\n\n", obj1.getMake(),
                obj1.getServiced());

        // vehicle2 bean
        Vehicle obj2 = (Vehicle) context.getBean("vehicle2");
        System.out.printf("My wife drives a %s.\n", obj2.getLongDescription());
        System.out.printf("Is her %s clean? %s\n", obj2.getShortDescription(),
                obj2.getWashed());
        maintain.washVehicle(obj2);
        System.out.printf("Is her %s clean, now? %s\n\n", obj2.getMake(),
                obj2.getWashed());

        // vehicle3 bean
        Vehicle obj3 = (Vehicle) context.getBean("vehicle3");
        System.out.printf("Our son drives his %s too fast!\n", obj3.getType()
                .toLowerCase());
    }
}
Running the Application
If successful, the application will output a series of messages to the Console. The first few messages, in red, are Spring-related messages, signifying Spring is working. The next messages, in black, are output by the application. The messages show that the three Spring Beans were successfully instantiated and passed to the MaintainVehicle object, where its methods were called. If only the application would buy me that silver Mercedes!
Running the Application from a Command Prompt
All the source code for this project is available on GitHub. Note the pom.xml contains some extra configuration information not shown above. The extra configuration is not necessary for running the application from within Eclipse. However, if you want to run the application from an external Command Prompt, you will need it. The extra configuration ensures that the project is correctly packaged into a jar file, with all the necessary dependencies to run. It includes an additional logging dependency, a resource reference to the Spring configuration file, one additional property, and three Maven plug-in references for compiling and packaging the jar.
To run the java application from an external Command Prompt:
- Open a new Command Prompt
- Change current directory to the project’s root directory (local GitHub repository in my case)
- Run a ‘mvn compile’ command
- Run a ‘mvn package’ command (downloads dependencies and creates jar)
- Change the current directory to the project’s target sub-directory
- Run a ‘dir’ command. You should see the project’s jar file
- Run a ‘java -jar {name-of-jar-file.jar}’ command.
You should see the same messages that were output in Eclipse, earlier.
Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit
Posted by Gary A. Stafford in Build Automation, DevOps, Enterprise Software Development, Java Development, Software Development on May 31, 2013
Build an automated testing, continuous integration, and continuous deployment system, using Maven, Hudson, WebLogic Server, JUnit, and NetBeans. Developed with Oracle’s Pre-Built Enterprise Java Development VM. Download the complete source code from Dropbox and on GitHub.
Introduction
In this post, we will build a basic automated testing, continuous integration, and continuous deployment system, using Oracle’s Pre-Built Enterprise Java Development VM. The primary goal of the system is to automatically compile, test, and deploy a simple Java EE web application to a test environment. As this post will demonstrate, the key to a successful system is not a single application, but the effective integration of all the system’s applications into a well-coordinated and consistent workflow.
Building a system such as this can be complex and time-consuming. However, Oracle’s Pre-Built Enterprise Java Development VM already has all the components we need. The Oracle VM includes NetBeans IDE for development, Apache Subversion for version control, Hudson Continuous Integration (CI) Server for build automation, JUnit and Hudson for unit test automation, and WebLogic Server for application hosting.
In addition, we will use Apache Maven, also included on the Oracle VM, to help manage our project’s dependencies, as well as the build and deployment process. Overlapping with some of Apache Ant’s build-task functionality, Maven is a powerful, cross-cutting tool for managing modern software development projects. This post will only draw upon a small part of Maven’s functionality.
Demonstration Requirements
To save some time, we will use the same WebLogic Server (WLS) domain we built in the last post, Deploying Applications to WebLogic Server on Oracle’s Pre-Built Development VM. We will also use code from the sample Hello World Java EE web project from that post. If you haven’t already done so, work through the last post’s example, first.
Here is a quick list of requirements for this demonstration:
- Oracle VM
- Oracle’s Pre-Built Enterprise Java Development VM running on current version of Oracle VM VirtualBox (mine: 4.2.12)
- Oracle VM has the latest system updates installed (see earlier post for directions)
- WLS domain from last post created and running in Oracle VM
- Credentials supplied with Oracle VM for Hudson (username and password)
- Window’s Development Machine
- Current version of Apache Maven installed and configured (mine: 3.0.5)
- Current version of NetBeans IDE installed and configured (mine: 7.3)
- Optional: Current version of WebLogic Server installed and configured
- All environmental variables properly configured for Maven, Java, WLS, etc. (MW_HOME, M2, etc.)
The Process
The steps involved in this post’s demonstration are as follows:
- Install the WebLogic Maven Plugin into the Oracle VM’s Maven Repositories, as well as the Development machine
- Create a new Maven Web Application Project in NetBeans
- Copy the classes from the Hello World project in the last post to new project
- Create a properties file to store Maven configuration values for the project
- Add the Maven Properties Plugin to the Project’s POM file
- Add the WebLogic Maven Plugin to project’s POM file
- Add JUnit tests and JUnit dependencies to project
- Add a WebLogic Descriptor to the project
- Enable Tunneling on the new WLS domain from the last post
- Build, test, and deploy the project locally in NetBeans
- Add project to Subversion
- Optional: Upgrade the existing Hudson 2.2.0 and plugins on the Oracle VM to the latest 3.x version
- Create and configure new Hudson CI job for the project
- Build the Hudson job to compile, test, and deploy project to WLS
WebLogic Maven Plugin
First, we need to install the WebLogic Maven Plugin (‘weblogic-maven-plugin’) into both the Development machine’s local Maven repository and the Oracle VM’s Maven repository. Installing the plugin will allow us to deploy our sample application from NetBeans and Hudson, using Maven. The weblogic-maven-plugin, a JAR file, is not part of the Maven repository by default. According to Oracle, ‘WebLogic Server provides support for Maven through the provisioning of plug-ins that enable you to perform various operations on WebLogic Server from within a Maven environment. As of this release, there are two separate plug-ins available.’ In this post, we will use the weblogic-maven-plugin, as opposed to the wls-maven-plugin. Again, according to Oracle, the weblogic-maven-plugin, ‘delivered in WebLogic Server 11g Release 1, provides support for deployment operations.’
The best way to understand the plugin install process is by reading the Using the WebLogic Development Maven Plug-In section of the Oracle Fusion Middleware documentation on Developing Applications for Oracle WebLogic Server. It goes into detail on how to install and configure the plugin.
In a nutshell, below is a list of the commands I executed to install the weblogic-maven-plugin version 12.1.1.0 on both my Windows development machine and on my Oracle VM. If you do not have WebLogic Server installed on your development machine, and therefore no access to the plugin, install it into the Maven Repository on the Oracle VM first, then copy the jar file to the development machine and follow the normal install process from that point forward.
On Windows Development Machine:
cd %MW_HOME%/wlserver/server/lib
java -jar wljarbuilder.jar -profile weblogic-maven-plugin
mkdir c:\tmp
copy weblogic-maven-plugin.jar c:\tmp
cd c:\tmp
jar xvf c:\tmp\weblogic-maven-plugin.jar META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml
mvn install:install-file -DpomFile=META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml -Dfile=c:\tmp\weblogic-maven-plugin.jar
On the Oracle VM:
cd $MW_HOME/wlserver_12.1/server/lib
java -jar wljarbuilder.jar -profile weblogic-maven-plugin
mkdir /home/oracle/tmp
cp weblogic-maven-plugin.jar /home/oracle/tmp
cd /home/oracle/tmp
jar xvf weblogic-maven-plugin.jar META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml
mvn install:install-file -DpomFile=META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml -Dfile=weblogic-maven-plugin.jar
To test the success of your plugin installation, you can run the following maven command on Windows or Linux:
mvn help:describe -Dplugin=com.oracle.weblogic:weblogic-maven-plugin
Sample Maven Web Application
Using NetBeans on your development machine, create a new Maven Web Application. For those of you familiar with Maven, the NetBeans Maven Web Application project is based on the ‘webapp-javaee6:1.5’ Archetype. NetBeans creates the project by executing an ‘archetype:generate’ Maven goal. This is seen in the ‘Output’ tab after the project is created.
By default, you may have Tomcat and GlassFish as installed options on your system. Unfortunately, as I understand it, NetBeans currently cannot configure a remote connection to the WLS instance running on the Oracle VM. You do not need an instance of WLS installed on your development machine, since we are going to use the copy on the Oracle VM. We will use Maven to deploy the project to WLS on the Oracle VM, later in the post.
Next, copy the two Java class files from the previous blog post’s Hello World project to the new project’s source package. Alternately, download a zipped copy of this post’s complete sample code from Dropbox or GitHub.
Because we are copying a RESTful web service to our new project, NetBeans will prompt us for some REST resource configuration options. To keep this new example simple, choose the first option and uncheck the Jersey option.
JUnit Tests
Next, create a set of JUnit tests for each class by right-clicking on both classes and selecting ‘Tools’ -> ‘Create Tests’.
We will use the test classes and dependencies NetBeans just added to the project. However, we will not use the actual JUnit tests that NetBeans created. Properly setting up the default JUnit tests to work with an embedded version of WLS is well beyond the scope of this post.
Overwrite the contents of the test class files with the code provided from Dropbox. I have replaced the default JUnit tests with simpler versions for this demonstration. Build the project to make sure all the JUnit tests pass.
Project Properties
Next, add a new Properties file to the project, entitled ‘maven.properties’.
Add the following key/value pairs to the properties file. These key/value pairs will be referenced in the POM.xml by the weblogic-maven-plugin, added in the next step. Placing the configuration values into a properties file is not necessary for this post. However, if you wish to deploy to multiple environments, moving environment-specific configurations into separate properties files, using Maven Build Profiles, and/or using frameworks such as Spring are all best practices.
Java Properties File (maven.properties):
# weblogic-maven-plugin configuration values for Oracle VM environment
wls.adminurl=t3://192.168.1.88:7031
wls.user=weblogic
wls.password=welcome1
wls.upload=true
wls.remote=false
wls.verbose=true
wls.middlewareHome=/labs/wls1211
wls.name=HelloWorldMaven
Maven Plugins and the POM File
Next, add the WLS Maven Plugin (‘weblogic-maven-plugin’) and the Maven Properties Plugin (‘properties-maven-plugin’) to the end of the project’s Maven POM.xml file. The Maven Properties Plugin, part of the Mojo Project, allows us to substitute configuration values in the Maven POM file from a properties file. According to codehaus.org, which hosts the Mojo Project, ‘It’s main use-case is loading properties from files instead of declaring them in pom.xml, something that comes in handy when dealing with different environments.’
Project Object Model File (pom.xml):
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.blogpost</groupId>
    <artifactId>HelloWorldMaven</artifactId>
    <version>1.0</version>
    <packaging>war</packaging>
    <name>HelloWorldMaven</name>
    <properties>
        <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-web-api</artifactId>
            <version>6.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                    <compilerArguments>
                        <endorseddirs>${endorsed.dir}</endorseddirs>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.8</version>
                <executions>
                    <execution>
                        <phase>validate</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${endorsed.dir}</outputDirectory>
                            <silent>true</silent>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>javax</groupId>
                                    <artifactId>javaee-endorsed-api</artifactId>
                                    <version>6.0</version>
                                    <type>jar</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>properties-maven-plugin</artifactId>
                <version>1.0-alpha-2</version>
                <executions>
                    <execution>
                        <phase>initialize</phase>
                        <goals>
                            <goal>read-project-properties</goal>
                        </goals>
                        <configuration>
                            <files>
                                <file>maven_wls_local.properties</file>
                            </files>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>com.oracle.weblogic</groupId>
                <artifactId>weblogic-maven-plugin</artifactId>
                <version>12.1.1.0</version>
                <configuration>
                    <adminurl>${wls.adminurl}</adminurl>
                    <user>${wls.user}</user>
                    <password>${wls.password}</password>
                    <upload>${wls.upload}</upload>
                    <action>deploy</action>
                    <remote>${wls.remote}</remote>
                    <verbose>${wls.verbose}</verbose>
                    <source>${project.build.directory}/${project.build.finalName}.${project.packaging}</source>
                    <name>${wls.name}</name>
                </configuration>
                <executions>
                    <execution>
                        <phase>install</phase>
                        <goals>
                            <goal>deploy</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
WebLogic Deployment Descriptor
A WebLogic Deployment Descriptor file is the last item we need to add to the new Maven Web Application project. NetBeans has descriptors for multiple servers, including Tomcat (context.xml), GlassFish (application.xml), and WebLogic (weblogic.xml). They provide a convenient location to store specific server properties, used during the deployment of the project.
Add the ‘context-root’ tag. The value will be the name of our project, ‘HelloWorldMaven’, as shown below. According to Oracle, “the context-root element defines the context root of this standalone Web application.” The context-root of the application will form part of the URL we enter to display our application, later.
Make sure the WebLogic descriptor file (‘weblogic.xml’) is placed in the WEB-INF folder. If not, the descriptor’s properties will not be read. If the descriptor is not read, the context-root of the deployed application will default to the project’s WAR file name. Instead of ‘HelloWorldMaven’ as the context-root, you would see ‘HelloWorldMaven-1.0-SNAPSHOT’.
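A minimal weblogic.xml for this project might look like the following sketch. The namespace shown is typical for WebLogic 12c; verify it against your server version:
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <!-- forms part of the application's URL, e.g. http://host:port/HelloWorldMaven/... -->
    <context-root>HelloWorldMaven</context-root>
</weblogic-web-app>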
Enable Tunneling
Before we compile, test, and deploy our project, we need to make a small change to WLS. In order to deploy our project remotely to the Oracle VM’s WLS, using the WebLogic Maven Plugin, we must enable tunneling on our WLS domain. According to Oracle, the ‘Enable Tunneling’ option “Specifies whether tunneling for the T3, T3S, HTTP, HTTPS, IIOP, and IIOPS protocols should be enabled for this server.” To enable tunneling, from the WLS Administration Console, select the ‘AdminServer’ Server, ‘Protocols’ tab, ‘General’ sub-tab.
Build and Test the Project
Right-click and select ‘Build’, ‘Clean and Build’, or ‘Build with Dependencies’. NetBeans executes a ‘mvn install’ command. This command initiates a series of Maven Goals. The goals, visible NetBean’s Output window, include ‘dependency:copy’, ‘properties:read-project-properties’, ‘compiler:compile’, ‘surefire:test’, and so forth. They move the project’s code through the Maven Build Lifecycle. Most goals are self-explanatory by their title.
The last Maven goal to execute, if the other goals have succeeded, is the ‘weblogic:deploy’ goal. This goal deploys the project to the Oracle VM’s WLS domain we configured in our project. Recall in the POM file, we bound the weblogic-maven-plugin’s ‘deploy’ goal to the ‘install’ phase (the plugin’s execution phase). If all goals complete without error, you have just compiled, tested, and deployed your first Maven web application to a remote WLS domain. Later, we will have Hudson do it for us, automatically.
Executing Maven Goals in NetBeans
A small aside: if you wish to run alternate Maven goals in NetBeans, right-click on the project and select ‘Custom’ -> ‘Goals…’. Alternately, click on the light green arrows (‘Re-run with different parameters’), adjacent to the ‘Output’ tab.
For example, in the ‘Run Maven’ pop-up, replace ‘install’ with ‘surefire:test’ or simply ‘test’. This will compile the project and run the JUnit tests. There are many Maven goals that can be ran this way. Use the Control key and Space Bar key combination in the Maven Goals text box to display a pop-up list of available goals.
Subversion
Now that our project is complete and tested, we will commit the project to Subversion (SVN). We will commit a copy of our source code to SVN, installed on the Oracle VM, for safe-keeping. Having our source code in SVN also allows Hudson to retrieve a copy. Hudson will then compile, test, and deploy the project to WLS.
The Repository URL, User, and Password are all supplied in the Oracle VM information, along with the other URLs and credentials.
When you import your project for the first time, you will see more files than are displayed below. I had already imported part of the project earlier, while creating this post; therefore, most of my files were already managed by Subversion.
Upgrading Hudson CI Server
The Oracle VM comes with Hudson pre-installed in its own WLS domain, ‘hudson-ci_dev’, running on port 5001. Start the domain from within the VM by double-clicking the ‘WLS 12c – Hudson CI 5001’ icon on the desktop, or by executing the domain’s WLS start-up script from a terminal window:
/labs/wls1211/user_projects/domains/hudson-ci_dev/startWebLogic.sh
Once started, the WLS Administration Console 12c is accessible at the following URL. Use your VM’s IP address, or ‘localhost’ if you are working within the VM.
http://[your_vm_ip_address]:5001/console/login/LoginForm.jsp
The Oracle VM comes loaded with Hudson version 2.2.0. I strongly suggest updating Hudson to the latest version (3.0.1 at the time of this post). To upgrade, I downloaded, deployed, and started a new 3.0.1 version in the same domain, on the same ‘AdminServer’ server. I was able to do this remotely, from my development machine, using the browser-based Hudson Dashboard and WLS Administration Console. There is no need to do any of the installation from within the VM itself.
When the upgrade is complete, stop the 2.2.0 deployment currently running in the WLS domain.
The new version of Hudson is accessible from the following URL (adjust the URL for your exact version of Hudson):
http://[your_vm_ip_address]:5001/hudson-3.0.1/
It’s also important to update all the Hudson plugins. Hudson makes this easy with the Hudson Plugin Manager, accessible via the ‘Manage Hudson’ option.
Note, at the top of the Manage Hudson page, there is a warning about the server’s container not using UTF-8 to decode URLs. You can follow this post if you want to resolve the issue by configuring Hudson differently; I did not worry about it for this post.
Building a Hudson Job
We are ready to configure Hudson to build, test, and deploy our Maven Web Application project. Return to the ‘Hudson Dashboard’, select ‘New Job’, and then ‘Build a new free-style software job’. This will open the ‘Job Configurations’ for the new job.
Start by configuring the ‘Source Code Management’ section. The Subversion Repository URL is the same as the one you used in NetBeans to commit the code. To avoid the access error seen below, you must provide the Subversion credentials to Hudson, just as you did in NetBeans.
Next, configure the Maven 3 goals. I chose the ‘clean’ and ‘install’ goals. Using ‘clean’ ensures the project is compiled each time, by deleting the output of the build directory.
Optionally, you can configure Hudson to publish the JUnit test results as shown below. Be sure to save your configuration.
Start a build of the new Hudson Job, by clicking ‘Build Now’. If your Hudson job’s configurations are correct, and the new WLS domain is running, you should have a clean build. This means the project compiled without error, all tests passed, and the web application’s WAR file was deployed successfully to the new WLS domain within the Oracle VM.
WebLogic Server
To view the newly deployed Maven Web Application, log into the WebLogic Server Administration Console for the new domain. In my case, the new domain was running on port 7031, so the URL would be:
http://[your_vm_ip_address]:7031/console/login/LoginForm.jsp
You should see the deployment, in an ‘Active’ state, as shown below.
To test the deployment, open a new browser tab and go to the URL of the Servlet. In this case the URL would be:
http://[your_vm_ip_address]:7031/HelloWorldMaven/resources/helloWorld
You should see the original phrase from the previous project displayed, ‘Hello WebLogic Server!’.
To further test the system, make a simple change to the project in NetBeans. I changed the name variable’s default value from ‘WebLogic Server’ to ‘Hudson, Maven, and WLS’. Commit the change to SVN.
Return to Hudson and run a new build of the job.
After the build completes, refresh the sample Web Application’s browser window. You should see the new text string displayed. Your code change was just re-compiled, re-tested, and re-deployed by Hudson.
True Continuous Deployment
Although Hudson is now doing a lot of the work for us, the system still is not fully automated. We are still manually building our Hudson Job, in order to deploy our application. If you want true continuous integration and deployment, you need to trust the system to automatically deploy the project, based on certain criteria.
SCM polling with Hudson is one way to implement continuous deployment. In ‘Job Configurations’, turn on ‘Poll SCM’ and enter Unix cron-like value(s) in the ‘Schedule’ text box. In the example below, I have indicated a polling frequency of every hour (‘@hourly’). Every hour, Hudson will look for committed changes to the project in Subversion. If changes are found, Hudson retrieves the source code, compiles, and tests. If the project compiles and passes all tests, it is deployed to WLS.
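Other cron-style values work as well. For example, using standard five-field cron syntax:
# poll Subversion every 15 minutes
*/15 * * * *
# poll every weekday at 6 pm
0 18 * * 1-5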
There are less resource-intensive methods to react to changes than SCM polling. Push notifications from the repository are an alternate, preferable method.
Additionally, you should configure messaging in Hudson to notify team members of new deployments and the changes they contain. You should also implement a good deployment versioning strategy, for tracking purposes. Knowing the version of deployed artifacts is critical for accurate change management and defect tracking.
Helpful Links
Configuring and Using the WebLogic Maven Plug-In for Deployment
Jenkins: Building a Software Project
Kohsuke Kawaguchi: Polling must die: triggering Jenkins builds from a git hook