Posts Tagged continuous deployment
Deploying Spring Boot Apps to AWS with Netflix Nebula and Spinnaker: Part 2 of 2
Posted by Gary A. Stafford in AWS, Build Automation, Cloud, Continuous Delivery, DevOps, Enterprise Software Development, Java Development, Software Development on May 12, 2018
In Part One of this post, we examined enterprise deployment tools and introduced two of Netflix’s open-source deployment tools, the Nebula Gradle plugins, and Spinnaker. In Part Two, we will deploy a production-ready Spring Boot application, the Election microservice, to multiple Amazon EC2 instances, behind an Elastic Load Balancer (ELB). We will use a fully automated DevOps workflow. The build, test, package, bake, deploy process will be handled by the Netflix Nebula Gradle Linux Packaging Plugin, Jenkins, and Spinnaker. The high-level process will involve the following steps:
- Configure Gradle to build a production-ready fully executable application for Unix systems (executable JAR)
- Using deb-s3 and GPG Suite, create a secure, signed APT (Debian) repository on Amazon S3
- Using Jenkins and the Netflix Nebula plugin, build a Debian package, containing the executable JAR and configuration files
- Using Jenkins and deb-s3, publish the package to the S3-based APT repository
- Using Spinnaker (HashiCorp Packer under the covers), bake an Ubuntu Amazon Machine Image (AMI), replete with the executable JAR installed from the Debian package
- Deploy an auto-scaling set of Amazon EC2 instances from the baked AMI, behind an ELB, running the Spring Boot application using both the Red/Black and Highlander deployment strategies
- Be able to repeat the entire automated build, test, package, bake, deploy process, triggered by a new code push to GitHub
The overall build, test, package, bake, deploy process will look as follows.
DevOps Architecture
Spinnaker’s modern architecture is comprised of several independent microservices. The codebase is written in Java and Groovy. It leverages the Spring Boot framework¹. Spinnaker’s configuration, startup, updates, and rollbacks are centrally managed by Halyard. Halyard provides a single point of contact for command line interaction with Spinnaker’s microservices.
Spinnaker can be installed on most private or public infrastructure, either containerized or virtualized. Spinnaker links to a number of Quickstart installations on its website. For this demonstration, I deployed and configured Spinnaker on Azure, starting with one of the Azure Spinnaker quick-start ARM templates. The template provisions all the necessary Azure resources. For better performance, I chose to upgrade the default VM to a larger Standard D4 v3, which has 4 vCPUs and 16 GB of memory. I would recommend at least 2 vCPUs and 8 GB of memory for Spinnaker.
Another Azure VM, in the same virtual network as the Spinnaker VM, already hosts Jenkins, SonarQube, and Nexus Repository OSS.
From Spinnaker on Azure, Debian Packages are uploaded to the APT package repository on AWS S3. Spinnaker also bakes Amazon Machine Images (AMI) on AWS. Spinnaker provisions the AWS resources, including EC2 instances, Load Balancers, Auto Scaling Groups, Launch Configurations, and Security Groups. The only resources you need on AWS to get started with Spinnaker are a VPC and Subnets. There are some minor, yet critical prerequisites for naming your VPC and Subnets.
Other external tools include GitHub for source control and Slack for notifications. I have built and managed everything from a Mac, however, all tools are platform agnostic. The Spring Boot application was developed in JetBrains IntelliJ.
Source Code
All source code for this post can be found on GitHub. The project’s README file contains a list of the Election service’s endpoints.
Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.
APT Repository
After setting up Spinnaker on Azure, I created an APT repository on Amazon S3, using the instructions provided by Netflix, in their Code Lab, An Introduction to Spinnaker: Hello Deployment. The setup involves creating an Amazon S3 bucket to serve as an APT (Debian) repository, creating a GPG key for signing, and using deb-s3 to manage the repository. The Code Lab also uses Aptly, a great tool, which I skipped for brevity.
GPG Key
On the Mac, I used GPG Suite to create a GPG (GNU Privacy Guard or GnuPG) automatic signing key for my APT repository. The key is required by Spinnaker to verify the Debian packages in the repository, before installation.
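If you prefer the command line to GPG Suite, the same key can be generated and its public half exported with stock GnuPG. The following is only a rough sketch; the email address and bucket name are placeholders, although the apt/doc/apt-key.gpg location matches the key URL referenced later in the base AMI script.

# generate a new signing key interactively
gpg --gen-key

# list keys to find the new key's ID (used later as GPG_KEY_ID)
gpg --list-keys --keyid-format LONG

# export the public key, so APT clients and Spinnaker can verify signed packages
gpg --export --armor you@example.com > apt-key.gpg

# publish the public key alongside the repository (bucket name is a placeholder)
aws s3 cp apt-key.gpg s3://your-spinnaker-repo/apt/doc/apt-key.gpg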
The Ruby Gem, deb-s3, makes management of the Debian packages easy and automatable with Jenkins. Jenkins uploads the Debian packages, using a deb-s3 command, such as the following (gist). In this post, Jenkins calls the command from the shell script, upload-deb-package.sh, which is included in the GitHub project.
deb-s3 upload \
  --bucket garystafford-spinnaker-repo \
  --access-key-id=$AWS_ACCESS_KEY_ID \
  --secret-access-key=$AWS_SECRET_ACCESS_KEY \
  --arch=amd64 \
  --codename=trusty \
  --component=main \
  --visibility=public \
  --sign=$GPG_KEY_ID \
  build/distributions/*.deb
The Jenkins user requires access to the signing key, to sign and upload the Debian packages. I created my GPG key on my Mac, securely copied the key to my Ubuntu-based Jenkins VM, and then imported the key for the Jenkins user. You could also create your key directly on Ubuntu. Make sure you back up your private key in a secure location!
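For reference, moving the key between machines can be done with a few GnuPG commands. This is a hedged sketch, not the exact steps from the post; the key ID and host name are placeholders, and the exported file should be securely deleted afterward.

# on the machine where the key was created, export the private signing key
gpg --export-secret-keys --armor {your-gpg-key-id} > signing-key.asc

# copy it to the Jenkins VM over SSH
scp signing-key.asc ubuntu@your-jenkins-host:/tmp/signing-key.asc

# on the Jenkins VM, import the key into the jenkins user's keyring, then remove the file
sudo -u jenkins -H gpg --import /tmp/signing-key.asc
shred -u /tmp/signing-key.asc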
Nebula Packaging Plugin
Next, I set up a Gradle task in my build.gradle file to build the Debian packages, using the Netflix Nebula Gradle Linux Packaging Plugin. Although Debian packaging tasks can become complex for larger application installations, the task for this post is fairly simple. I followed many of the best practices suggested by Spring for production-grade deployments. The best-practices guide recommends file locations, file modes, and file user and group ownership. I create the JAR as a fully executable JAR, meaning it is started like any other executable and does not have to be launched with the standard java -jar command.
In the task, shown below (gist), the JAR and the optional external configuration file are copied to specific locations during the deployment and symlinked, as required. I used the older SysVinit system (init.d) to enable the application to start automatically on boot. You should probably use systemd (systemctl) for your services with Ubuntu 16.04.
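Purely as an illustration of that systemd alternative, which is not used in this post, a minimal unit file might look like the sketch below. The service name, user, and JAR path are assumptions that mirror the layout created by the packDeb task shown next; a fully executable Spring Boot JAR can be referenced directly as ExecStart.

# write a minimal unit file for the fully executable JAR (paths are assumptions)
sudo tee /etc/systemd/system/election.service > /dev/null <<'EOF'
[Unit]
Description=Election Spring Boot service
After=network.target

[Service]
User=springapp
ExecStart=/opt/spring-postgresql-demo/lib/spring-postgresql-demo-4.5.0.jar
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
EOF

# register and enable the service to start on boot
sudo systemctl daemon-reload
sudo systemctl enable election.service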
task packDeb(type: Deb) {
  description 'Creates .deb package.'
  into '/opt/' + project.name // root directory
  from(jar.outputs.files) { // copy *.jar
    into 'lib'
    fileMode 0500
    user 'springapp'
    permissionGroup 'springapp'
  }
  from('build/resources/main/' + project.name + '.conf') { // copy .conf
    into 'conf'
    fileMode 0400
    user 'root'
    permissionGroup 'root'
  }
  // symlink the jar to init.d as the service script
  link('/etc/init.d/election',
    '/opt/' + project.name + '/lib/' + jar.archiveName)
  // symlink the .conf file alongside the jar (Spring Boot looks for <jarname>.conf next to the jar)
  link('/opt/' + project.name + '/lib/' + project.name + '-' + project.version + '.conf',
    '/opt/' + project.name + '/conf/' + project.name + '.conf')
  // symlink init.d to rc2.d, so the service starts on boot
  link('/etc/rc2.d/S02election', '/etc/init.d/election')
  postInstall 'chattr +i ' + '/opt/' + project.name + '/lib/' + jar.archiveName
}
You can use the ar (archive) command (i.e., ar -x spring-postgresql-demo_4.5.0_all.deb) to extract and inspect the structure of a Debian package. The data.tar.gz file, displayed below in Atom, shows the final package structure.
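On a Debian or Ubuntu machine, that inspection can be scripted as follows; the package file name matches the example above.

# extract the package members: debian-binary, control.tar.gz, and data.tar.gz
ar -x spring-postgresql-demo_4.5.0_all.deb

# list the payload's file tree without installing anything
tar -tzf data.tar.gz

# alternatively, dpkg-deb prints the same listing in a single step
dpkg-deb --contents spring-postgresql-demo_4.5.0_all.deb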
Base AMI
Next, I baked a base AMI for Spinnaker to use. This base AMI is used by Spinnaker to bake (re-bake) the final AMI(s) used for provisioning the EC2 instances, containing the Spring Boot Application. The Spinnaker base AMI is built from another base AMI, the official Ubuntu 16.04 LTS image. I installed the OpenJDK 8 package on the AMI, which is required to run the Java-based Election service. Lastly and critically, I added information about the location of my S3-based APT Debian package repository to the list of configured APT data sources, along with the GPG key required for package verification. This information and key will be used later by Spinnaker to bake AMIs from this base AMI. The set-up script, base_ubuntu_ami_setup.sh, is included in the GitHub project.
#!/usr/bin/env sh

# based on ami-6dfe5010
# Canonical, Ubuntu, 16.04 LTS, amd64 xenial image build on 2018-04-05

# References:
# https://docs.spring.io/spring-boot/docs/current/reference/html/deployment-install.html
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/building-shared-amis.html
# https://gist.github.com/justindowning/5921369

set +x

# update and install java
sudo apt-get update -y \
  && sudo apt-get upgrade -y \
  && sudo apt-get install \
    openjdk-8-jre-headless -y

# a few optional ops tools
sudo apt-get install \
  tree htop glances -y

# user:group application will run as
sudo useradd springapp
sudo usermod -a -G springapp springapp
sudo usermod -s /usr/sbin/nologin springapp

# add s3 deb repo
echo "deb http://garystafford-spinnaker-repo.s3-website-us-east-1.amazonaws.com trusty main" | \
  sudo tee -a /etc/apt/sources.list.d/gary-stafford.list
curl -s https://s3.amazonaws.com/garystafford-spinnaker-repo/apt/doc/apt-key.gpg | \
  sudo apt-key add -

# clean up and secure
sudo passwd -l root
sudo shred -u /etc/ssh/*_key /etc/ssh/*_key.pub
[ -f /home/ubuntu/.ssh/authorized_keys ] && rm /home/ubuntu/.ssh/authorized_keys
sudo rm -rf /tmp/*
cat /dev/null > ~/.bash_history
shred -u ~/.*history
history -c
exit
Jenkins
This post uses a single Jenkins CI/CD pipeline. Using a Webhook, the pipeline is automatically triggered by every git push to the GitHub project. The pipeline pulls the source code, builds the application, and performs unit tests and static code analysis with SonarQube. If the build succeeds and the tests pass, the build artifact (JAR file) is bundled into a Debian package using the Nebula Packaging plugin, uploaded to the S3 APT repository using deb-s3, and archived locally for Spinnaker to reference. Once the pipeline completes, whether on success or failure, a Slack notification is sent. The Jenkinsfile used for this post is available in the project on GitHub.
#!/usr/bin/env groovy

def ACCOUNT = "garystafford"
def PROJECT_NAME = "spring-postgresql-demo"

pipeline {
  agent any

  tools {
    gradle 'gradle'
  }

  stages {
    stage('Checkout GitHub') {
      steps {
        git changelog: true, poll: false,
          branch: 'master',
          url: "https://github.com/${ACCOUNT}/${PROJECT_NAME}"
      }
    }
    stage('Build') {
      steps {
        sh 'gradle wrapper'
        sh 'LOGGING_LEVEL_ROOT=INFO ./gradlew clean build -x test --info'
      }
    }
    stage('Unit Test') { // unit test against in-memory h2
      steps {
        withEnv(['SPRING_DATASOURCE_URL=jdbc:h2:mem:elections']) {
          sh 'LOGGING_LEVEL_ROOT=INFO ./gradlew cleanTest test --info'
        }
        junit '**/build/test-results/test/*.xml'
      }
    }
    stage('SonarQube Analysis') {
      steps {
        withSonarQubeEnv('sonarqube') {
          sh "LOGGING_LEVEL_ROOT=INFO ./gradlew sonarqube -Dsonar.projectName=${PROJECT_NAME} --info"
        }
      }
    }
    stage('Build Debian Package') {
      steps {
        sh "LOGGING_LEVEL_ROOT=INFO ./gradlew packDeb --info"
      }
    }
    stage('Upload Debian Package') {
      steps {
        withCredentials([
          string(credentialsId: 'GPG_KEY_ID', variable: 'GPG_KEY_ID'),
          string(credentialsId: 'AWS_ACCESS_KEY_ID', variable: 'AWS_ACCESS_KEY_ID'),
          string(credentialsId: 'AWS_SECRET_ACCESS_KEY', variable: 'AWS_SECRET_ACCESS_KEY')]) {
          sh "sh ./scripts/upload-deb-package.sh ${GPG_KEY_ID}"
        }
      }
    }
    stage('Archive Debian Package') {
      steps {
        archiveArtifacts 'build/distributions/*.deb'
      }
    }
  }

  post {
    success {
      slackSend(color: '#79B12B',
        message: "SUCCESS: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")
    }
    failure {
      slackSend(color: '#FF0000',
        message: "FAILURE: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")
    }
  }
}
Below is a traditional Jenkins view of the CI/CD pipeline, with links to unit test reports, SonarQube results, build artifacts, and GitHub source code.
Below is the same pipeline viewed using the Jenkins Blue Ocean plugin.
It is important to perform sufficient testing before building the Debian package. You donʼt want to bake an AMI and deploy EC2 instances, at a cost, before finding out the application has bugs.
Spinnaker Setup
First, I set up a new Spinnaker Slack channel and a custom bot user. Spinnaker details the Slack set up in their Notifications and Events Guide. You can configure what type of Spinnaker events trigger Slack notifications.
AWS Spinnaker User
Next, I added the required Spinnaker User, Policy, and Roles to AWS. Spinnaker uses this access to query and provision infrastructure on your behalf. The Spinnaker User requires Power User level access to perform all of its necessary tasks. AWS IAM setup is detailed by Spinnaker in their Cloud Providers Setup for AWS. They also describe the setup of other cloud providers. You need to be reasonably familiar with AWS IAM, including the PassRole permission, to set up this part. As part of the setup, you enable AWS for Spinnaker and add your AWS account using the Halyard interface.
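The authoritative steps are in Spinnaker's AWS documentation, but as a simplified, illustrative sketch using the AWS CLI, the programmatic user might be created along these lines. The user name is arbitrary, and the managed PowerUserAccess policy alone is not sufficient; the IAM role, PassRole permission, and account configuration described by Spinnaker are still required.

# create the user Spinnaker will authenticate as
aws iam create-user --user-name spinnaker

# attach AWS's managed Power User policy
aws iam attach-user-policy --user-name spinnaker \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess

# generate the access key pair to register with Halyard
aws iam create-access-key --user-name spinnaker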
Spinnaker Security Groups
Next, I set up two Spinnaker Security Groups, corresponding to two AWS Security Groups, one for the load balancer and one for the Election service. The load balancer security group exposes port 80, and the Election service security group exposes port 8080.
Spinnaker Load Balancer
Next, I created a Spinnaker Load Balancer, corresponding to an Amazon Classic Load Balancer. The Load Balancer will load-balance the Election service EC2 instances. Below you see a Load Balancer, balancing a pair of active EC2 instances, the result of a Red/Black deployment.
Spinnaker can currently create both AWS Classic Load Balancers as well as Application Load Balancers (ALB).
Spinnaker Pipeline
This post uses a single, basic Spinnaker Pipeline. The pipeline bakes a new AMI from the Debian package generated by the Jenkins pipeline. After a manual approval stage, Spinnaker deploys a set of EC2 instances, behind the Load Balancer, which contains the latest version of the Election service. Spinnaker finishes the pipeline by sending a Slack notification.
Jenkins Integration
The pipeline is triggered by the successful completion of the Jenkins pipeline. This is set in the Configuration stage of the pipeline. The integration with Jenkins is managed through Spinnaker’s Igor service.
Bake Stage
Next, in the Bake stage, Spinnaker bakes a new AMI, containing the Debian package generated by the Jenkins pipeline. The stageʼs configuration contains the package name to reference.
The stageʼs configuration also includes a reference to which Base AMI to use to bake the new AMIs. Here I have used the AMI ID of the base Spinnaker AMI I created previously.
Deploy Stage
Next, the Deploy stage deploys the Election service, running on EC2 instances, provisioned from the new AMI, which was baked in the last stage. To configure the Deploy stage, you define a Spinnaker Server Group. According to Spinnaker, the Server Group identifies the deployable artifact, VM image type, the number of instances, autoscaling policies, metadata, Load Balancer, and a Security Group.
The Server Group also defines the Deployment Strategy. Below, I chose the Red/Black Deployment Strategy (also referred to as Blue/Green). This strategy will disable, not terminate, the active Server Group. If the new deployment fails, we can manually or automatically perform a Rollback to the previous, currently disabled Server Group.
Letʼs Start Baking!
With set up complete, letʼs kick off a git push, trigger and complete the Jenkins pipeline, and finally trigger the Spinnaker pipeline. Below we see the pipelineʼs Bake stage has been started. Spinnakerʼs UI lets us view the Bakery Details. The Bakery, provided by Spinnakerʼs Rosco service, bakes the AMIs. Rosco uses HashiCorp Packer to bake the AMIs, using standard Packer templates.
Below we see Spinnaker (Rosco/Packer) locating the Base Spinnaker AMI we configured in the Pipelineʼs Bake stage. Next, we see Spinnaker sshʼing into a new EC2 instance with a temporary keypair and Security Group and starting the Election service Debian package installation.
Continuing, we see the latest Debian package, derived from the Jenkins pipelineʼs archive, being pulled from the S3-based APT repo. The package is verified using the GPG key and then installed. Lastly, we see a new AMI is created, containing the deployed Election service, which was initially built and packaged by Jenkins. Note the AWS Resource Tags created by Spinnaker, as shown in the Bakery output.
The base Spinnaker AMI and the AMIs baked by Spinnaker are visible in the AWS Console. Note the naming conventions used by Spinnaker for the AMIs, the Source AMI used to build the new AMIs, and the addition of the Tags, which we saw being applied in the Bakery output above. The use of Tags indirectly allows full traceability from the deployed EC2 instance all the way back to the original code commit to git by the Developer.
Red/Black Deployments
With the new AMI baked successfully, and after the required manual approval, using a Manual Judgement type pipeline stage, we can now begin a Red/Black deployment to AWS.
Using the Server Group configuration in the Deploy stage, Spinnaker deploys two EC2 instances, behind the ELB.
Below, we see the successful results of the Red/Black deployment. The single Spinnaker Cluster contains two deployed Server Groups. One group, the previously active Server Group (RED), comprised of two EC2 instances, is disabled. The ‘RED’ EC2 instances are unregistered with the load balancer but still running. The new Server Group (BLACK), also comprised of two EC2 instances, is now active and registered with the Load Balancer. Spinnaker will spread EC2 instances evenly across all Availability Zones in the US East (N. Virginia) Region.
From the AWS Console, we can observe four running instances, though only two are registered with the load-balancer.
Here we see each deployed Server Group has a different Auto Scaling Group and Launch Configuration. Note the continued use of naming conventions by Spinnaker.
There can be only one, Highlander!
Now, in the Deploy stage of the pipeline, we will switch the Server Groupʼs Strategy to Highlander. The Highlander strategy will, as you probably guessed by the name, destroy all other Server Groups in the Cluster. This is more typically used for lower environments, like Development or Test, where you are only interested in the next version of the application for testing. The Red/Black strategy is more applicable to Production, where you want the opportunity to quickly rollback to the previous deployment, if necessary.
Following a successful deployment, below, we now see the first two Server Groups have been terminated, and a third Server Group in the Cluster is active.
In the AWS Console, we can confirm the four previous EC2 instances have been successfully terminated as a result of the Highlander deployment strategy, and two new instances are running.
As well, the previous Auto Scaling Groups and Launch Configurations have been deleted from AWS by Spinnaker.
As expected, the Classic Load Balancer only contains the two most recent EC2 instances from the last Server Group deployed.
Confirming the Deployment
Using the DNS address of the load balancer, we can hit the Election service endpoints, on either of the EC2 instances. All API endpoints are listed in the Projectʼs README file. Below, from a web browser, we see the candidates resource returning candidate information, retrieved from the Electionʼs PostgreSQL RDS database Test instance.
Similarly, from Postman, we can hit the load balancer and get back election information from the elections resource, using an HTTP GET.
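From the command line, the same checks can be scripted with cURL. The load balancer DNS name below is a placeholder, and the exact endpoint paths are listed in the projectʼs README.

# candidate and election resources, via the Classic Load Balancer (DNS name is illustrative)
curl -s http://election-service-elb-1234567890.us-east-1.elb.amazonaws.com/candidates
curl -s http://election-service-elb-1234567890.us-east-1.elb.amazonaws.com/elections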
I intentionally left out a discussion of the service’s RDS database and how configuration management was handled with Spring Profiles and Spring Cloud Config. Both topics were out of scope for this post.
Conclusion
Although this was a brief, whirlwind overview of deployment tools, it shows the power of delivery tools like Spinnaker, when seamlessly combined with other tools, like Jenkins and the Nebula plugins. Together, these tools are capable of efficiently, repeatably, and securely deploying large numbers of containerized and non-containerized applications to a variety of private, public, and hybrid cloud infrastructure.
All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.
Deploying Spring Boot Apps to AWS with Netflix Nebula and Spinnaker: Part 1 of 2
Posted by Gary A. Stafford in AWS, Build Automation, Continuous Delivery, Enterprise Software Development, Java Development, Software Development on May 10, 2018
Listening to DevOps industry pundits, you might be convinced everyone is running containers in Production (or by now, serverless). Although containerization is growing at a phenomenal rate, several recent surveys¹ indicate less than 50% of enterprises are deploying containers in Production. Filter those results further with the fact that, of those enterprises, only a small percentage of their total application portfolios are containerized, let alone running in Production.
As a DevOps Consultant, I regularly work with corporations whose global portfolios number in the thousands of applications. Indeed, some percentage of their applications are containerized, with fewer still running in Production. However, a majority of those applications, even those built on modern, light-weight, distributed architectures, are still being deployed to bare-metal and virtualized public cloud and private data center infrastructure, for a variety of reasons.
Enterprise Deployment
Due to the scale and complexity of application portfolios, many organizations have invested in enterprise deployment tools, either commercially available or developed in-house. The enterprise deployment tool’s primary objective is to standardize the process of securely, reliably, and repeatably packaging, publishing, and deploying both containerized and non-containerized applications to large fleets of virtual machines and bare-metal servers, across multiple, geographically dispersed data centers and cloud providers. Enterprise deployment tools are particularly common in tightly regulated and compliance-driven organizations, as well as organizations that have undertaken large amounts of M&A, resulting in vastly different application technology stacks.
Better-known examples of commercially available enterprise deployment tools include IBM UrbanCode Deploy (aka uDeploy), XebiaLabs XL Deploy, CA Automic Release Automation, Octopus Deploy, and Electric Cloud ElectricFlow. While commercial tools continue to gain market share³, many organizations are tightly coupled to their in-house solutions through years of use and fear of widespread process disruption, given current economic, security, compliance, and skills-gap sensitivities.
Deployment Tool Anatomy
Most Enterprise deployment tools are compatible with standard binary package types, including Debian (.deb) and Red Hat (RPM) Package Manager (.rpm) packages for Linux, NuGet (.nupkg) packages for Windows, and Node Package Manager (.npm) and Bower for JavaScript. There are equivalent package types for other popular languages and formats, such as Go, Python, Ruby, SQL, Android, Objective-C, Swift, and Docker. Packages usually contain application metadata, a signature to ensure the integrity and/or authenticity², and a compressed payload.
Enterprise deployment tools are normally integrated with open-source packaging and publishing tools, such as Apache Maven, Apache Ivy/Ant, Gradle, NPM, NuGet, Bundler, PIP, and Docker.
Binary packages (and images), built with enterprise deployment tools, are typically stored in private, open-source or commercial binary (artifact) repositories, such as Spacewalk, JFrog Artifactory, and Sonatype Nexus Repository. The latter two, Artifactory and Nexus, support a multitude of modern package types and repository structures, including Maven, NuGet, PyPI, NPM, Bower, Ruby Gems, CocoaPods, Puppet, Chef, and Docker.
Mature binary repositories provide many features in addition to package management, including role-based access control, vulnerability scanning, rich APIs, DevOps integration, and fault-tolerant, high-availability architectures.
Lastly, enterprise deployment tools generally rely on standard package management systems to retrieve and install cryptographically verifiable packages and images. These include YUM (Yellowdog Updater, Modified), APT (Advanced Package Tool), APK (Alpine Linux), NuGet, Chocolatey, NPM, PIP, Bundler, and Docker. Packages are deployed directly to running infrastructure, or indirectly to intermediate deployable components such as Amazon Machine Images (AMI), Google Compute Engine machine images, VMware machines, Docker Images, or CoreOS rkt.
Open-Source Alternative
One such enterprise with an extensive portfolio of both containerized and non-containerized applications is Netflix. To standardize their deployments to multiple types of cloud infrastructure, Netflix has developed several well-known open-source software (OSS) tools, including the Nebula Gradle plugins and Spinnaker. I discussed Spinnaker in my previous post, Managing Applications Across Multiple Kubernetes Environments with Istio, as an alternative to Jenkins for deploying container workloads to Kubernetes on Google (GKE).
As a leader in OSS, Netflix has documented their deployment process in several articles and presentations, including a post from 2016, ‘How We Build Code at Netflix.’ According to the article, the high-level process for deployment to Amazon EC2 instances involves the following steps:
- Code is built and tested locally using Nebula
- Changes are committed to a central git repository
- Jenkins job executes Nebula, which builds, tests, and packages the application for deployment
- Builds are “baked” into Amazon Machine Images (using Spinnaker)
- Spinnaker pipelines are used to deploy and promote the code change
The Nebula plugins and Spinnaker leverage many underlying, open-source technologies, including Pivotal Spring, Java, Groovy, Gradle, Maven, Apache Commons, Redline RPM, HashiCorp Packer, Redis, HashiCorp Consul, Cassandra, and Apache Thrift.
Both the Nebula plugins and Spinnaker have been battle tested in Production by Netflix, as well as by many other industry leaders after Netflix open-sourced the tools in 2014 (Nebula) and 2015 (Spinnaker). Currently, there are approximately 20 Nebula Gradle plugins available on GitHub. Notable core-contributors in the development of Spinnaker include Google, Microsoft, Pivotal, Target, Veritas, and Oracle, to name a few. A sign of its success, Spinnaker currently has over 4,600 Stars on GitHub!
Part Two: Demonstration
In Part Two, we will deploy a production-ready Spring Boot application, the Election microservice, to multiple Amazon EC2 instances, behind an Elastic Load Balancer (ELB). We will use a fully automated DevOps workflow. The build, test, package, bake, deploy process will be handled by the Netflix Nebula Gradle Linux Packaging Plugin, Jenkins, and Spinnaker. The high-level process will involve the following steps:
- Configure Gradle to build a production-ready fully executable application for Unix systems (executable JAR)
- Using deb-s3 and GPG Suite, create a secure, signed APT (Debian) repository on Amazon S3
- Using Jenkins and the Netflix Nebula plugin, build a Debian package, containing the executable JAR and configuration files
- Using Jenkins and deb-s3, publish the package to the S3-based APT repository
- Using Spinnaker (HashiCorp Packer under the covers), bake an Ubuntu Amazon Machine Image (AMI), replete with the executable JAR installed from the Debian package
- Deploy an auto-scaling set of Amazon EC2 instances from the baked AMI, behind an ELB, running the Spring Boot application using both the Red/Black and Highlander deployment strategies
- Be able to repeat the entire automated build, test, package, bake, deploy process, triggered by a new code push to GitHub
The overall build, test, package, bake, deploy process will look as follows.
References
- How We Build Code at Netflix
- Debian Packages (Armory)
- An Introduction to Spinnaker: Hello Deployment
- Spinnaker: CD from first principles to production (Google Cloud Next ’17) (YouTube)
- Nebula Plugin Overview
- Installing Spring Boot Applications
- Debian Binary Package Building HOWTO
All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.
¹ Recent Surveys: Forrester, Portworx, Cloud Foundry Survey
² Courtesy Wikipedia – rpm
³ XebiaLabs Kicks Off 2017 with Triple-Digit Growth in Enterprise DevOps
Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 2 of 2)
Posted by Gary A. Stafford in Build Automation, DevOps, Enterprise Software Development, Software Development on November 13, 2013
Build an automated deployment pipeline for your Java EE applications using leading open-source technologies, including NetBeans, Git, Maven, JUnit, Jenkins, and GlassFish. All source code for this post is available on GitHub.
Introduction
In part 1, Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 1 of 2), we built the first part of our basic deployment pipeline using leading open-source technologies. In part 2, we will use Jenkins CI Server and Oracle GlassFish Application Server to complete our deployment pipeline.
To review, the three main goals of our deployment pipeline are continuous integration, automated testing, and continuous deployment. Our objective is to automatically compile, test, assemble, and deploy our Java EE application to multiple environments, as the project progresses through the software development life cycle (SDLC).
Setting up Git Server
As I mentioned in part 1, as a part of a development team using Git, you would place your project on a remote Git Server. You and your team members would each clone the repository from the Git Server to your local development environments. You and your team would commit your code changes locally, then pull, merge, and push your changes back to the remote Git Server. Jenkins will pull the project’s source code from the Git Server.
In part 1 of this post, we just created a local Git repository. In part 2, we will properly set up our project on a remote Git Server. First, we need to export our local repository into a new, bare repository on the Git Server. The Git term, ‘bare repository’, refers to a repository that does not contain a working directory. The repository has no working copies of your source files. You only use the bare repository to clone, pull from, and push to. By convention, the bare repository’s name carries a .git extension (i.e., ssh://user@server:/git-repos/myproject.git).
From the root of your remote Git Server repository, execute the following command, substituting the path to your local project. If your Git Server is on a separate machine from your local project repository, you will need to copy the new bare repository to the remote Git Server. This involves a few simple steps, explained in this post, and at git-scm.com.
git clone --bare {path-to-existing-local-repository}\{name-of-repository} {name-of-repository}.git
Once you have created the repository on the remote Git Server, I would recommend you clone the remote repository to your local machine and discard your original local repository from part 1 of the post. You don’t have to do this step, but cloning fresh from the server will make sure Git is working correctly. The screen grabs below illustrate an example of cloning a new repository to my local NetBeans Project folder.
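A minimal example of that fresh clone, using an illustrative server URL in the format shown earlier:

# clone the new bare repository from the Git Server into the local NetBeans Projects folder
git clone ssh://user@your-git-server:/git-repos/myproject.git

# confirm the remote is wired up correctly
cd myproject && git remote -v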
Configuring Jenkins
The diagram below illustrates the deployment pipeline from Git Server to Jenkins to GlassFish in finer detail. It begins with an initial commit to the local Git project repository and ends with the deployment of the project’s WAR file to the GlassFish domain. We will walk through it step-by-step.
Jenkins Plugins
Before we create our new Jenkins Jobs, we need to configure Jenkins properly. You will need a recent version of Jenkins installed, along with the following plugins:
- Build With Parameters Plugin
- Copy Artifact Plugin
- Jenkins GIT plugin (includes Jenkins GIT client plugin)
- Jenkins Parameterized Trigger plugin
- Maven Integration plugin
- Credentials Plugin (optional for use with Git Server if security is enabled)
- ThinBackup (optional to install supplied Jenkins jobs configuration files)
Global Security
Jenkins can be configured with or without Global Security. For this post, I have enabled Global Security, as is typical of most development environments. I chose the ‘Jenkins’s own user database’ option for authentication. In larger development environments, authentication would normally be done against LDAP.
The user I have set up, ‘jenkins’, will be the user that Git authenticates with when connecting to Jenkins (explained later). Set up your own user and note their API Token. Since Global Security has been enabled, we will need the token later to trigger the Jenkins build from Git. Your user’s unique api token will be different than in the example below.
Jenkins Jobs
We will set up two Jenkins ‘free-style software project’ jobs, ‘GitMavenGlassFish_Build’ and ‘GitMavenGlassFish_Deploy’. We won’t be using the obvious choice, a ‘maven2/3 project’. If you’re interested, here’s why. The first job, the build job, will be responsible for pulling the source code from the Git Server. The build job, with help from Maven, will compile, test, and assemble the application code. The second job, the deployment job, will pull the artifacts from the build job and deploy them to GlassFish. The build job will trigger the deployment job, once the build job completes successfully. This is explained in detail, to follow.
Why Two Jobs?
Following good modular design and Separation of Concerns (SoC) principles, separating the build from the deployment gains us several advantages, including:
- Modularity – The ability to change deployment methodology or deployment targets, without disrupting the build and test process. For example, we might move the application hosting from GlassFish to WebLogic, or decide to use Ant instead of Maven for deployment tasks. This can happen totally independent of the build and testing processes.
- Separation/Isolation – If for any reason we are unable to deploy the artifacts as part of the deployment job, we won’t impact the continuous integration and automated testing processes, which are part of the separate build job.
- Support – Support is easier by having smaller pieces of functionality to troubleshoot and maintain.
In a larger enterprise environment, you would probably encounter further separation of concerns. Unit testing, performance testing, deployment validation, and documentation generation (javadocs) are often handled by separate jobs. Jenkins represents a smaller pipeline within our larger deployment pipeline.
I intentionally left out notification for brevity. At minimum, you would want to be notified when the build or deployment jobs failed. Additionally, with continuous deployment, the deployment would trigger a notification to the stakeholders of that environment, such as the Testers. This lets them know the new software is ready to be tested. Notifications often include a list of bug fixes and feature enhancements that need to be tested. This can easily be pulled from Git into Jenkins and out to the end user.
Both Jenkins jobs definitions are available as xml files on gist.github.com. Using Jenkins’ ThinBackup Plugin, you can save both gists locally, and then restore them to your Jenkins server. The build job gist is here and the deployment job gist is here. This may save you some configuration time.
Jenkins Build Job
Both the build job and the deployment job require an input parameter. This property represents the targeted environment (GlassFish domain) for deployment, such as ‘testing’. How this parameter is passed to Jenkins is discussed later in the Git Hooks section, below.
Reviewing the below screen grab of the build job’s configuration, you will observe the following steps:
- Build Request – A build request is received by the job (explained later). The request contains an input parameter indicating the ‘environment’. The parameter must be one of the choices listed in ‘Choices’.
- Maven Dependencies – Based on the pom file, Maven retrieves all the required dependencies from the remote Maven repository, if the dependencies are not already contained in the workspace’s local repository. Note the setting ‘Use private Maven repository’. This creates a local repository for project dependencies within the project’s workspace.
- Pull from Git – Jenkins pulls the code from the Git Server using the supplied repository configuration information. Note my Git Server does not require authentication. If it did, we would set-up and use the proper credentials.
- Build – Jenkins builds the project using the Maven command ‘clean install -e’. The pom file contains the necessary configuration information.
- Unit Test – The above Maven ‘install’ command also calls JUnit to execute the unit tests. The results of these tests are published and displayed as part of the build job’s details.
- Assemble WAR – The above Maven ‘install’ command also assembles the project’s WAR file.
- Archive Artifacts – Based on the success of the build and unit tests, Jenkins archives specific artifacts needed by the deployment job. Jenkins uses the input parameter in #1 to define which properties file and password file to archive.
- Trigger Deployment Job – Based on the success of the build and unit tests, Jenkins triggers the ‘downstream’ deployment job, passing it the same environment parameter.
Jenkins Deployment Job
Reviewing the below screen grab of the deployment job’s configuration, you will observe the following steps:
- Build Request – A build request is received from the upstream build job. The request contains the input parameter indicating ‘environment’.
- Copy Artifacts – Jenkins copies the artifacts from the build job that called the deploy job.
- Read Properties – Maven executes the command ‘mvn properties:read-project-properties glassfish:redeploy -e’ (the full invocation is sketched after this list). The first half of this command instructs Maven to read the appropriate properties file, as indicated by the environment parameter, ‘glassfish.properties.file.argument=${environment}’.
- POM – Maven substitutes the key ‘glassfish.properties.file.argument’ in the pom file with the environment value. This tells Maven the name of the properties file, which supplies all the remaining property values to the pom file.
- Maven Dependencies – If the dependencies are not already contained in the workspace’s local repository, Maven retrieves all the required dependencies from the remote Maven repositories, based on the pom. Note the setting ‘Use private Maven repository’ checked in the screen grab below. This option instructs Jenkins to create a local repository for project dependencies within the project’s workspace.
- Deployment – The last half of the command in #3 deploys, or more accurately, redeploys the application’s WAR file to GlassFish. The ‘glassfish:redeploy’ goal works only if the WAR file has already been initially deployed to the GlassFish domain using the ‘glassfish:deploy’ goal. For this process, I am assuming the initial deployment was already done directly through the GlassFish Administration Console, NetBeans, or the command line.
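The full Maven invocation referenced in step #3 looks roughly like the following. Passing the property with -D is my illustration of the mechanism; the actual job supplies it through the parameterized build configuration, with the environment value coming from the Jenkins input parameter.

# 'testing' stands in for the ${environment} build parameter
mvn properties:read-project-properties glassfish:redeploy -e \
  -Dglassfish.properties.file.argument=testing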
Git Hooks
To achieve continuous integration, we want to automatically build and test our job after each change to our code. We have a number of choices to make this happen. The obvious choice is letting Jenkins poll the Git Server. Although polling would simplify configuration, polling is frowned upon in many environments. Even the creator of Jenkins, Kohsuke Kawaguchi, frowns upon polling in his post, ‘Polling Must Die‘.
Why is polling bad? It adds unnecessary activity and delay. Let’s say Jenkins’ polling frequency is set to every 2 minutes, but you only have an average of 5 pushes to your remote Git Server project repository per day. Based on these stats, in just one day, Jenkins will poll Git 720 times to discover only 5 pushes. That’s 144 times per push. Also, based on the polling frequency, when you do push, you could wait up to 2 minutes for Jenkins to queue the build job. The longer you wait for feedback on your changes, the greater chance your defects could be pulled down by other developers. You should expect immediate and continuous feedback.
A vastly more efficient and configurable method of continuous integration between Git and Jenkins is Git Hooks. Git Hooks allow us to execute scripts based on specific Git actions. In our case, when a developer completes a successful push to the remote Git Server project repository, we want to call Jenkins to build, test, and deploy the modified project code. Using hooks means we only call Jenkins when a successful push is completed. Furthermore, we can be assured Jenkins will immediately queue our request to build and deploy the job when a push occurs.
Post-Receive Hook
There are several types of Git Hooks. They include ‘post-commit’, ‘pre-push’, ‘update’, ‘pre-rebase’, and so forth. I recommend this post on kernel.org for a good explanation of the hook types and their purposes. Git also includes sample hook files inside the ‘hooks’ subdirectory of each new repository’s .git folder.
For our pipeline, we will employ the ‘post-receive’ hook. Whenever a successful push is received by the Git Server’s project repository, the ‘post-receive’ hook will be called, and the script commands contained in the post-receive hook file will be executed. Hooks are language agnostic; they can be written in almost any scripting language, such as Perl, Shell, Bash, or Ruby.
To create the hook, create a new file, ‘post-receive’, in the hooks sub-directory of the Git Server’s project repository. Add the below code to the file. Change the command to match your local file path. Also, change the API Token to match your user’s token from Jenkins. Note the command requires cURL to be installed on the Git Server. If installing cURL is not an option, there are other options available to execute the http post call from the hook’s script.
#!/bin/sh

# Call Jenkins to start build and pass environment parameter
#
echo "executing post-receive hook"
echo "environment=testing"
echo "user=jenkins"

# cURL POST request using jenkins user with API token
curl -u jenkins:{your-api-token-here} \
  --data "delay=0sec&environment=testing" \
  "{your-jenkins-server-url:port}/job/GitMavenGlassFish_Build/buildWithParameters"
NetBeans and Git Hooks
Now some slightly bad news. As with any integration, there are always trade-offs; that is the case with NetBeans and Git. Although NetBeans works well with Git, there are a few features that have not been implemented. Unfortunately, this lack of complete integration affects NetBeans’ ability to make use of Git Hooks. Only after three hours of troubleshooting and research on the Internet did I realize this limitation. The hooks fire fine if a git push command is executed from a command prompt or from within a Git application like Git Gui or Git Bash. However, from NetBeans, the Team -> Remote -> Push… does not cause the hooks to be called.
Git Hooks do not work with NetBeans because NetBeans does not use a command-line client for Git. NetBeans uses a pure Java implementation of the Git client, known as JGit. I understand that other IDEs also share this limitation. There are several discussions on StackOverflow and on the NetBeans bug tracking site about the issue and workarounds.
So what does this mean? You can use NetBeans to perform all of your local tasks. However, when it comes time to push your code back to the remote Git Server repository, you must use a command prompt, Bash shell, or a command line based tool. I recommend Git Gui. Git ships with built-in GUI tools, including git-gui and gitk. It can be downloaded from git-scm.com.
Pushing changes to the remote Git Server using Git Gui instead of NetBeans may seem inconvenient at first. However, the more advanced your needs become with Git, the more you will find you need the additional functionality of Git Bash, Git Gui, and gitk. Tasks like resetting the branch to a previous revision, compressing the Git repository database, and visualizing repository history, can all be done with tools like Git Gui and gitk. I have Git Gui running when I am working in NetBeans or other IDEs; it becomes second nature.
Deploying to GlassFish
At this point we have configured the Git Server, created the Jenkins build and deploy jobs, and configured our Git hook. We are ready to test our deployment pipeline. First, make sure your GlassFish domains are running. Also, recall we are assuming that an initial deployment of the application has occurred. This might be directly through the GlassFish Administration Console, through NetBeans, or via the command line. Recall, Jenkins will only be executing a re-deploy.
To test the system, make an innocuous change to the Project. Commit the change to your local Git repository. Following that, push the change back to the remote Git Server repository using Git Gui. If the hook fired, you will see output to the Git Gui terminal window, echoed from the post-receive hook as it executed its script.
The post-receive hook executes the cURL command, which posts an HTTP request to Jenkins via the Jenkins Remote API. You should then observe the Jenkins build job queued and running.
When the build completes, review the Parameters menu option in the left navigation menu. It shows that the environment parameter was passed from the post-receive hook to the build job. The build results window also provides test results, Git Build Data, and the changes pushed to Git that triggered the CI build.
The console output from the build provides a detailed view of the build process. Using the ‘-e’ (errors) switch with the Maven command increases the level of output detail. You see the details of Maven copying the required dependencies from the remote repository to the local workspace repository, prior to compilation. You see the unit tests being executed. Finally, you see the WAR file assembled and the required artifacts archived.
Regarding Maven Dependencies, you will only see the dependencies copied on the first build to an empty workspace. Maven does not re-pull dependencies if they already exist in the workspace’s local repository. To see the difference, empty your workspace and build the job, then immediately rebuild the job. Compare the console outputs of both jobs. You will see a significant difference in the Maven dependency activities.
Once the build job has completed successfully, you should notice the Jenkins deployment job running, triggered by the build job. When complete, note the detail that lists the exact build job that called the deployment job, and its build number. For example, the upstream build job #45 triggered the downstream deployment job #33. This linkage between upstream and downstream jobs is retained in the job’s history.
As before, review the Parameters menu option in the left navigation menu. It shows that the environment parameter was passed from the post-receive hook to the build job, and then on to the deployment job.
A review of the console output will confirm that the artifacts were copied from the build job and the WAR file was deployed to the ‘testing’ GlassFish domain.
GlassFish
If the hook fired, and both the Jenkins build and deployment jobs ran successfully, you should observe that the project’s WAR file, containing your recent change, was deployed to the testing GlassFish domain.
You can verify this by calling the application’s RESTful ‘resources/helloWorld’ URI from your browser. Repeat the process by changing the output string, committing the change, and pushing. Confirm that your change was deployed.
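A quick cURL check works as well. This is only a sketch: the port matches the ‘testing’ domain created in part 1, while the host and context root are assumptions (the context root here assumes it matches the project’s artifact name).

curl http://{your-glassfish-host}:6061/HelloGlassFishMaven/resources/helloWorld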
Jenkins Workflows
Using our deployment pipeline, we have two distinct workflow options:
- Continuous – Use Git hooks to build, test, and deploy the WAR file to the domain(s) of choice when changes are pushed. Any time a change is pushed, a build, test, and deploy should occur. This would be just for development at first. Once the project enters the testing phase of the SDLC, then it would include deployments to testing.
- Semi-Automated – Start the Jenkins build manually in the Jenkins browser-based Administration Console. This is more typical for a release to Production. Most teams are not comfortable extending the continuous deployment functionality into Production. Often, a deployment team will deploy the project artifacts in a controlled and staged approach. The Jenkins build and/or deployment jobs both allow this feature, along with the ability to provide the environment parameter both jobs need.
Conclusion
In part 1, we learned how to create a simple Java EE web application project in NetBeans using Maven. We learned how to integrate JUnit for unit testing, and how use Git to manage our source code.
In part 2, we learned how to configure a remote Git Server, how to configure Jenkins CI Server to clone our project from the Git Server, build, test, and assemble it. If the build was successful, we learned how to configure Jenkins to deploy our project to a specific GlassFish domain, based on the project’s stage in the SDLC. We achieved our goals of continuous integration, automated testing, and continuous deployment.
Going Forward
To extend and enhance our deployment pipeline, you might consider adding the following features: 1) further separate the Jenkins jobs by function, 2) add build and deploy notifications, 3) add the ability to deploy to multiple environments simultaneously (i.e. development and testing), 4) add additional testing to confirm the deployment to GlassFish, 5) configure a versioning and naming scheme for the deployed artifacts, and 6) add error handling if a parameter is not received or is not one of the expected values.
Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 1 of 2)
Posted by Gary A. Stafford in DevOps, Enterprise Software Development, Java Development, Software Development on November 4, 2013
Build an automated deployment pipeline for your Java EE applications using leading open-source technologies, including NetBeans, Git, Maven, JUnit, Jenkins, and GlassFish. All source code for this post is available on GitHub.
Introduction
In my earlier post, Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit, I demonstrated a basic deployment pipeline using leading open-source technologies. In this post, we will demonstrate a similar pipeline, substituting Jenkins CI Server for Hudson, and Oracle’s GlassFish Application Server for WebLogic Server. We will use the same NetBeans Java EE ‘Hello World’ RESTful Web Service sample project.
The three main goals of our deployment pipeline will be continuous integration, automated testing, and continuous deployment. Our objective is to automatically compile, test, assemble, and deploy our Java EE application to multiple environments, as the project progresses through the software development life cycle (SDLC).
Building a reliable deployment pipeline is complex and time-consuming. To make it as easy as possible in this post, I chose NetBeans IDE for development, Git Distributed Version Control System (DVCS) for managing our source code, Jenkins Continuous Integration (CI) Server for build automation, JUnit for automated unit testing, GlassFish for application hosting, and Apache Maven to manage our project’s dependencies. Maven will also manage the build and deployment process to GlassFish, along with Jenkins. The beauty of NetBeans is its out-of-the-box, built-in integration with Git, Maven, JUnit, and GlassFish. Likewise, Jenkins has plugin-based integration with Git, Maven, JUnit, and GlassFish. Also, Maven has plugin-based integration with GlassFish.
Maven is a powerful tool for managing modern software development projects. This post will only draw upon a small part of Maven’s functionality and plug-in architecture extensibility. Specifically, we will use the Maven GlassFish Plugin. According to the Java.net website, which hosts the plug-in project, ‘the Maven GlassFish Plugin is a Maven2 plugin allowing management of GlassFish domains and component deployments from within the Maven build life cycle.’
Requirements
To follow along with this post, I will assume you have recent versions of the following software installed and configured on your Windows OS-based computer (the process is nearly identical for Linux):
- NetBeans IDE. Current version: 7.4
- JUnit. Current version: 4.11 (included with NetBeans 7.4)
- GlassFish Server. Current version: 4.0 (included with NetBeans 7.4)
- Jenkins CI Server. Current version: 1.538
- Apache Maven. Current version: 3.1.1
- cURL. Current version: 7.33.0
- Git with Git Gui and gitk. Current version: 1.8.4.3
- Necessary system environmental variables:
M2_HOME, M2, JAVA_HOME, GLASSFISH_HOME, and PATH
GlassFish Domains
To simulate a simple deployment pipeline, we will create three GlassFish domains, simulating three common software environments, Development, Testing, and Production. A typical software project is promoted through these environments as it moves from development, to testing, and finally release to production. Each environment has distinct stakeholders with specific roles to play in the software development life cycle, including developers, testers, deployment teams, and end-users. Larger-scale, enterprise software development often includes other environments, such as Performance and Staging.
Create the domains from the command line using ‘asadmin’ commands such as the ones below. Note I have a ‘GLASSFISH_HOME’ system environment variable set up. The ports are your choice, but make sure they don’t conflict with existing installations of other applications, such as Jenkins, Tomcat, IIS, WebLogic, and so forth.
asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 7070 --instanceport 7071 production
asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 6060 --instanceport 6061 testing
asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 5050 --instanceport 5051 development
As part of the creation process, you’re prompted for an admin account and a new password. I kept the ‘admin’ username, but added a new password for each domain created. This password is the same as one used in the separate password files (explained below).
C:\Users\gstaffor>asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 7070 --instanceport 7071 production
Enter admin user name [Enter to accept default "admin" / no password]>admin
Enter the admin password [Enter to accept default of no password]>
Enter the admin password again>
Using port 7070 for Admin.
Using port 7071 for HTTP Instance.
Using default port 7676 for JMS.
Using default port 3700 for IIOP.
Using default port 8181 for HTTP_SSL.
Using default port 3820 for IIOP_SSL.
Using default port 3920 for IIOP_MUTUALAUTH.
Using default port 8686 for JMX_ADMIN.
Using default port 6666 for OSGI_SHELL.
Using default port 9009 for JAVA_DEBUGGER.
Distinguished Name of the self-signed X.509 Server Certificate is:
[CN={my_computer_name},OU=GlassFish,O=Oracle Corporation,L=Santa Clara,ST=California,C=US]
Distinguished Name of the self-signed X.509 Server Certificate is:
[CN={my_computer_name}-instance,OU=GlassFish,O=Oracle Corporation,L=Santa Clara,ST=California,C=US]
Domain production created.
Domain production admin port is 7070.
Domain production admin user is "admin".
Command create-domain executed successfully.
Add the GlassFish domains to NetBeans’ Services -> Server tab, and start them.
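Alternatively, the new domains can be started from the command line, mirroring the create-domain commands above:

asadmin start-domain --domaindir "%GLASSFISH_HOME%\domains" development
asadmin start-domain --domaindir "%GLASSFISH_HOME%\domains" testing
asadmin start-domain --domaindir "%GLASSFISH_HOME%\domains" production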
Setting Up the Project
To set up our NetBeans project, you can clone the repository on GitHub or build your own project from scratch and copy the files into the project. I will not spend a lot of time explaining the code since we have used it in earlier posts. This post is about the deployment pipeline system, not the project’s code.
If you choose to create a new project, first, create a new Maven ‘Project from Archetype’. Select the Archetype for a ‘web application using Java EE 7’ (webapp-javaee7).
I recommend you create the project inside of your local Git repository folder.
Maven will execute a series of commands to create the default NetBeans project with dependencies.
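For reference, the equivalent project creation from a terminal would be an ‘archetype:generate’ command roughly like the one below; the archetype version shown is only an example, and the group and artifact IDs should match your own project.
mvn archetype:generate ^
 -DarchetypeGroupId=org.codehaus.mojo.archetypes ^
 -DarchetypeArtifactId=webapp-javaee7 ^
 -DarchetypeVersion=1.1 ^
 -DgroupId=com.blogpost ^
 -DartifactId=HelloGlassFishMaven ^
 -Dversion=1.0-SNAPSHOT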
Git
As a part of a development team using Git, you place your project on a remote Git Server. You and your team members each clone the repository on the Git Server to your local development environments. You and your team commit your code changes locally, then pull, merge, and push your changes back to the Git Server. Jenkins will pull the project’s source code from the remote Git Server.
In part 2, we will properly set up our project on the Git Server, exporting our existing repository into a new, bare repository on the Git Server. However, for brevity in part 1 of this post, we will just create a local Git repository. To start, create a new Git repository for the project. In NetBeans, select Team -> Git -> Initialize Repository… Choose the new Maven project folder.
The initial view of the Maven project should look like the below screen grabs. Note the icons and the green files show that the project is part of the Git repository.
Perform an initial commit of the project to Git to make sure everything is working.
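The command-line equivalent of these NetBeans actions would look something like this, run from the new project’s folder:
cd HelloGlassFishMaven
git init
git add .
git commit -m "Initial commit of the HelloGlassFishMaven project"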
Next, copy the supplied HelloWorldResource.java and NameStorageBean.java classes into the project. The package classpath will be refactored by NetBeans. Copy all the remaining files and folders, including the (3) files in the WEB-INF folder, the properties folder with (3) properties files, and the passwords folder with (3) password files.
JUnit
Next, right-click on the NameStorageBean.java class and select Tools -> Create Tests. Replace the contents of the new NameStorageBeanTest.java file’s NameStorageBeanTest class with the contents of the supplied NameStorageBeanTest.java file. These are two very simple unit tests that will show how JUnit provides automated testing capabilities.
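Once in place, the unit tests can also be run outside the IDE with Maven’s Surefire plugin; for example, to run the entire test suite, or just the new test class:
mvn test
mvn -Dtest=NameStorageBeanTest test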
Project Object Model (POM)
Copy the contents of the supplied pom file into the new pom file. There is a lot of configuration in the supplied pom. It will be easier to copy the supplied pom file’s contents into your project than to try to configure it from scratch.
Basically, beyond the normal boilerplate pom configuration, we have defined (3) properties, (3) dependencies, and (5) build plugins. The three dependencies are junit, jersey-servlet, and javaee-web-api. The five plugins are maven-compiler-plugin, maven-war-plugin, maven-dependency-plugin, properties-maven-plugin, and the maven-glassfish-plugin. Each plugin contains individual plugin-specific configuration. The names of the plugins should be sufficient to explain their primary purposes.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.blogpost</groupId>
    <artifactId>HelloGlassFishMaven</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>
    <name>HelloGlassFishMaven</name>
    <properties>
        <!-- Input Parameter - GlassFish properties file -->
        <glassfish.properties.file.argument></glassfish.properties.file.argument>
        <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>
        <dependency>
            <groupId>com.sun.jersey</groupId>
            <artifactId>jersey-servlet</artifactId>
            <version>1.13</version>
        </dependency>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-web-api</artifactId>
            <version>7.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <compilerArguments>
                        <endorseddirs>${endorsed.dir}</endorseddirs>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.3</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                    <filteringDeploymentDescriptors>true</filteringDeploymentDescriptors>
                    <webresources>
                        <resource>
                            <directory>${basedir}/src/main/webapp/WEB-INF</directory>
                            <filtering>true</filtering>
                            <targetpath>WEB-INF</targetpath>
                            <includes>
                                <include>**/glassfish-web.xml</include>
                            </includes>
                        </resource>
                    </webresources>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.6</version>
                <executions>
                    <execution>
                        <phase>validate</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${endorsed.dir}</outputDirectory>
                            <silent>true</silent>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>javax</groupId>
                                    <artifactId>javaee-endorsed-api</artifactId>
                                    <version>7.0</version>
                                    <type>jar</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>properties-maven-plugin</artifactId>
                <version>1.0-alpha-2</version>
                <configuration>
                    <files>
                        <file>${basedir}/properties/${glassfish.properties.file.argument}.properties</file>
                    </files>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.glassfish.maven.plugin</groupId>
                <artifactId>maven-glassfish-plugin</artifactId>
                <version>2.1</version>
                <configuration>
                    <glassfishDirectory>${GLASSFISH_HOME}</glassfishDirectory>
                    <user>${glassfish.user}</user>
                    <passwordFile>${basedir}/passwords/${glassfish.pwdfile}</passwordFile>
                    <echo>true</echo>
                    <debug>true</debug>
                    <terse>true</terse>
                    <domain>
                        <name>${glassfish.domain}</name>
                        <host>${glassfish.host}</host>
                        <adminPort>${glassfish.adminport}</adminPort>
                    </domain>
                    <components>
                        <component>
                            <name>${project.artifactId}</name>
                            <artifact>${project.build.directory}/${project.build.finalName}.war</artifact>
                        </component>
                    </components>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
When complete, right-click on the project and do a ‘Build with Dependencies…’. Make sure everything builds. The final view of the project, with all its Maven-managed dependencies, should look like the two screen grabs shown below. Make sure to commit all your new code to Git.
Maven and Properties Files
In part 2, we will be deploying our project to multiple GlassFish domains. Each domain’s configuration is different. We will use Java properties files to store each GlassFish domain’s configuration properties. Using Java properties files with Maven is made possible by the Mojo Project’s Properties Maven Plugin. I introduced this plugin in an earlier post, Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit.
Each environment (Development, Testing, Production), represented by a GlassFish domain, has a separate properties file in the project (see the Files Tab view above). The properties files contain the configuration values the Maven GlassFish Plugin needs to deploy the project’s WAR file to each GlassFish domain. Since the build and deployment configurations are required by the project, including them in our Git repository and automating their use based on the target environment are two best practices.
# contents of all three files shown here
# development domain properties file
glassfish.domain=development
glassfish.host=glassfish4-app-server
glassfish.adminport=5050
glassfish.user=admin
glassfish.pwdfile=pwdfile_development
# testing domain properties file
glassfish.domain=testing
glassfish.host=glassfish4-app-server
glassfish.adminport=6060
glassfish.user=admin
glassfish.pwdfile=pwdfile_testing
# production domain properties file
glassfish.domain=production
glassfish.host=glassfish4-app-server
glassfish.adminport=7070
glassfish.user=admin
glassfish.pwdfile=pwdfile_production
In our project’s particular workflow, Maven accepts a single argument (‘glassfish.properties.file.argument’), which represents the environment we want to deploy to, such as ‘development’. The property value tells Maven which properties file to read, such as ‘development.properties’. Maven replaces the keys in the pom file with the values from the ‘development.properties’ file.
The properties file also tells Maven the full path to the separate password file, containing the admin user password, such as ‘pwdfile_development’. In an actual production environment, we would store encrypted password files on a secured file path. For simplicity in our example, we have included them unencrypted, within the project’s main directory.
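If you are creating the password files yourself, each appears to follow the standard asadmin password file format, a single AS_ADMIN_PASSWORD entry; for example, on Windows, with a placeholder password:
rem creates passwords\pwdfile_development containing the development domain's admin password
echo AS_ADMIN_PASSWORD=your_admin_password> passwords\pwdfile_development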
There are other Maven capabilities that also would achieve our deployment goals. For example, you might consider the Maven Release Plugin, as well as look at using Maven Build Profiles.
Testing the Pipeline
Although we have not built the second half of our deployment pipeline yet, we can still test the system at this early stage. All the necessary foundational elements are in place. To test our system, right-click on the Maven Project icon in the Projects tab and select Custom -> Goals… Enter the following Maven Goals: ‘properties:read-project-properties clean install glassfish:redeploy -e’. In the Properties text box, enter the following: ‘glassfish.properties.file.argument=testing’ (see screen grab below). This will execute a number of Maven Goals and associated commands, visible in the Output tab.
With this one simple command, we are asking Maven to 1) read in our Java properties file and password file, 2) clean the project, 3) pull down all our project’s dependencies, 4) compile the project’s code, 5) execute the unit tests with JUnit, 6) assemble the WAR file, and 7) deploy it to the ‘testing’ GlassFish domain using asadmin. The terse nature of the command really demonstrates the power of Maven to manage our project and the deployment pipeline!
If successful, you should see a message in the Output tab indicating as much. Reviewing the contents of the Output tab will give you complete insight into the Maven process under the NetBeans hood. We used the ‘-e’ switch with Maven and enabled ‘Show Debug Output’ to provide further information about the process. The output contains all calls to Maven and subsequently to asadmin (GlassFish). You can learn a lot about using Maven and asadmin (GlassFish) by studying the Debug Output.
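The same pipeline can also be run from a terminal in the project’s root directory; for example, to deploy to the ‘development’ domain:
mvn properties:read-project-properties clean install glassfish:redeploy -e -Dglassfish.properties.file.argument=development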
Conclusion
In the first part of this post, we learned how to create a simple Java EE web application project in NetBeans, using Maven. We learned how to integrate JUnit for automated testing, and how to use Git to manage our source code.
In the second half of this post, we will learn how to configure Jenkins CI Server to retrieve our project from the remote Git repository, build, test, and assemble it into a WAR file. If these steps are successful, Jenkins will deploy our project to a GlassFish domain or multiple domains, based on the project’s stage in the software development life cycle. We will demonstrate how to automate Jenkins to achieve true continuous integration and continuous deployment.
Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit
Posted by Gary A. Stafford in Build Automation, DevOps, Enterprise Software Development, Java Development, Software Development on May 31, 2013
Build an automated testing, continuous integration, and continuous deployment system, using Maven, Hudson, WebLogic Server, JUnit, and NetBeans. Developed with Oracle’s Pre-Built Enterprise Java Development VM. Download the complete source code from Dropbox and on GitHub.
Introduction
In this post, we will build a basic automated testing, continuous integration, and continuous deployment system, using Oracle’s Pre-Built Enterprise Java Development VM. The primary goal of the system is to automatically compile, test, and deploy a simple Java EE web application to a test environment. As this post will demonstrate, the key to a successful system is not a single application, but the effective integration of all the system’s applications into a well-coordinated and consistent workflow.
Building a system such as this can be complex and time-consuming. However, Oracle’s Pre-Built Enterprise Java Development VM already has all the components we need. The Oracle VM includes NetBeans IDE for development, Apache Subversion for version control, Hudson Continuous Integration (CI) Server for build automation, JUnit and Hudson for unit test automation, and WebLogic Server for application hosting.
In addition, we will use Apache Maven, also included on the Oracle VM, to help manage our project’s dependencies, as well as the build and deployment process. Overlapping with some of Apache Ant’s build task functionality, Maven is a powerful cross-cutting tool for managing modern software development projects. This post will only draw upon a small part of Maven’s functionality.
Demonstration Requirements
To save some time, we will use the same WebLogic Server (WLS) domain we built in the last post, Deploying Applications to WebLogic Server on Oracle’s Pre-Built Development VM. We will also use code from the sample Hello World Java EE web project from that post. If you haven’t already done so, work through the last post’s example first.
Here is a quick list of requirements for this demonstration:
- Oracle VM
- Oracle’s Pre-Built Enterprise Java Development VM running on a current version of Oracle VM VirtualBox (mine: 4.2.12)
- Oracle VM has the latest system updates installed (see earlier post for directions)
- WLS domain from last post created and running in Oracle VM
- Credentials supplied with Oracle VM for Hudson (username and password)
- Windows Development Machine
- Current version of Apache Maven installed and configured (mine: 3.0.5)
- Current version of NetBeans IDE installed and configured (mine: 7.3)
- Optional: Current version of WebLogic Server installed and configured
- All environmental variables properly configured for Maven, Java, WLS, etc. (MW_HOME, M2, etc.)
The Process
The steps involved in this post’s demonstration are as follows:
- Install the WebLogic Maven Plugin into both the Oracle VM’s and the Development machine’s local Maven Repositories
- Create a new Maven Web Application Project in NetBeans
- Copy the classes from the Hello World project in the last post to the new project
- Create a properties file to store Maven configuration values for the project
- Add the Maven Properties Plugin to the Project’s POM file
- Add the WebLogic Maven Plugin to project’s POM file
- Add JUnit tests and JUnit dependencies to project
- Add a WebLogic Descriptor to the project
- Enable Tunneling on the new WLS domain from the last post
- Build, test, and deploy the project locally in NetBeans
- Add project to Subversion
- Optional: Upgrade the existing Hudson 2.2.0 and plugins on the Oracle VM to the latest 3.x version
- Create and configure new Hudson CI job for the project
- Build the Hudson job to compile, test, and deploy project to WLS
WebLogic Maven Plugin
First, we need to install the WebLogic Maven Plugin (‘weblogic-maven-plugin’) onto both the Development machine’s local Maven Repository and the Oracle VM’s Maven Repository. Installing the plugin will allow us to deploy our sample application from NetBeans and Hudson, using Maven. The weblogic-maven-plugin, a JAR file, is not part of the Maven repository by default. According to Oracle, ‘WebLogic Server provides support for Maven through the provisioning of plug-ins that enable you to perform various operations on WebLogic Server from within a Maven environment. As of this release, there are two separate plug-ins available.’ In this post, we will use the weblogic-maven-plugin, as opposed to the wls-maven-plugin. Again, according to Oracle, the weblogic-maven-plugin “delivered in WebLogic Server 11g Release 1, provides support for deployment operations.”
The best way to understand the plugin install process is by reading the Using the WebLogic Development Maven Plug-In section of the Oracle Fusion Middleware documentation on Developing Applications for Oracle WebLogic Server. It goes into detail on how to install and configure the plugin.
In a nutshell, below is a list of the commands I executed to install the weblogic-maven-plugin version 12.1.1.0 on both my Windows development machine and on my Oracle VM. If you do not have WebLogic Server installed on your development machine, and therefore no access to the plugin, install it into the Maven Repository on the Oracle VM first, then copy the jar file to the development machine and follow the normal install process from that point forward.
On Windows Development Machine:
cd %MW_HOME%/wlserver/server/lib
java -jar wljarbuilder.jar -profile weblogic-maven-plugin
mkdir c:\tmp
copy weblogic-maven-plugin.jar c:\tmp
cd c:\tmp
jar xvf c:\tmp\weblogic-maven-plugin.jar META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml
mvn install:install-file -DpomFile=META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml -Dfile=c:\tmp\weblogic-maven-plugin.jar
On the Oracle VM:
cd $MW_HOME/wlserver_12.1/server/lib
java -jar wljarbuilder.jar -profile weblogic-maven-plugin
mkdir /home/oracle/tmp
cp weblogic-maven-plugin.jar /home/oracle/tmp
cd /home/oracle/tmp
jar xvf weblogic-maven-plugin.jar META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml
mvn install:install-file -DpomFile=META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml -Dfile=weblogic-maven-plugin.jar
To test the success of your plugin installation, you can run the following maven command on Windows or Linux:
mvn help:describe -Dplugin=com.oracle.weblogic:weblogic-maven-plugin
Sample Maven Web Application
Using NetBeans on your development machine, create a new Maven Web Application. For those of you familiar with Maven, NetBeans’ Maven Web Application project is based on the ‘webapp-javaee6:1.5’ Archetype. NetBeans creates the project by executing an ‘archetype:generate’ Maven Goal. This is visible in the ‘Output’ tab after the project is created.
By default, you may have Tomcat and GlassFish as installed options on your system. Unfortunately, as I understand it, NetBeans currently does not have the ability to configure a remote connection to the WLS instance running on the Oracle VM. You do not need an instance of WLS installed on your development machine, since we are going to use the copy on the Oracle VM. We will use Maven to deploy the project to WLS on the Oracle VM, later in the post.
Next, copy the two Java class files from the previous blog post’s Hello World project to the new project’s source package. Alternately, download a zipped copy of this post’s complete sample code from Dropbox or on GitHub.
Because we are copying a RESTful web service to our new project, NetBeans will prompt us for some REST resource configuration options. To keep this new example simple, choose the first option and uncheck the Jersey option.
JUnit Tests
Next, create a set of JUnit tests for each class by right-clicking on both classes and selecting ‘Tools’ -> ‘Create Tests’.
We will use the test classes and dependencies NetBeans just added to the project. However, we will not use the actual JUnit tests themselves that NetBeans created. Properly setting up the default JUnit tests to work with an embedded version of WLS is well beyond the scope of this post.
Overwrite the contents of the test class files with the code provided from Dropbox. I have replaced the default JUnit tests with simpler versions for this demonstration. Build the project to make sure all the JUnit tests pass.
Project Properties
Next, add a new Properties file to the project, entitled ‘maven_wls_local.properties’.
Add the following key/value pairs to the properties file. These key/value pairs will be referenced in the pom.xml by the weblogic-maven-plugin, added in the next step. Placing the configuration values into a properties file is not strictly necessary for this post. However, if you wish to deploy to multiple environments, moving environment-specific configurations into separate properties files, using Maven Build Profiles, and/or using frameworks such as Spring are all best practices.
Java Properties File (maven_wls_local.properties):
# weblogic-maven-plugin configuration values for Oracle VM environment
wls.adminurl=t3://192.168.1.88:7031
wls.user=weblogic
wls.password=welcome1
wls.upload=true
wls.remote=false
wls.verbose=true
wls.middlewareHome=/labs/wls1211
wls.name=HelloWorldMaven
Maven Plugins and the POM File
Next, add the WLS Maven Plugin (‘weblogic-maven-plugin’) and the Maven Properties Plugin (‘properties-maven-plugin’) to the end of the project’s Maven pom.xml file. The Maven Properties Plugin, part of the Mojo Project, allows us to substitute configuration values in the Maven POM file from a properties file. According to codehaus.org, which hosts the Mojo Project, ‘It’s main use-case is loading properties from files instead of declaring them in pom.xml, something that comes in handy when dealing with different environments.’
Project Object Model File (pom.xml):
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.blogpost</groupId>
    <artifactId>HelloWorldMaven</artifactId>
    <version>1.0</version>
    <packaging>war</packaging>
    <name>HelloWorldMaven</name>
    <properties>
        <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-web-api</artifactId>
            <version>6.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                    <compilerArguments>
                        <endorseddirs>${endorsed.dir}</endorseddirs>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.8</version>
                <executions>
                    <execution>
                        <phase>validate</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${endorsed.dir}</outputDirectory>
                            <silent>true</silent>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>javax</groupId>
                                    <artifactId>javaee-endorsed-api</artifactId>
                                    <version>6.0</version>
                                    <type>jar</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>properties-maven-plugin</artifactId>
                <version>1.0-alpha-2</version>
                <executions>
                    <execution>
                        <phase>initialize</phase>
                        <goals>
                            <goal>read-project-properties</goal>
                        </goals>
                        <configuration>
                            <files>
                                <file>maven_wls_local.properties</file>
                            </files>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>com.oracle.weblogic</groupId>
                <artifactId>weblogic-maven-plugin</artifactId>
                <version>12.1.1.0</version>
                <configuration>
                    <adminurl>${wls.adminurl}</adminurl>
                    <user>${wls.user}</user>
                    <password>${wls.password}</password>
                    <upload>${wls.upload}</upload>
                    <action>deploy</action>
                    <remote>${wls.remote}</remote>
                    <verbose>${wls.verbose}</verbose>
                    <source>${project.build.directory}/${project.build.finalName}.${project.packaging}</source>
                    <name>${wls.name}</name>
                </configuration>
                <executions>
                    <execution>
                        <phase>install</phase>
                        <goals>
                            <goal>deploy</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
WebLogic Deployment Descriptor
A WebLogic Deployment Descriptor file is the last item we need to add to the new Maven Web Application project. NetBeans has descriptors for multiple servers, including Tomcat (context.xml), GlassFish (glassfish-web.xml), and WebLogic (weblogic.xml). They provide a convenient location to store server-specific properties, used during the deployment of the project.
Add the ‘context-root’ tag. The value will be the name of our project, ‘HelloWorldMaven’, as shown below. According to Oracle, “the context-root element defines the context root of this standalone Web application.” The context-root of the application will form part of the URL we enter to display our application, later.
Make sure the WebLogic descriptor file (‘weblogic.xml’) is placed in the WEB-INF folder. If not, the descriptor’s properties will not be read. If the descriptor is not read, the context-root of the deployed application will default to the name of the project’s WAR file. Instead of ‘HelloWorldMaven’ as the context-root, you would see ‘HelloWorldMaven-1.0-SNAPSHOT’.
Enable Tunneling
Before we compile, test, and deploy our project, we need to make a small change to WLS. In order to deploy our project remotely to the Oracle VM’s WLS, using the WebLogic Maven Plugin, we must enable tunneling on our WLS domain. According to Oracle, the ‘Enable Tunneling’ option “Specifies whether tunneling for the T3, T3S, HTTP, HTTPS, IIOP, and IIOPS protocols should be enabled for this server.” To enable tunneling, from the WLS Administration Console, select the ‘AdminServer’ Server, ‘Protocols’ tab, ‘General’ sub-tab.
Build and Test the Project
Right-click and select ‘Build’, ‘Clean and Build’, or ‘Build with Dependencies’. NetBeans executes a ‘mvn install’ command. This command initiates a series of Maven Goals. The goals, visible in NetBeans’ Output window, include ‘dependency:copy’, ‘properties:read-project-properties’, ‘compiler:compile’, ‘surefire:test’, and so forth. They move the project’s code through the Maven Build Lifecycle. Most goals are self-explanatory by their title.
The last Maven Goal to execute, if the other goals have succeeded, is the ‘weblogic:deploy’ goal. This goal deploys the project to the Oracle VM’s WLS domain we configured in our project. Recall that in the POM file, we bound the weblogic-maven-plugin’s ‘deploy’ goal to the ‘install’ lifecycle phase, the execution phase specified in the plugin’s executions section. If all goals complete without error, you have just compiled, tested, and deployed your first Maven web application to a remote WLS domain. Later, we will have Hudson do it for us, automatically.
Executing Maven Goals in NetBeans
A small aside, if you wish to run alternate Maven goals in NetBeans, right-click on the project and select ‘Custom’ -> ‘Goals…’. Alternately, click on the lighter green arrows (‘Re-run with different parameters’), adjacent to the ‘Output’ tab.
For example, in the ‘Run Maven’ pop-up, replace ‘install’ with ‘surefire:test’ or simply ‘test’. This will compile the project and run the JUnit tests. There are many Maven goals that can be run this way. Use the Control key and Space Bar key combination in the Maven Goals text box to display a pop-up list of available goals.
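The same goals can also be run from a terminal in the project directory, for example:
mvn test
mvn surefire:test
mvn dependency:tree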
Subversion
Now that our project is complete and tested, we will commit the project to Subversion (SVN). We will commit a copy of our source code to SVN, installed on the Oracle VM, for safe-keeping. Having our source code in SVN also allows Hudson to retrieve a copy. Hudson will then compile, test, and deploy the project to WLS.
The Repository URL, User, and Password are all supplied in the Oracle VM information, along with the other URLs and credentials.
When you import your project for the first time, you will see more files than are displayed below. I had already imported part of the project earlier while creating this post. Therefore, most of my files were already managed by Subversion.
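From the command line, an initial import would look something like the following; the repository URL, username, and password are placeholders for the values supplied with the Oracle VM:
cd HelloWorldMaven
svn import . http://[your_vm_ip_address]/svn/HelloWorldMaven ^
 --username your_svn_user --password your_svn_password ^
 -m "Initial import of the HelloWorldMaven project"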
Upgrading Hudson CI Server
The Oracle VM comes with Hudson pre-installed in its own WLS domain, ‘hudson-ci_dev’, running on port 5001. Start the domain from within the VM by double-clicking the ‘WLS 12c – Hudson CI 5001’ icon on the desktop, or by executing the domain’s WLS start-up script from a terminal window:
/labs/wls1211/user_projects/domains/hudson-ci_dev/startWebLogic.sh
Once started, the WLS Administration Console 12c is accessible at the following URL. Use your VM’s IP address, or ‘localhost’ if you are within the VM.
http://[your_vm_ip_address]:5001/console/login/LoginForm.jsp
The Oracle VM comes loaded with Hudson version 2.2.0. I strongly suggest updating Hudson to the latest version (3.0.1 at the time of this post). To upgrade, I downloaded, deployed, and started a new 3.0.1 version in the same domain, on the same ‘AdminServer’ Server. I was able to do this remotely, from my development machine, using the browser-based Hudson Dashboard and WLS Administration Console. There is no need to do any of the installation from within the VM itself.
When the upgrade is complete, stop the 2.2.0 deployment currently running in the WLS domain.
The new version of Hudson is accessible from the following URL (adjust the URL for your exact version of Hudson):
http://[your_vm_ip_address]:5001/hudson-3.0.1/
It’s also important to update all the Hudson plugins. Hudson makes this easy with the Hudson Plugin Manager, accessible via the ‘Manage Hudson’ option.
Note on the top of the Manage Hudson page, there is a warning about the server’s container not using UTF-8 to decode URLs. You can follow this post, if you want to resolve the issue by configuring Hudson differently. I did not worry about it for this post.
Building a Hudson Job
We are ready to configure Hudson to build, test, and deploy our Maven Web Application project. Return to the ‘Hudson Dashboard’, select ‘New Job’, and then ‘Build a new free-style software job’. This will open the ‘Job Configurations’ for the new job.
Start by configuring the ‘Source Code Management’ section. The Subversion Repository URL is the same as the one you used in NetBeans to commit the code. To avoid the access error seen below, you must provide the Subversion credentials to Hudson, just as you did in NetBeans.
Next, configure the Maven 3 Goals. I chose the ‘clean’ and ‘install’ goals. Using ‘clean’ ensures the project is compiled each time by deleting the output of the build directory.
Optionally, you can configure Hudson to publish the JUnit test results as shown below. Be sure to save your configuration.
Start a build of the new Hudson Job, by clicking ‘Build Now’. If your Hudson job’s configurations are correct, and the new WLS domain is running, you should have a clean build. This means the project compiled without error, all tests passed, and the web application’s WAR file was deployed successfully to the new WLS domain within the Oracle VM.
WebLogic Server
To view the newly deployed Maven Web Application, log into the WebLogic Server Administration Console for the new domain. In my case, the new domain was running on port 7031, so the URL would be:
http://[your_vm_ip_address]:7031/console/login/LoginForm.jsp
You should see the deployment, in an ‘Active’ state, as shown below.
To test the deployment, open a new browser tab and go to the URL of the Servlet. In this case the URL would be:
http://[your_vm_ip_address]:7031/HelloWorldMaven/resources/helloWorld
You should see the original phrase from the previous project displayed, ‘Hello WebLogic Server!’.
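You can also exercise the service from the command line with cURL; for example:
curl http://[your_vm_ip_address]:7031/HelloWorldMaven/resources/helloWorld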
To further test the system, make a simple change to the project in NetBeans. I changed the name variable’s default value from ‘WebLogic Server’ to ‘Hudson, Maven, and WLS’. Commit the change to SVN.
Return to Hudson and run a new build of the job.
After the build completes, refresh the sample Web Application’s browser window. You should see the new text string displayed. Your code change was just re-compiled, re-tested, and re-deployed by Hudson.
True Continuous Deployment
Although Hudson is now doing a lot of the work for us, the system still is not fully automated. We are still manually building our Hudson Job, in order to deploy our application. If you want true continuous integration and deployment, you need to trust the system to automatically deploy the project, based on certain criteria.
SCM polling with Hudson is one way to demonstrate continuous deployment. In ‘Job Configurations’, turn on ‘Poll SCM’ and enter Unix cron-like value(s) in the ‘Schedule’ text box. In the example below, I have indicated a polling frequency of once every hour (‘@hourly’). Every hour, Hudson will look for committed changes to the project in Subversion. If changes are found, Hudson retrieves the source code, compiles, and tests. If the project compiles and passes all tests, it is deployed to WLS.
There are less resource-intensive methods of reacting to changes than SCM polling. Push notifications from the repository are an alternate, preferable method.
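As a rough sketch, assuming the Hudson job is configured to allow remote build triggers with an authentication token, a Subversion post-commit hook on the server could notify Hudson after each commit; the job name, port, and token below are hypothetical:
#!/bin/sh
# hypothetical hooks/post-commit script on the Subversion server
# asks Hudson to build the job after every commit (token value is a placeholder)
curl -s "http://[your_vm_ip_address]:5001/hudson-3.0.1/job/HelloWorldMaven/build?token=MY_BUILD_TOKEN"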
Additionally, you should configure messaging in Hudson to notify team members of new deployments and the changes they contain. You should also implement a good deployment versioning strategy, for tracking purposes. Knowing the version of deployed artifacts is critical for accurate change management and defect tracking.
Helpful Links
Configuring and Using the WebLogic Maven Plug-In for Deployment
Jenkins: Building a Software Project
Kohsuke Kawaguchi: Polling must die: triggering Jenkins builds from a git hook