Posts Tagged Git

My Favorite Puppet, Docker, Git, and npm Code Snippets

The following is a collection of my favorite Puppet, Docker, Git, and npm commands and code snippets.

Puppet

###############################################################################
# Helpful Puppet commands and code snippets
###############################################################################
sudo puppet module list # different paths w/ or w/o sudo
puppet config print
puppet config print | grep <search_term>
sudo cat /etc/puppet/puppet.conf | grep <search_term>
sudo vi /etc/puppet/puppet.conf
sudo puppet module install <author>/<module_name>
sudo puppet module install <author>-<module_name>-<version>.tar.gz # Geppetto Export Module to File System
sudo puppet module install -i /etc/puppet/environments/production/modules <author>/<module_name>
sudo puppet module uninstall <module_name>
sudo rm -rf /etc/puppet/environments/production/modules/<module_name> # use remove vs. uninstall when in env. path
sudo puppet apply <manifest>.pp
# Install Modules from GitHub onto Foreman node:
# Apache Config Example
wget -N https://github.com/garystafford/garystafford-apache_example_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force
# HAProxy Config Example
wget -N https://github.com/garystafford/garystafford-haproxy_node_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force

Docker

###############################################################################
# Helpful Docker commands and code snippets
###############################################################################
### CONTAINERS ###
docker stop $(docker ps -a -q) #stop ALL containers
docker rm -f $(docker ps -a -q) # remove ALL containers
docker rm -f $(sudo docker ps --before="container_id_here" -q) # can also filter
# exec into container
docker exec -it $(docker container ls | grep '<search_term>' | awk '{print $1}') sh
# exec into container on Windows with Git Bash
winpty docker exec -it $(docker container ls | grep '<search_term>' | awk '{print $1}') sh
# helps with error: 'unexpected end of JSON input'
docker rm -f $(docker ps -a -q) # Remove all in one command with --force
docker exec -i -t "container_name_here" /bin/bash # Go to container command line
# to exit above use 'ctrl p', 'ctrl q' (don't exit or it will be in exited state)
docker rm $(docker ps -q -f status=exited) # remove all exited containers
### IMAGES ###
# list images and containers
docker images | grep "search_term_here"
# remove image(s) (must remove associated containers first)
docker rmi -f image_id_here # remove image(s)
docker rmi -f $(docker images -q) # remove ALL images!!!
docker rmi -f $(docker images | grep "^<none>" | awk '{print $3}') # remove all <none> images
docker rmi -f $(docker images | grep 'search_term_here' | awk '{print $1}') # i.e. 2 days ago
docker rmi -f $(docker images | grep 'search_1\|search_2' | awk '{print $1}')
### DELETE BOTH IMAGES AND CONTAINERS ###
docker images && docker ps -a
# stop and remove containers and associated images with common grep search term
docker ps -a --no-trunc | grep "search_term_here" | awk '{print $1}' | xargs --no-run-if-empty docker stop && \
docker ps -a --no-trunc | grep "search_term_here" | awk '{print $1}' | xargs --no-run-if-empty docker rm && \
docker images --no-trunc | grep "search_term_here" | awk '{print $3}' | xargs --no-run-if-empty docker rmi
# remove exited containers and delete only non-tagged (dangling) images
docker ps --filter 'status=exited' -a -q | xargs --no-run-if-empty docker rm && \
docker images --filter 'dangling=true' -q | xargs --no-run-if-empty docker rmi
### DELETE NETWORKS AND VOLUMES ###
# clean up orphaned volumes
docker volume rm $(docker volume ls -qf dangling=true)
# clean up unused networks (the built-in bridge/host/none networks cannot be removed and are skipped with errors)
docker network rm $(docker network ls -q)
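# On newer Docker releases (1.13+), the built-in prune commands are a simpler,
# safer alternative for the same cleanup (availability depends on your Docker version)
docker container prune # remove all stopped containers
docker volume prune # remove all unused (dangling) volumes
docker network prune # remove all unused networks
docker image prune # remove dangling images (add -a for all unused images)
docker system prune # containers, networks, and dangling images in one command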
### NEW IMAGES/CONTAINERS ###
# create new docker container, ie. ubuntu
docker pull ubuntu:latest # 1x pull down image
docker run -i -t ubuntu /bin/bash # drops you into new container as root
### OTHER ###
# install docker first using directions for installing latest version
# https://docs.docker.com/installation/ubuntulinux/#ubuntu-trusty-1404-lts-64-bit
# other great tips: http://www.centurylinklabs.com/15-quick-docker-tips/
# fix fig / docker config: https://gist.github.com/RuslanHamidullin/94d95328a7360d843e52

Git

###############################################################################
# Helpful Git/GitHub commands and code snippets
###############################################################################
#### Remove/Squash All History on Master - CAUTION! ####
# https://stackoverflow.com/a/26000395/580268
git checkout --orphan latest_branch \
&& git add -A \
&& git commit -am "Remove/squash all project history" \
&& git branch -D master \
&& git branch -m master \
&& git push -f origin master
#### List all committed files being tracked by git
git ls-tree --full-tree -r HEAD
### Display remote origin ###
git remote --verbose
### Clone single branch from repo
git clone -b <branch> --single-branch <remote_repo>
### Get all branches from a repo
git branch -r | grep -v '\->' | while read remote; do git branch --track "${remote#origin/}" "$remote"; done
git fetch --all
git pull --all
### Delete a single remote branch
git push origin --delete <branch>
### Change commit message ###
git commit --amend -m "Revised message here..."
git push -f # if already pushed
### Undo an Add File(s)
git reset <filename>
### Removing files from GitHub ###
# that should have been in .gitignore
git ls-tree -r master --name-only # list tracked files
git rm --cached <file> # or: git rm -r --cached <folder>, i.e. git rm -r --cached .idea/
git add -A && git commit -m "Removing cached files from GitHub" && git push
# works even better!
git rm --cached -r . && git add .
### Tagging repos ###
# http://git-scm.com/book/en/v2/Git-Basics-Tagging
git tag -a v0.1.0 -m "version 0.1.0"
git push --tags
git tag # list all tags
# Finding and tagging old commit points
git log --pretty=oneline # find hash of commit
git tag -a v0.1.0 -m 'version 0.1.0' <partial_commit_hash_here>
git push --tags #origin master
git tag # list all tags
git checkout tags/v0.1.0 # check out that tagged point in commit history
# Changing a tag to a new commit
git push origin :refs/tags/<tagname>
git tag -fa <tagname>
git push origin master --tags
# Remove a tag both locally and remote
git tag -d <tagname>; # local
git push origin :refs/tags/<tagname> # remote
### Committing changes to GitHub ###
git add -A # add all
git commit -m "my changes..."
git push
# Combined
git add . && git commit -am "Initial commit of project" && git push
### Adding an existing project to GitHub ###
# Create repo in GitHub first
git add -A # add all
git commit -m "Initial commit of project"
git remote add origin <https://github.com/username/repo.git>
git remote -v # verify new remote
git pull origin master # pull down license and .gitignore
# you might get a message asking you to merge manually: 'Error reading...Press Enter to continue starting nano.'
git push --set-upstream origin master
git status # check if add/commit/push required?
### Branching GitHub repos ###
# https://github.com/Kunena/Kunena-Forum/wiki/Create-a-new-branch-with-git-and-manage-branches
# Run from within local repo
NEW_BRANCH=<new_branch_here>
git checkout -b ${NEW_BRANCH}
git push origin ${NEW_BRANCH}
git branch # list branches
git add -A # add all
git commit -m "my changes..."
git push --set-upstream origin ${NEW_BRANCH}
git checkout master # switch back to master
# Merge in the <new_branch_here> branch to master
# https://www.atlassian.com/git/tutorials/using-branches/git-merge
git checkout master
git merge ${NEW_BRANCH}
git branch -d ${NEW_BRANCH} # deletes branch!!
# show pretty history
git log --graph --oneline --all --decorate --topo-order
# Rollback commit
git log # find commit hash
git revert <commit_hash>

npm

###############################################################################
# Helpful npm commands and code snippets
###############################################################################
# list top level packages w/o dependencies
npm list --depth=0
npm list --depth=0 -g
# list top level packages that are outdated w/o dependencies
npm outdated | sort
npm outdated -g | sort
# install packages
npm install -g <pkg>@<x.x.x>
npm install -g <pkg>@latest
# update packages
npm update --save
npm update --save-dev
npm update -g <pkg1> <pkg2> <pkg3>
# Update to the latest dependencies, ignoring specified versions, and update package.json!
npm install -g npm-check-updates
npm-check-updates
npm-check-updates -u
npm update # verify new versions work with project
# prune and rebuild
npm prune
npm rebuild
npm rebuild -g
# find package dependencies
npm ls <pkg>
# alternate tool to check and update dependencies
npm install -g david
david -g
david update -g
bower list | sort
bower update
bower prune # uninstalls local extraneous packages
# lists available tasks
grunt --help

Install Latest Node.js and npm in a Docker Container

Install the latest versions of Node.js and npm, into a Docker container, with or without the need for root access. Easily update both applications to the latest versions.

Install and Confirm Node and npm

Ubuntu and Node

Recently, I was setting up a new development laptop with Ubuntu 14.10 (Utopic Unicorn). As part of the setup, I needed to install several development tools, including Node.js and npm. Researching the current recommendations for installing Node.js and npm on Ubuntu, I found that the traditional ‘apt-get’ command does not always install the latest versions of either application. Additionally, ‘apt-get’ makes updating those versions difficult.

After a lot of investigation, I created three different snippets of code to install the latest copies of Node.js and npm. Some of my code came from Isaac Z. Schlueter’s series of installation Gists and a post on StackOverflow by Pascal Hartig. Joyent and others recommended Isaac’s Gists for installing earlier versions of Node.js and npm. Other code came from posts by DigitalOcean. The versions are as follows:

  • Version 1: using ‘apt-get install’
  • Version 2: using curl, make, and npmjs.org’s install script
  • Version 3: version 2 without requiring ‘sudo’ to use npm*

*There is some debate on the use of ‘sudo’ with some earlier versions of npm. It appears not to be recommended with the latest versions of npm.

Docker

Docker containers and virtual machines (VMs) are ideal platforms for developing and testing applications locally. I often create a Docker container or VirtualBox VM to install and test new scripts, before running them within our software environments. To test this code, I created three separate Docker containers, based on the official 14.04 Ubuntu base image, located on Docker Hub. I then executed each version of code within a container. After installation testing, I chose version 2 for my laptop.

Docker Ubuntu Image and Containers

GitHub Gists

The three versions of install scripts, available on gist.github.com, perform the following tasks:

  • Creates Docker container
  • Updates Ubuntu system packages within container
  • Creates a new ‘testuser’ account within the container
  • Installs required software to install Node.js, if necessary (curl, make, etc.)
  • Installs Node.js and npm
  • Installs some common full-stack JavaScript npm packages
  • Verifies installation locations and contents are correct
###############################################################################
# Version 1: using ‘apt-get install’
# Installs using apt-get
# Requires update to npm afterwards
# Harder to get latest copy of node
# Requires sudo to use npm
###############################################################################
# create new docker ubuntu container
sudo docker run -i -t ubuntu /bin/bash # drops you into container as root
# update and install all required packages (no sudo required as root)
# https://gist.github.com/isaacs/579814#file-only-git-all-the-way-sh
apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq curl git nano
# install from nodesource using apt-get
# https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-an-ubuntu-14-04-server
curl -sL https://deb.nodesource.com/setup | sudo bash - && \
apt-get install -yq nodejs build-essential
# fix npm - not the latest version installed by apt-get
npm install -g npm
# add user with sudo privileges within Docker container
# without adduser input questions
# http://askubuntu.com/questions/94060/run-adduser-non-interactively/94067#94067
USER="testuser" && \
adduser --disabled-password --gecos "" $USER && \
sudo usermod -a -G sudo $USER && \
echo "$USER:abc123" | chpasswd && \
su - $USER # switch to testuser
# install common full-stack JavaScript packages globally
# http://blog.nodejs.org/2011/03/23/npm-1-0-global-vs-local-installation
sudo npm install -g yo grunt-cli bower express
# optional, check locations and packages are correct
which node; node -v; which npm; npm -v; \
npm ls -g --depth=0
###############################################################################
# Version 2: using curl, make, and npmjs.org’s install script
# Installs using make and shell script
# Easier to update node and npm versions in future
# Requires sudo to use npm
###############################################################################
# create new docker ubuntu container
sudo docker run -i -t ubuntu /bin/bash # drops you into container as root
# update and install all required packages (no sudo required as root)
# https://gist.github.com/isaacs/579814#file-only-git-all-the-way-sh
apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq g++ libssl-dev apache2-utils curl git python make nano
# install latest Node.js and npm
# https://gist.github.com/isaacs/579814#file-node-and-npm-in-30-seconds-sh
mkdir ~/node-latest-install && cd $_ && \
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1 && \
make install && \
curl https://www.npmjs.org/install.sh | sh # note: 'make install' takes a few minutes to build
# add user with sudo privileges within Docker container
# without adduser input questions
# http://askubuntu.com/questions/94060/run-adduser-non-interactively/94067#94067
USER="testuser" && \
adduser --disabled-password --gecos "" $USER && \
sudo usermod -a -G sudo $USER && \
echo "$USER:abc123" | chpasswd && \
su - $USER # switch to testuser
# install common full-stack JavaScript packages globally
# http://blog.nodejs.org/2011/03/23/npm-1-0-global-vs-local-installation
sudo npm install -g yo grunt-cli bower express
# optional, check locations and packages are correct
which node; node -v; which npm; npm -v; \
npm ls -g --depth=0
###############################################################################
# Version 3: version 2 without requiring ‘sudo’ to use npm
# Does NOT require sudo to use npm
###############################################################################
# create new docker ubuntu container
sudo docker run -i -t ubuntu /bin/bash # drops you into container as root
# update and install all required packages (no sudo required as root)
# https://gist.github.com/isaacs/579814#file-only-git-all-the-way-sh
apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq g++ libssl-dev apache2-utils curl git python make nano
# add user with sudo privileges within Docker container
# without adduser input questions
# http://askubuntu.com/questions/94060/run-adduser-non-interactively/94067#94067
USER="testuser" && \
adduser --disabled-password --gecos "" $USER && \
sudo usermod -a -G sudo $USER && \
echo "$USER:abc123" | chpasswd && \
su - $USER # switch to testuser
# setting up npm for global installation without sudo
# http://stackoverflow.com/a/19379795/580268
MODULES="local" && \
echo prefix = ~/$MODULES >> ~/.npmrc && \
echo "export PATH=\$HOME/$MODULES/bin:\$PATH" >> ~/.bashrc && \
. ~/.bashrc && \
mkdir ~/$MODULES
# install Node.js and npm
# https://gist.github.com/isaacs/579814#file-node-and-npm-in-30-seconds-sh
mkdir ~/node-latest-install && cd $_ && \
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1 && \
./configure --prefix=$HOME/$MODULES && \
make install && \
curl https://www.npmjs.org/install.sh | sh # note: 'make install' takes a few minutes to build
# install common fullstack JavaScript packages globally
npm install -g yo grunt-cli bower express
# optional, check locations and packages are correct
which node; node -v; which npm; npm -v; \
npm ls -g --depth=0

Running Code

Installing Node, npm, and New User Account

Installing and Verifying npm Packages

Retrieving and Displaying Data with AngularJS and the MEAN Stack: Part II

Explore various methods of retrieving and displaying data using AngularJS and the MEAN Stack.

Mobile View on Android Smartphone

Introduction

In this two-part post, we are exploring methods of retrieving and displaying data using AngularJS and the MEAN Stack. The post’s corresponding GitHub project, ‘meanstack-data-samples‘, is based on William Lepinski’s ‘generator-meanstack‘, which in turn is based on Yeoman’s ‘generator-angular‘. As a bonus, since both projects are based on ‘generator-angular’, all the code generators work. Generators can save a lot of time and aggravation when building AngularJS components.

In part one of this post, we installed and configured the ‘meanstack-data-samples’ project from GitHub. In part two, we will look at five examples of retrieving and displaying data using AngularJS:

  • Function within AngularJS Controller returns array of strings.
  • AngularJS Service returns an array of simple object literals to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller using a resource object
    (In GitHub project, but not discussed in this post).
  • AngularJS Factory returns a collection of documents from MongoDB Database to the controller.
  • AngularJS Factory returns results from Google’s RESTful Web Search API to the controller.

Project Structure

For brevity, I have tried to limit the number of files in the project. There are two main views, both driven by a single controller. The primary files, specific to data retrieval and display, are as follows:

  • Default site file (./)
    • index.html – loads all CSS and JavaScript files, and views
  • App and Routes (./app/scripts/)
    • app.js – instantiates app and defines routes (route/view/controller relationship)
  • Views (./app/views/)
    • data-bootstrap.html – uses Twitter Bootstrap
    • data-no-bootstrap.html – basically the same page, without Twitter Bootstrap
  • Controllers (./app/scripts/controllers/)
    • DataController.js (DataController) – single controller used by both views
  • Services and Factories (./app/scripts/services/)
    • meanService.js (meanService) – service returns array of object literals to DataController
    • jsonFactory.js (jsonFactory) – factory returns contents of JSON file
    • jsonFactoryResource.js (jsonFactoryResource) – factory returns contents of JSON file using resource object (new)
    • mongoFactory.js (mongoFactory) – factory returns MongoDB collection of documents
    • googleFactory.js (googleFactory) – factory calls Google Web Search API
  • Models (./app/models/)
    • Components.js – mongoose constructor for the Component schema definition
  • Routes (./app/)
    • routes.js – mongoose RESTful routes
  • Data (./app/data/)
    • otherStuff.json – static JSON file loaded by jsonFactory
  • Environment Configuration (./config/environments/)
    • index.js – defines all environment configurations
    • test.js – Configuration specific to the current ‘test’ environment
  • Unit Tests (./test/spec/…)
    • Various files – all controller and services/factories unit test files are in here…

Project in JetBrains WebStorm 8.0

There are many more files critical to the project’s functionality, including app.js, Gruntfile.js, bower.json, package.json, server.js, karma.conf.js, and so forth. You should understand each of these files’ purposes.

Function Returns Array

In the first example, we have the yeomanStuff() method, a member of the $scope object, within the DataController. The yeomanStuff() method returns an array object containing three strings. In JavaScript, a method is a function associated with an object.

$scope.yeomanStuff = function () {
  return [
    'yo',
    'Grunt',
    'Bower'
  ];
};

‘yeomanStuff’ Method of the ‘$scope’ Object

The yeomanStuff() method is called from within the view by Angular’s ng-repeat directive. The directive, ng-repeat, allows us to loop through the array of strings and add them to an unordered list. We will use ng-repeat for all the examples in this post.

<ul class="list-group">
  <li class="list-group-item"
	  ng-repeat="stuff in yeomanStuff()">
	{{stuff}}
  </li>
</ul>

Method1

Although this first example is easy to implement, it is somewhat impractical. Generally, you would not embed static data into your code. This limits your ability to change the data, independent of the application’s code. In addition, the function is tightly coupled to the controller, limiting its reuse.

Service Returns Array

In the second example, we also use data embedded in our code. However, this time we have improved the architecture slightly by moving the data to an Angular Service. The meanService contains the getMeanStuff() function, which returns an array containing four object literals. Using a service, we can call the getMeanStuff() function from anywhere in our project.

angular.module('generatorMeanstackApp')
  .service('meanService', function () {
    this.getMeanStuff = function () {
      return ([
        {
          component: 'MongoDB',
          url: 'http://www.mongodb.org'
        },
        {
          component: 'Express',
          url: 'http://expressjs.com'
        },
        {
          component: 'AngularJS',
          url: 'http://angularjs.org'
        },
        {
          component: 'Node.js',
          url: 'http://nodejs.org'
        }
      ])
    };
  });

Within the DataController, we assign the array object, returned from the meanService.getMeanStuff() function, to the meanStuff object property of the $scope object.

$scope.meanStuff = {};
try {
  $scope.meanStuff = meanService.getMeanStuff();
} catch (error) {
  console.error(error);
}

‘meanStuff’ Property of the ‘$scope’ Object

The meanStuff object property is accessed from within the view, using ng-repeat. Each object in the array contains two properties, component and url. We display the property values on the page using Angular’s double curly brace expression notation (i.e. ‘{{stuff.component}}‘).

<ul class="nav nav-pills nav-stacked">
  <li ng-repeat="stuff in meanStuff">
    <a href="{{stuff.url}}"
       target="_blank">{{stuff.component}}</a>
  </li>
</ul>

Method2

Promises, Promises…

The remaining methods implement an asynchronous (non-blocking) programming model, using the $http and $q services of Angular’s ng module. These services implement the asynchronous Promise and Deferred APIs. According to Chris Webb, in his excellent two-part post, Promise & Deferred objects in JavaScript: Theory and Semantics, a promise represents a value that is not yet known and a deferred represents work that is not yet finished. I strongly recommend reading Chris’ post before continuing. I also highly recommend watching RED Ape EDU’s YouTube video, Deferred and Promise objects in Angular js. This video really clarified the promise and deferred concepts for me.

Factory Loads JSON File

In the third example, we will read data from a JSON file (‘./app/data/otherStuff.json‘) using an AngularJS Factory. The differences between a service and a factory can be confusing, and are beyond the scope of this post. Here are two great links on the differences, one on Angular’s site and one on StackOverflow.

{
  "components": [
    {
      "component": "jQuery",
      "url": "http://jquery.com"
    },
    {
      "component": "Jade",
      "url": "http://jade-lang.com"
    },
    {
      "component": "JSHint",
      "url": "http://www.jshint.com"
    },
    {
      "component": "Karma",
      "url": "http://karma-runner.github.io"
    },
    ...
  ]
}

The jsonFactory contains the getOtherStuff() function. This function uses $http.get() to read the JSON file and returns a promise of the response object. According to Angular’s site, “since the returned value of calling the $http function is a promise, you can also use the then method to register callbacks, and these callbacks will receive a single argument – an object representing the response. A response status code between 200 and 299 is considered a success status and will result in the success callback being called.” As I mentioned, a complete explanation of deferreds and promises is too complex for this short post.

angular.module('generatorMeanstackApp')
  .factory('jsonFactory', function ($q, $http) {
    return {
      getOtherStuff: function () {
        var deferred = $q.defer(),
          httpPromise = $http.get('data/otherStuff.json');

        httpPromise.then(function (response) {
          deferred.resolve(response);
        }, function (error) {
          console.error(error);
        });

        return deferred.promise;
      }
    };
  });

The response object contains the data property. Angular defines the response object’s data property as a string or object, containing the response body transformed with the transform functions. One of the properties of the data property is the components array containing the seven objects. Within the DataController, if the promise is resolved successfully, the callback function assigns the contents of the components array to the otherStuff property of the $scope object.

$scope.otherStuff = {};
jsonFactory.getOtherStuff()
  .then(function (response) {
    $scope.otherStuff = response.data.components;
  }, function (error) {
    console.error(error);
  });

‘otherStuff’ Property of the ‘$scope’ Object

The otherStuff property is accessed from the view, using ng-repeat, which displays individual values, exactly like the previous methods.

<ul class="nav nav-pills nav-stacked">
  <li ng-repeat="stuff in otherStuff">
    <a href="{{stuff.url}}"
       target="_blank">{{stuff.component}}</a>
  </li>
</ul>

Method3

This method of reading a JSON file is often used for configuration files. Static configuration data is stored in a JSON file, external to the actual code. This way, the configuration can be modified without requiring the main code to be recompiled and deployed. It is a technique used by the components within this very project. Take for example the bower.json files and the package.json files. Both contain configuration data, stored as JSON, used by Bower and npm to perform package management.

Factory Retrieves Data from MongoDB

In the fourth example, we will read data from a MongoDB database. There are a few more moving parts in this example than in the previous examples. Below are the documents in the components collection of the meanstack-test MongoDB database, which we will retrieve and display with this method.  The meanstack-test database is defined in the test.js environments file (discussed in part one).

‘meanstack-test’ Database’s ‘components’ Collection Documents

To connect to the MongoDB, we will use Mongoose. According to their website, “Mongoose provides a straight-forward, schema-based solution to modeling your application data and includes built-in type casting, validation, query building, business logic hooks and more, out of the box.” But wait, MongoDB is schemaless? It is. However, Mongoose provides a schema-based API for us to work within. Again, according to Mongoose’s website, “Everything in Mongoose starts with a Schema. Each schema maps to a MongoDB collection and defines the shape of the documents within that collection.”

In our example, we create the componentSchema schema, and pass it to the Component model (the ‘M’ in MVC). The componentSchema maps to the database’s components collection.

var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var componentSchema = new Schema({
  component: String,
  url: String
});

module.exports = mongoose.model('Component', componentSchema);

The routes.js file associates routes (Request URIs) and HTTP methods to Mongoose actions. These actions are usually CRUD operations. In our simple example, we have a single route, ‘/api/components‘, associated with an HTTP GET method. When an HTTP GET request is made to the ‘/api/components‘ request URI, Mongoose calls the Model.find() function, ‘Component.find()‘, with a callback function parameter. The Component.find() function returns all documents in the components collection.

var Component = require('./models/component');

module.exports = function (app) {
  app.get('/api/components', function (req, res) {
    Component.find(function (err, components) {
      if (err)
        res.send(err);

      res.json(components);
    });
  });
};

You can test these routes directly. Below are the results of calling the ‘/api/components‘ route in Chrome.
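
The same route can also be exercised from the command line; for example, assuming the Express server is running locally on the default port (3000) from the ‘test’ configuration, a simple curl request returns the same JSON:

curl http://localhost:3000/api/components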

Response from MongoDB Using Mongoose

The mongoFactory contains the getMongoStuff() function. This function uses $http.get() to call the ‘/api/components‘ route. The route is resolved by the routes.js file, which in turn executes the Component.find() command. The promise of an array of objects is returned by the getMongoStuff() function. Each object represents a document in the components collection.

angular.module('generatorMeanstackApp')
  .factory('mongoFactory', function ($q, $http) {
    return {
      getMongoStuff: function () {
        var deferred = $q.defer(),
          httpPromise = $http.get('/api/components');

        httpPromise.success(function (components) {
          deferred.resolve(components);
        })
          .error(function (error) {
            console.error('Error: ' + error);
          });

        return deferred.promise;
      }
    };
  });

Within the DataController, if the promise is resolved successfully, the callback function assigns the array of objects, representing the documents in the collection, to the mongoStuff property of the $scope object.

$scope.mongoStuff = {};
mongoFactory.getMongoStuff()
  .then(function (components) {
    $scope.mongoStuff = components;
  }, function (error) {
    console.error(error);
  });

‘mongoStuff’ Property of the ‘$scope’ Object

The mongoStuff property is accessed from the view, using ng-repeat, which displays individual values using Angular expressions, exactly like the previous methods.

<ul class="list-group">
  <li class="list-group-item" ng-repeat="stuff in mongoStuff">
    <b>{{stuff.component}}</b>
    <div class="text-muted">{{stuff.description}}</div>
  </li>
</ul>

Method4

Factory Calls Google Search

Post Update: the Google Web Search API is no longer available as of September 29, 2014. The post’s example will no longer return a result set. Please migrate to the Google Custom Search API (https://developers.google.com/custom-search/). Please read the ‘Calling Third-Party HTTP-based RESTful APIs from the MEAN Stack‘ post for more information on using Google’s Custom Search API.

In the last example, we will call the Google Web Search API from an AngularJS Factory. The Google Web Search API exposes a simple RESTful interface. According to Google, “in all cases, the method supported is GET and the response format is a JSON encoded result set with embedded status codes.” Google describes this method of using RESTful access to the API, as “for Flash developers, and those developers that have a need to access the Web Search API from other Non-JavaScript environment.” However, we will access it in our JavaScript-based MEAN stack application, due to the API’s ease of implementation.

Note, according to Google’s site, “the Google Web Search API has been officially deprecated…it will continue to work…but the number of requests…will be limited. Therefore, we encourage you to move to Custom Search, which provides an alternative solution.” Google Search, or more specifically, the Custom Search JSON/Atom API, is a newer API, but the Web Search API is easier to demonstrate in this brief post than the Custom Search JSON/Atom API, which requires the use of an API key.

The googleFactory contains the getSearchResults() function. This function uses $http.jsonp() to call the Google Web Search API RESTful interface and return the promise of the JSONP-formatted (‘JSON with padding’) response. JSONP provides cross-domain access to a JSON payload, by wrapping the payload in a JavaScript function call (callback).

angular.module('generatorMeanstackApp')
  .factory('googleFactory', function ($q, $http) {
    return {
      getSearchResults: function () {
        var deferred = $q.defer(),
          host = 'https://ajax.googleapis.com/ajax/services/search/web',
          args = {
            'version': '1.0',
            'searchTerm': 'mean%20stack',
            'results': '8',
            'callback': 'JSON_CALLBACK'
          },
          params = ('?v=' + args.version + '&q=' + args.searchTerm + '&rsz=' +
            args.results + '&callback=' + args.callback),
          httpPromise = $http.jsonp(host + params);

        httpPromise.then(function (response) {
          deferred.resolve(response);
        }, function (error) {
          console.error(error);
        });

        return deferred.promise;
      }
    };
  });

The getSearchResults() function uses the HTTP GET method to make an HTTP request to the following RESTful URI:
https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=mean%20stack&rsz=8&callback=angular.callbacks._0

Using Google Chrome’s Developer tools, we can preview the Google Web Search JSONP-format HTTP response (abridged). Note the callback function that wraps the JSON payload.

Google Web Search Results in Chrome Browser

Within the DataController, if the promise is resolved successfully, our callback function receives the response object. The response object contains a lot of information. We are able to limit the amount of information sent to the view by assigning only the actual search results, an array of eight objects contained in the response object, to the googleStuff property of the $scope object.

$scope.googleStuff = {};
googleFactory.getSearchResults()
  .then(function (response) {
    $scope.googleStuff = response.data.responseData.results;
  }, function (error) {
    console.error(error);
  });

Below is the full response returned by the googleFactory. Note the path to the data we are interested in: ‘response.data.responseData.results‘.

Google Search Response Object

Below are the filtered results assigned to the googleStuff property:

‘googleStuff’ Property of the ‘$scope’ Object

The googleStuff property is accessed from the view, using ng-repeat, which displays individual values using Angular expressions, exactly like the previous methods.

<ul class="list-group">
  <li class="list-group-item"
      ng-repeat="stuff in googleStuff">
    <a href="{{unescapedUrl.url}}"
       target="_blank"><b>{{stuff.visibleUrl}}</b></a>

    <div class="text-muted">{{stuff.titleNoFormatting}}</div>
  </li>
</ul>

Method5

Links

Retrieving and Displaying Data with AngularJS and the MEAN Stack: Part I

Explore various methods of retrieving and displaying data using AngularJS and the MEAN Stack.

Mobile View of Application on Android Smartphone

Introduction

In the following two-part post, we will explore several methods of retrieving and displaying data using AngularJS and the MEAN Stack. The post’s corresponding GitHub project, ‘meanstack-data-samples‘, is based on William Lepinski’s ‘generator-meanstack‘, which in turn is based on Yeoman’s ‘generator-angular‘. As a bonus, since both projects are based on ‘generator-angular’, all the code generators work. Generators can save a lot of time and aggravation when building AngularJS components.

In part one of this post, we will install and configure the ‘meanstack-data-samples’ project from GitHub, which corresponds to this post. In part two, we will look at several methods for retrieving and displaying data using AngularJS:

  • Function within AngularJS Controller returns array of strings.
  • AngularJS Service returns an array of simple object literals to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller using a resource object
    (In GitHub project, but not discussed in this post).
  • AngularJS Factory returns a collection of documents from MongoDB Database to the controller.
  • AngularJS Factory returns results from Google’s RESTful Web Search API to the controller.

Preparation

If you need help setting up your development machine to work with the MEAN stack, refer to my last post, Installing and Configuring the MEAN Stack, Yeoman, and Associated Tooling on Windows. You will need to install all the MEAN and Yeoman components.

For this post, I am using JetBrains’ new WebStorm 8RC to build and demonstrate the project. There are several good IDEs for building modern web applications; WebStorm is one of the current favorites of developers.

Complexity of Modern Web Applications

Building modern web applications using the MEAN stack or comparable technologies is complex. The ‘meanstack-data-samples’ project, and the projects it is based on, ‘generator-meanstack’ and ‘generator-angular’, have dozens of moving parts. In this simple project, we have MongoDB, ExpressJS, AngularJS, Node.js, yo, Grunt, Bower, Git, jQuery, Twitter Bootstrap, Karma, JSHint, Mongoose, and hundreds of other components, all working together. There are almost fifty Node packages and hundreds of their dependencies loaded by npm, in addition to another dozen loaded by Bower.

Installing, configuring, and managing all the parts of a modern web application requires a basic working knowledge of these technologies. Understanding how Bower and npm install and manage packages, how Grunt builds, tests, and serves the application with ExpressJS, how Yo scaffolds applications, how Karma and Jasmine run unit tests, or how Mongoose and MongoDB work together, are all essential. This brief post will primarily focus on retrieving and displaying data, not necessarily how the components all work, or work together.

Installing and Configuring the Project

Environment Variables

To start, we need to create (3) environment variables. The NODE_ENV environment variable is used to determine the environment our application is operating within. The NODE_ENV variable determines which configuration file in the project is read by the application when it starts. The configuration files contain variables, specific to that environment. There are (4) configuration files included in the project. They are ‘development’, ‘test’, ‘production’, and ‘travis’ (travis-ci.org). The NODE_ENV variable is referenced extensively throughout the project. If the NODE_ENV variable is not set, the application will default to ‘development‘.

For this post, set the NODE_ENV variable to ‘test‘. The value, ‘test‘, corresponds to the ‘test‘ configuration file (‘meanstack-data-samples\config\environments\test.js‘), shown below.

// set up =====================================
var express          = require('express');
var bodyParser       = require('body-parser');
var errorHandler     = require('errorhandler');
var favicon          = require('serve-favicon');
var logger           = require('morgan');
var cookieParser     = require('cookie-parser');
var methodOverride   = require('method-override');
var session          = require('express-session');
var path             = require('path');
var env              = process.env.NODE_ENV || 'development';

module.exports = function (app) {
    if ('test' == env) {
        console.log('environment = test');
        app.use(function staticsPlaceholder(req, res, next) {
            return next();
        });
        app.set('db', 'mongodb://localhost/meanstack-test');
        app.set('port', process.env.PORT || 3000);
        app.set('views', path.join(app.directory, '/app'));
        app.engine('html', require('ejs').renderFile);
        app.set('view engine', 'html');
        app.use(favicon('./app/favicon.ico'));
        app.use(logger('dev'));
        app.use(bodyParser());
        app.use(methodOverride());
        app.use(cookieParser('your secret here'));
        app.use(session());

        app.use(function middlewarePlaceholder(req, res, next) {
            return next();
        });

        app.use(errorHandler());
    }
};

The second environment variable is PORT. The application starts on the port indicated by the PORT variable, for example, ‘localhost:3000’. If the PORT variable is not set, the application will default to port ‘3000‘, as specified in each of the environment configuration files and the ‘Gruntfile.js’ Grunt configuration file.

Lastly, the CHROME_BIN environment variable is used by Karma, the test runner for JavaScript, to determine the correct path to the browser’s binary file. This variable is discussed in detail on Karma’s site. In my case, the value for CHROME_BIN is ‘C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'. This variable is only necessary if you will be configuring Karma to use Chrome to run the tests. The browser can be changed to any browser, including PhantomJS. See the discussion at the end of this post regarding browser choice for Karma.

You can easily set all the environment variables on Windows from a command prompt, with the following commands. Remember to exit and re-open your interactive shell or command prompt window after adding the variables so they can be used.

REM confirm the path to Chrome, change value if necessary
setx /m NODE_ENV "test"
setx /m PORT "3000"
setx /m CHROME_BIN "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"

Install and Configure the Project

To install and configure the project, we start by cloning the ‘meanstack-data-samples‘ project from GitHub. We then use npm and bower to install the project’s dependencies. Once installed, we create and populate the Mongo database. We then use Grunt and Karma to unit test the project. Finally, we will use Grunt to start the Express Server and run the application. This is all accomplished with only a few individual commands. Please note, the ‘npm install’ command could take several minutes to complete, depending on your network speed; the project has many direct and indirect Node.js dependencies.

# install meanstack-data-samples project
git clone https://github.com/garystafford/meanstack-data-samples.git
cd meanstack-data-samples
npm install
bower install
mongoimport --db meanstack-$NODE_ENV --collection components components.json --drop # Unix
# mongoimport --db meanstack-%NODE_ENV% --collection components components.json --drop # Windows
grunt test
grunt server

If everything was installed correctly, running the ‘grunt test’ command should result in output similar to below:

Results of Running ‘grunt test’ with Chrome

If everything was installed correctly, running the ‘grunt server’ command should result in output similar to below:

Results of Running ‘grunt server’ to Start Application

Running the ‘grunt server’ command should start the application and open your browser to the default view, as shown below:

Displaying the Application’s Google Search Results on Desktop Browser

Karma’s Browser Choice for Unit Tests

The GitHub project is currently configured to use Chrome for running Karma’s unit tests in the ‘development’ and ‘test’ environments. For the ‘travis’ environment, it uses PhantomJS. If you do not have Chrome installed on your machine, the ‘grunt test’ task will fail during the ‘karma:unit’ task portion. To change Karma’s browser preference, simply change the ‘testBrowser’ variable in the ‘./karma.conf.js’ file, as shown below.

// Karma configuration
module.exports = function (config) {
  // Determines Karma's browser choice based on environment
  var testBrowser = 'Chrome'; // Default browser
  if (process.env.NODE_ENV === 'travis') {
    testBrowser = 'PhantomJS'; // Must use for headless CI (Travis-CI)
  }
  console.log("Karma browser: " + testBrowser);
  ...
  // Start these browsers, currently available:
  // Chrome, ChromeCanary, Firefox, Opera,
  // Safari (only Mac), PhantomJS, IE (only Windows)
  browsers: [testBrowser],

I recommend installing and using the PhantomJS headless WebKit browser, locally. Since PhantomJS is headless, Karma runs the unit tests without having to open and close browser windows. To run this project on continuous integration servers, like Jenkins or Travis-CI, you must use PhantomJS. If you decide to use PhantomJS on Windows, don’t forget to add the PhantomJS executable directory path to your ‘PATH’ environment variable after downloading and installing the application.
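
For example, from Git Bash you could append a hypothetical PhantomJS install location to the PATH for your shell (the ‘c:\phantomjs’ path below is only an illustration; adjust it to wherever phantomjs.exe actually lives):

echo 'export PATH=$PATH:/c/phantomjs/bin' >> ~/.bashrc # hypothetical install location
source ~/.bashrc
phantomjs --version # confirm PhantomJS is reachable from the shell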

Code Generator

As I mentioned at the start of this post, this project was based on William Lepinski’s ‘generator-meanstack‘, which in turn is based on Yeoman’s ‘generator-angular‘. Optionally, to install the ‘generator-meanstack’ npm package globally on your system, use the following command. The ‘generator-meanstack’ code generator will allow us to generate additional AngularJS components automatically within the project, if we choose. The ‘generator-meanstack’ is not required for this post.

npm install -g generator-meanstack
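
Since the project is based on ‘generator-angular’, its sub-generators should also work within the project; for example, scaffolding a new controller might look like the following — a sketch only, as the exact sub-generator names depend on the generator’s documentation:

cd meanstack-data-samples # run from the project root
yo angular:controller myNewController # 'angular:controller' is a generator-angular sub-generator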

Part II

In part two of this post, we will explore each method of retrieving and displaying data using AngularJS, in detail.

Links

Installing and Configuring the MEAN Stack, Yeoman, and Associated Tooling on Windows

Configure your Windows environment for developing modern web applications using the popular MEAN Stack and Yeoman suite of utilities.

MEAN Stack Using the MEAN.io Project Running on Windows

Introduction

It’s an exciting time to be involved in web development. There are dozens of popular open-source JavaScript frameworks, libraries, code-generators, and associated tools, exploding on to the development scene. It is now possible to use a variety of popular technology mashups to provide a complete, full-stack JavaScript application platform.

MEAN Stack

One of the JavaScript mashups gaining a lot of traction recently is the MEAN Stack. If you’re reading this post, you probably already know MEAN is an acronym for four leading technologies: MongoDB, ExpressJS, AngularJS, and Node.js. The MEAN stack provides an end-to-end JavaScript application solution. MEAN provides a NoSQL document database (Mongo), a server-side solution for JavaScript (Node/Express), and a magic client-side MV* framework (Angular).

Depending on which MEAN stack generator or pre-built project you start with, in addition to the main four technologies, you will pull down several other smaller libraries and frameworks. These most commonly include jQuery, Twitter Bootstrap, Karma (test runner), Jade (template engine), JSHint, Underscore.js (utility-belt library), Mongoose (MongoDB object modeling tool), Passport (authentication), RequireJS (file and module loader), BreezeJS (data entity management), and so forth.

Common Tooling

If you are involved in these modern web development trends, then you are aware there is also a fairly common set of tools used by a majority of these developers, including source control, IDE, OS, and other helper-utilities. For SCM/VCS, Git is the clear winner. For an IDE, WebStorm, Sublime Text, and Kompozer are heavy favorites. The platform of choice for most developers most often appears to be either Mac or Linux. It’s far less common to see a demonstration of these technologies, or tutorials, built on the Microsoft Windows platform.

Yeoman

Another area of commonality is helper-utilities, used to make the development, building, dependency management, and deployment of modern JavaScript applications easier. Two popular ones are Brunch and Yeoman. Yeoman is also an acronym for a set of popular tools: yo, Grunt, and Bower. The first, yo, is best described as a scaffolding tool. Grunt is the build tool. Bower is a tool for client-side package and dependency management. We will install Yeoman, along with the MEAN Stack, in this post.

Windows

There is no reason Windows cannot serve as your development and hosting platform for modern web development, without specifically using Microsoft’s .NET stack. In fact, with minimal set-up, you would barely know you were using Windows as opposed to Linux or Mac. In this post, I will demonstrate how to configure your Windows machine for developing these modern web applications using the MEAN Stack and Yeoman.

Here is a list of the components we will discuss:

Installations

Git

The use of Git for source control is obvious. Git is the overwhelming choice of modern developers. Git has been integrated into most major IDEs and hosting platforms. There are hooks into Git available for most leading development tools. However, there are more benefits to using Git than just SCM. Being a Linux/Mac user, I prefer to use a Unix-like shell on Windows, versus the native Windows Command Prompt. For this reason, I use Git for Windows, available from msysGit. This package includes Git SCM, Git Bash, and Git GUI. I use the Git Bash interactive shell almost exclusively for my daily interactions requiring a command prompt. I will be using the Git Bash interactive shell for this post. OpenHatch has a great post and training materials available on using Git Bash with Windows.

Using Git Bash on Windows for a Unix-like Experience

Git for Windows provides a downloadable Windows executable file for installation. Follow the installation file’s instructions.

Git Installation Process

To test your installation of Git for Windows, call the Git binary with the ‘--version’ flag. This flag can be used to test all the components we are installing in this post. If the command returns a value, then it’s a good indication that the component is installed properly and can be called from the command prompt:

gstafford: ~/Documents/git_repos $ git --version
git version 1.9.0.msysgit.0

You can also verify Git using the ‘where’ and ‘which’ commands. The ‘where’ command will display the location of files that match the search pattern and are in the paths specified by the PATH environment variable. The ‘which’ command tells you which file gets executed when you run a command. These commands will work for most components we will install:

gstafford: ~/Documents/git_repos $ where git
C:\Program Files (x86)\Git\bin\git.exe
C:\Program Files (x86)\Git\cmd\git.cmd
C:\Program Files (x86)\Git\cmd\git.exe

gstafford: ~/Documents/git_repos $ which git
/bin/git

Ruby

The reasons for Git are obvious, but why Ruby? Yeoman, specifically yo, requires Ruby. Installing Ruby on Windows is easy. Ruby recommends using RubyInstaller for Windows. RubyInstaller downloads an executable file, making installation easy. I am using Ruby 1.9.3. I had previously installed the latest 2.0.0, but had to roll back after some 64-bit compatibility issues with other applications.

Ruby Installation Process

To test the Ruby installation, use the ‘--version’ flag again:

gstafford: ~/Documents/git_repos $ ruby --version
ruby 1.9.3p484 (2013-11-22) [i386-mingw32]

RubyGems

Optionally, you might also want to install RubyGems. RubyGems allows you to add functionality to the Ruby platform, in the form of ‘Gems’. A common Gem used with the MEAN stack is Compass, the Sass-based stylesheet framework for the creation and maintenance of CSS. According to their website, Ruby 1.9 and newer ships with RubyGems built-in, but you may need to upgrade for bug fixes or new features.

On Windows, installation of RubyGems is as simple as downloading the .zip file from the RubyGems download site. To install RubyGems, unzip the downloaded file. From the root of the unzipped directory, run the following Ruby command:

ruby setup.rb

To confirm your installation:

gstafford: ~/Documents/git_repos $ gem --version
2.2.2

If you already have RubyGems installed, it’s recommended you update RubyGems before continuing. The first command, with the ‘--system’ flag, will update to the latest RubyGems. Use the second command, without the flag, if you want to update each of your individually installed Ruby Gems:

gem update --system
gem update
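
Once RubyGems is working, installing a Gem such as Compass (mentioned above) is a single command — shown here purely as an illustrative example:

gem install compass # install the Compass Gem mentioned above
gem list compass # confirm the Gem is installed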

MongoDB

MongoDB provides a great set of installation and configuration instructions for Windows users. To install MongoDB, download the MongoDB package. Create a ‘mongodb’ folder; Mongo recommends placing it at the root of your system drive. Unzip the MongoDB package to the ‘c:\mongodb’ folder. That’s really it; there is no installer file.

Next, to make a default Data Directory location, use the following commands:

mkdir c://data && mkdir c://data/db

Unlike most other components, to call Mongo from the command prompt, I had to manually add the path to the Mongo binaries to my PATH environment variable. You can access your Windows environment variables using the Windows and Pause keys. Add the path ‘c:\mongodb\bin’ to the end of the PATH environment variable value.
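
Alternatively, if you only need Mongo available from the Git Bash shell, you could append it to the PATH there instead — a small sketch, assuming MongoDB was unzipped to ‘c:\mongodb’ as described above:

echo 'export PATH=$PATH:/c/mongodb/bin' >> ~/.bashrc # Git Bash only; assumes c:\mongodb
source ~/.bashrc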

Adding Mongo to the PATH Environment Variable

To test the MongoDB installation, and that the PATH variable is set correctly, close any current interactive shells or command prompt windows. Open a new shell and use the same ‘--version’ flag for Mongo’s three core components:

gstafford: ~/Documents/git_repos
$ mongo --version; mongod --version; mongos --version
MongoDB shell version: 2.4.9

db version v2.4.9
Sun Mar 09 16:26:48.730 git version: 52fe0d21959e32a5bdbecdc62057db386e4e029c

MongoS version 2.4.9 starting: pid=15436 port=27017 64-bit 
host=localhost (--help for usage)
git version: 52fe0d21959e32a5bdbecdc62057db386e4e029c
build sys info: windows sys.getwindowsversion(major=6, minor=1, build=7601, 
platform=2, service_pack='Service Pack 1') 
BOOST_LIB_VERSION=1_49

To start MongoDB, use the ‘mongod’ or ‘start mongod’ commands. Adding ‘start’ opens a new command prompt window, versus tying up your current shell. If you are not using the default MongoDB Data Directory (‘c://data/db’) you created in the previous step, use the ‘--dbpath’ flag, for example, ‘start mongod --dbpath c://alternate/path’.
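
For reference, the two start commands described above look like this (the alternate data directory path is only a placeholder):

start mongod # opens MongoDB in a new command prompt window, using c://data/db
start mongod --dbpath 'c://alternate/path' # placeholder path; substitute your own data directory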

MongoDB Running on Windows

MongoDB Default Data Directory Containing New MEAN Database

Node.js

To install Node.js, download and run the Node’s .msi installer for Windows. Along with Node.js, you will get npm (Node Package Manager). You will use npm to install all your server-side components, such as Express, yo, Grunt, and Bower.

Node Installation Process

gstafford: ~/Documents/git_repos 
$ node --version && npm --version
v0.10.26
1.4.3

Express

To install Express, the web application framework for node, use npm:

npm install -g express

The ‘-g’ flag (or, ‘--global’ flag) should be used. According to Stack Overflow, ‘if you’re installing something that you want to use in your shell, on the command line or something, install it globally, so that its binaries end up in your PATH environment variable’:

gstafford: ~/Documents/git_repos $ express --version
3.5.0

Yeoman – yo, Grunt, and Bower

You will also use npm to install yo, Grunt, and Bower. Actually, we will use npm to install the Grunt Command Line Interface (CLI). The Grunt task runner will be installed in your MEAN stack project, locally, later. Read these instructions on the Grunt website for a better explanation. Use the same basic command as with Express:

npm install -g yo grunt-cli bower

To test the installs, run the same command as before:

gstafford: ~/Documents/git_repos 
$ yo --version; grunt --version; bower --version
1.1.2
grunt-cli v0.1.13
1.2.8

If you already had Yeoman installed, confirm you have the latest versions with the ‘npm update’ command:

gstafford: ~/Documents/git_repos/gen-angular-sample 
$ npm update -g yo grunt-cli bower
npm http GET https://registry.npmjs.org/grunt-cli
npm http GET https://registry.npmjs.org/bower
npm http GET https://registry.npmjs.org/yo
npm http 304 https://registry.npmjs.org/bower
npm http 304 https://registry.npmjs.org/yo
npm http 200 https://registry.npmjs.org/grunt-cli

All of the npm-installed components, including Express, are installed in and called from a common location on Windows:

gstafford: ~/Documents/git_repos/gen-angular-sample 
$ where express yo grunt bower
c:\Users\gstaffor\AppData\Roaming\npm\yo
c:\Users\gstaffor\AppData\Roaming\npm\yo.cmd
c:\Users\gstaffor\AppData\Roaming\npm\grunt
c:\Users\gstaffor\AppData\Roaming\npm\grunt.cmd
c:\Users\gstaffor\AppData\Roaming\npm\bower
c:\Users\gstaffor\AppData\Roaming\npm\bower.cmd

gstafford: ~/Documents/git_repos 
$ which express; which yo; which grunt; which bower
~/AppData/Roaming/npm/express
~/AppData/Roaming/npm/yo
~/AppData/Roaming/npm/grunt
~/AppData/Roaming/npm/bower

Use the command ‘npm list --global | less’ (or ‘npm ls -g | less’) to view all npm packages installed globally, in a tree view. After you have generated your project (see below), check the project-specific server-side packages with the ‘npm ls’ command from within the project’s root directory. For the client-side packages, use the ‘bower ls’ command from within the project’s root directory.
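
For quick reference, the listing commands look like this (the last two assume you are in the generated project’s root directory):

npm list --global | less   # tree view of all globally installed npm packages
npm ls -g | less           # abbreviated form of the same command
npm ls                     # project-specific server-side (npm) packages
bower ls                   # project-specific client-side (Bower) packages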

If you’re in a hurry, or have more Windows boxes to configure, you can use one npm command to install all four components, above:

npm install -g express yo grunt-cli bower

MEAN Boilerplate Generators and Projects

That’s it. You’ve installed most of the core components you need to get started with the MEAN stack on Windows. Next, you will want to download one of the many MEAN boilerplate projects, or use a MEAN code generator with npm and yo. I recommend trying one or all of the following projects. They are each slightly different architecturally, but fairly stable:

Yeoman Running MEAN Generator

MEAN Stack Using James Cryer’s ‘generator-mean’


Configure Chef Client on Windows for a Proxy Server

Configure Chef Client on Windows to work with a proxy server, by modifying Chef Knife’s configuration file.

Introduction

In my last two posts, Configure Git for Windows and Vagrant on a Corporate Network and Easy Configuration of Git for Windows on a Corporate Network, I demonstrated how to configure Git for Windows and Vagrant to work properly on a corporate network with a proxy server. Modifying the .bashrc file and adding a few proxy-related environment variables worked fine for Git and Vagrant.

However, even though Chef Client also uses the Git Bash interactive shell to execute commands on Windows using Knife, Chef depends on Knife’s configuration file (knife.rb) for proxy settings.  In the following example, Git and Vagrant connect to the proxy server and authenticate using the proxy-related environment variables created by the ‘proxy_on’ function (described in my last post). However, Chef’s Knife command line tool fails to return the status of the online Hosted Chef server account, because the default knife.rb file contains no proxy server settings.

Knife Status Failed Due to Knife’s Proxy Settings

For Chef to work correctly behind a proxy server, you must modify the knife.rb file, adding the necessary proxy-related settings. The good news is that we can leverage the same proxy-related environment variables we already created for Git and Vagrant.

Configuring Chef Client

First, make sure you have your knife.rb file in the .chef folder, within your home directory (‘C:\Users\username\.chef\knife.rb’). This allows Chef to use the knife.rb file’s settings for all Chef repos on your local machine.

Chef’s knife.rb Location in Home Directory

Next, make sure you have the following environment variables set up on your computer: USERNAME, USERDNSDOMAIN, PASSWORD, PROXY_SERVER, and PROXY_PORT. The USERNAME and USERDNSDOMAIN variables should already be present in the system-wide environment variables on Windows. If you haven’t already created the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variables, based on my last post, I suggest adding them to the current user environment (Environment Variables -> User variables, shown below) as opposed to the system-wide environment (Environment Variables -> System variables). You can add the User variables manually, using Windows+Pause keys -> Advanced system settings -> Environment Variables… -> New…

Windows 7 Environment Variables – Current User vs. System

Alternatively, you can use the ‘SETX’ command; see the commands below. When using ‘SETX’, do not use the ‘/m’ parameter, especially when setting the PASSWORD variable. According to SETX help (‘SETX /?’), the ‘/m’ parameter specifies that the variable is set in the system-wide (HKEY_LOCAL_MACHINE) environment. The default is to set the variable under the HKEY_CURRENT_USER environment (no ‘/m’). If you set your PASSWORD in the system-wide environment, all user accounts on your machine could read your PASSWORD.

To see the changes made with SETX, close and re-open your current command prompt window. Then, use an ‘env | grep -e PASSWORD -e PROXY’ command to view the three new environment variables.

[gist https://gist.github.com/garystafford/8233123 /]
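
If the embedded gist does not display, the ‘SETX’ commands take roughly the following form, run from a Windows command prompt (the values shown are placeholders for your own account and network):

SETX PASSWORD "your_account_password"
SETX PROXY_SERVER "your_proxy_server_fqdn"
SETX PROXY_PORT "your_proxy_port"

Then, from a new Git Bash shell, confirm the values:

env | grep -e PASSWORD -e PROXY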

Lastly, modify your existing knife.rb file, adding the required proxy-related settings, shown below. Notice, we use the ‘HTTP_PROXY’ and ‘HTTPS_PROXY’ environment variables set by ‘proxy_on’; no need to redefine them. Since my particular network environment requires proxy authentication, I have also included the ‘http_proxy_user’, ‘http_proxy_pass’, ‘https_proxy_user’, and ‘https_proxy_pass’ settings.

[gist https://gist.github.com/garystafford/8222755 /]
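
If the embedded gist does not display, the proxy-related additions to knife.rb look roughly like the following sketch. It reuses the environment variables created by ‘proxy_on’; the user and password settings are only required if, like mine, your proxy demands authentication:

# knife.rb - proxy-related settings (partial sketch)
http_proxy       ENV['HTTP_PROXY']
https_proxy      ENV['HTTPS_PROXY']
http_proxy_user  ENV['USERNAME']
http_proxy_pass  ENV['PASSWORD']
https_proxy_user ENV['USERNAME']
https_proxy_pass ENV['PASSWORD']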

If your environment requires authentication and you fail to set these variables, you will see an error similar to the one shown below. Note the first line of the error. In this example, Chef cannot authenticate against the https proxy server. There is a ‘https_proxy’ setting, but no ‘https_proxy_user’ and ‘https_proxy_pass’ settings in the Knife configuration file.

HTTPS Proxy Authentication Credential Settings Missing from knife.rb

Using the Code

With the proxy settings added to the knife.rb file, Knife is able to connect to the proxy server, authenticate, and complete its status check successfully. Now, Git, Vagrant, and Chef all have Internet connectivity through the proxy server, as shown below.

Knife Status Succeeds with Knife.rb Proxy Settings Added

Why Include Authentication Settings?

Even with the domain, username, and password all included in the HTTP_PROXY and HTTPS_PROXY URIs, Chef still insists on using the ‘http_proxy_user’ and ‘http_proxy_pass’ or ‘https_proxy_user’ and ‘https_proxy_pass’ credential settings for proxy authentication. In my tests, if these settings are missing from Knife’s configuration file, Chef fails to authenticate with the proxy server.


Configure Git for Windows and Vagrant on a Corporate Network

A modified .bashrc configuration for Git for Windows that works with both Git and Vagrant.

Basic Network

Introduction

In my last post, Easy Configuration of Git for Windows on a Corporate Network, I demonstrated how to configure Git for Windows to work when switching between working on-site, working off-site through a VPN, and working totally off the corporate network. Dealing with a proxy server was the main concern. The solution worked fine for Git. However, after further testing with Vagrant using the Git Bash interactive shell, I ran into a snag. Unlike Git, Vagrant did not seem to like the standard URI, which contained ‘domain\username’:

http(s)://domain\username:password@proxy_server:proxy_port

In a corporate environment with LDAP, qualifying the username with a domain is normal, like ‘domain\username’. But, when trying to install a Vagrant plug-in with a command such as ‘vagrant plugin install vagrant-omnibus’, I received an error similar to the following (proxy details obscured):

$ vagrant plugin install vagrant-omnibus
Installing the 'vagrant-omnibus' plugin. This can take a few minutes...
c:/HashiCorp/Vagrant/embedded/lib/ruby/2.0.0/uri/common.rb:176: in `split':
bad URI(is not URI?): http://domain\username:password@proxy:port
(URI::InvalidURIError)...

Solution

After some research, it seems Vagrant’s ‘common.rb’ URI function does not like the ‘domain\username’ format of the original URI. To fix this problem, I modified the original ‘proxy_on’ function, removing the DOMAIN environment variable. I now suggest using the fully qualified domain name (FQDN) of the proxy server. So, instead of ‘my_proxy’, it would be ‘my_proxy.domain.tld’. The acronym ‘tld’ stands for top-level domain. Although .com is the most common one, there are over 300 top-level domains, so I don’t want to assume yours is ‘.com’. The new proxy URI is as follows:

http(s)://username:password@proxy_server.domain.tld:proxy_port

Although all environments have different characteristics, I have found this change to work, with both Git and Vagrant, in my own environment. After making this change, I was able to install plug-ins and do other similar functions with Vagrant, using the Git Bash interactive shell.

$ vagrant plugin install vagrant-omnibus
Installing the 'vagrant-omnibus' plugin. This can take a few minutes...
Installed the plugin 'vagrant-omnibus (1.2.1)'!

Change to Environment Variables

One change you will notice compared to my last post, and unrelated to the Vagrant domain issue, is a change to the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variables. In the last post, I created and exported the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variables within the ‘proxy_on’ function. After further consideration, I permanently moved them to Environment Variables -> User variables. I felt this was a better solution, especially for my password. Instead of my user account password residing in the .bashrc file, in plain text, it’s now in my user’s environment variables. Although still not ideal, I felt my password was slightly more secure. Also, since my proxy server address rarely changes when I am at work or on the VPN, I felt moving these was easier and cleaner than placing them into the .bashrc file.

The New Code

Verbose version:

# configure proxy for git while on corporate network
function proxy_on(){
# assumes $USERDOMAIN, $USERNAME, $USERDNSDOMAIN
# are existing Windows system-level environment variables
# assumes $PASSWORD, $PROXY_SERVER, $PROXY_PORT
# are existing Windows current user-level environment variables (your user)
# environment variables are UPPERCASE even in git bash
export HTTP_PROXY="http://$USERNAME:$PASSWORD@$PROXY_SERVER.$USERDNSDOMAIN:$PROXY_PORT"
export HTTPS_PROXY=$HTTP_PROXY
export FTP_PROXY=$HTTP_PROXY
export SOCKS_PROXY=$HTTP_PROXY
export NO_PROXY="localhost,127.0.0.1,$USERDNSDOMAIN"
# optional for debugging
export GIT_CURL_VERBOSE=1
# optional for Self-Signed SSL certs and
# an internal CA certificate in a corporate environment
export GIT_SSL_NO_VERIFY=1
env | grep -e _PROXY -e GIT_ | sort
echo -e "\nProxy-related environment variables set."
}
# remove proxy settings when off corporate network
function proxy_off(){
variables=( \
"HTTP_PROXY" "HTTPS_PROXY" "FTP_PROXY" "SOCKS_PROXY" \
"NO_PROXY" "GIT_CURL_VERBOSE" "GIT_SSL_NO_VERIFY" \
)
for i in "${variables[@]}"
do
unset $i
done
env | grep -e _PROXY -e GIT_ | sort
echo -e "\nProxy-related environment variables removed."
}
# if you are always behind a proxy uncomment below
#proxy_on
# increase verbosity of Vagrant output
export VAGRANT_LOG=INFO

Compact version:

function proxy_on(){
export HTTP_PROXY="http://$USERNAME:$PASSWORD@$PROXY_SERVER.$USERDNSDOMAIN:$PROXY_PORT"
export HTTPS_PROXY="$HTTP_PROXY" FTP_PROXY="$HTTP_PROXY" ALL_PROXY="$HTTP_PROXY" \
NO_PROXY="localhost,127.0.0.1,*.$USERDNSDOMAIN" \
GIT_CURL_VERBOSE=1 GIT_SSL_NO_VERIFY=1
echo -e "\nProxy-related environment variables set."
}
function proxy_off(){
variables=( "HTTP_PROXY" "HTTPS_PROXY" "FTP_PROXY" "ALL_PROXY" \
"NO_PROXY" "GIT_CURL_VERBOSE" "GIT_SSL_NO_VERIFY" )
for i in "${variables[@]}"; do unset $i; done
echo -e "\nProxy-related environment variables removed."
}
# if you are always behind a proxy uncomment below
#proxy_on
# increase verbosity of Vagrant output
export VAGRANT_LOG=INFO


Easy Configuration of Git for Windows on a Corporate Network

Configure Git for Windows to work when switching between working on-site, working off-site through a VPN, and working totally off the corporate network.

Basic Network

Introduction

Configuring Git to work on your corporate network can be challenging. A typical large corporate network may require Git to work behind proxy servers and firewalls, use LDAP authentication on a corporate domain, handle password expiration, deal with self-signed and internal CA certificates, and so forth. Telecommuters have the added burden of constantly switching device configurations between working on-site, working off-site through a VPN, and working totally off the corporate network at home or the local coffee shop.

There are dozens of posts on the Internet from users trying to configure Git for Windows to work on their corporate network. Many posts are oriented toward Git on Unix-based systems. Many responses only offer partial solutions without any explanation. Some responses incorrectly mix configurations for Unix-based systems with those for Windows.

Most solutions involve one of two approaches to handle proxy servers, authentication, and so forth: modify Git’s .gitconfig file, or set equivalent environment variables that Git will look for automatically. In my particular development situation, I spend equal amounts of time on and off a corporate network, on a Windows-based laptop. If I were always on-site, I would modify the .gitconfig file. However, since I am constantly moving on and off the network with a laptop, I chose a solution that creates and destroys the environment variables as I move on and off the corporate network.

Git for Windows

Whether you download Git from the Git website or the msysGit website, you will get the msysGit version of Git for Windows. As explained on the msysGit Wiki, msysGit is the build environment for Git for Windows. MSYS (thus the name, msysGit) is a Bourne Shell command line interpreter system, used by MinGW and originally forked from Cygwin. MinGW is a minimalist development environment for native Microsoft Windows applications.

Why do you care? By installing Git for Windows, you actually get a fairly functional Unix system running on Windows. Many of the commands you use on Unix-based systems also work on Windows, within msysGit’s Git Bash.

Setting Up Code

There are two functionally identical versions of the post’s code, a well-commented version and a compact version. Add either version’s contents to the .bashrc file in your home directory. If you’ve worked with Linux, you are probably familiar with the .bashrc file and its functionality. On Unix-based systems, your home directory is ‘~/’ (/home/username), while on Windows, the equivalent directory path is ‘C:\Users\username\’.

On Windows, the .bashrc file is not created by default by Git for Windows. If you do not have a .bashrc file already, the easiest way to implement the post’s code is to download either Gist, shown below, from GitHub, rename it to .bashrc, and place it in your home directory.

After adding the code, change the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variable values to match your network. Security note: this solution requires you to store your Windows user account password in plain text on your local system. This presents a certain level of security risk, as would storing it in your .gitconfig file.

The script assumes the same proxy server address for all protocols: HTTP, HTTPS, FTP, and SOCKS. If any of the proxy servers or ports are different, simply change the script’s variables. You may also choose to add other variables and protocols, or remove them, based on your network requirements. Remember, environment variables on Windows are UPPERCASE, even when using the interactive Git Bash shell.

Lastly, as with most shells, you must exit any current interactive Git Bash shells and re-open a new interactive shell for the new functions in the .bashrc file to be available.

Verbose version:
[gist https://gist.github.com/garystafford/8128922 /]

Compact version:
[gist https://gist.github.com/garystafford/8135027 /]

Using the Code

When on-site and connected to your corporate network, or off-site and connected through a VPN, execute the ‘proxy_on’ function. When off your corporate network, execute the ‘proxy_off’ function.

Below are a few examples of using Git to clone the popular angular.js repo from github.com (git clone https://github.com/angular/angular.js). The first example shows what happens on the corporate network when Git for Windows is not configured to work with the proxy server.

Failing to Clone GitHub Repo with Proxy Settings Off

The next example demonstrates successfully cloning the angular.js repo from github.com, while on the corporate network. The environment variables are set with the ‘proxy_on’ function. I have obscured the variables’ values and most of the verbose output from Git to hide confidential network-related details.

Successful Git Clone with Proxy Settings On

What’s My Proxy Server Address?

To set up the ‘proxy_on’ function, you need to know your proxy server’s address. One way to find this is Control Panel -> Internet Options -> Connections -> LAN Settings. If your network requires a proxy server, it should be configured here.

LAN Settings – Proxy Server

However, on many corporate networks, Windows devices are configured to use a proxy auto-config (PAC) file. According to Wikipedia, a PAC file defines how web browsers and other user agents can automatically choose a network’s appropriate proxy server. The downside of a PAC file is that you cannot easily figure out what proxy server you are connected to.

LAN Settings – Using PAC file

To discover your proxy server with a PAC file, open a Windows command prompt and execute the following command. Use the command’s output to populate the script’s PROXY_SERVER and PROXY_PORT variables.

reg query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "proxyserver"

Checking Your Proxy from the Command Prompt

Resources

Arch Linux Wiki – Proxy Settings

Tips on Git

git(1) Manual Page

Customizing Git – Git Configuration

msysGit Wiki – Git on Windows

UNIX: Set Environment Variable


Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 2 of 2)

Build an automated deployment pipeline for your Java EE applications using leading open-source technologies, including NetBeans, Git, Maven, JUnit, Jenkins, and GlassFish. All source code for this post is available on GitHub.

System Diagram 3a

Introduction

In part 1, Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 1 of 2), we built the first part of our basic deployment pipeline using leading open-source technologies. In part 2, we will use Jenkins CI Server and Oracle GlassFish Application Server to complete our deployment pipeline.

To review, the three main goals of our deployment pipeline are continuous integration, automated testing, and continuous deployment. Our objective is to automatically compile, test, assemble, and deploy our Java EE application to multiple environments, as the project progresses through the software development life cycle (SDLC).

Setting up Git Server

As I mentioned in part 1, as a part of a development team using Git, you would place your project on a remote Git Server. You and your team members would each clone the repository from the Git Server to your local development environments. You and your team would commit your code changes locally, then pull, merge, and push your changes back to the remote Git Server. Jenkins will pull the project’s source code from the Git Server.

In part 1 of this post, we just created a local Git repository. In part 2, we will properly set up our project on a remote Git Server. First, we need to export our local repository into a new, bare repository on the Git Server. The Git term ‘bare repository’ refers to a repository that does not contain a working directory; the repository has no working copies of your source files. You only use the bare repository to clone from, pull from, and push to. The bare repository’s name carries a .git extension (i.e. ssh://user@server:/git-repos/myproject.git).

From the root of your remote Git Server repository, execute the following command, substituting the path to your local project. If your Git Server is on a separate machine from your local project repository, you will need to copy the new bare repository to the remote Git Server. This involves a few simple steps, explained in this post and at git-scm.com; a rough sketch is also shown below.

git clone --bare {path-to-existing-local-repository}\{name-of-repository} {name-of-repository}.git
Export Local Project to New Bare Repository
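
If you created the bare repository locally, copying it to the Git Server and then cloning it back might look roughly like the following sketch (the server name and ‘/git-repos’ path are only placeholders):

scp -r myproject.git user@git-server:/git-repos/
git clone ssh://user@git-server/git-repos/myproject.git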

Once you have created the repository on the remote Git Server, I would recommend you clone the remote repository to your local machine and discard your original local repository from part 1 of the post. You don’t have to do this step, but cloning fresh from the server will make sure Git is working correctly. The screen grabs below illustrate an example of cloning a new repository to my local NetBeans Project folder.

Clone New Bare Server Repository - Screen 1

Clone New Bare Server Repository – Screen 1

Clone New Bare Server Repository - Screen 2

Clone New Bare Server Repository – Screen 2

Clone New Bare Server Repository - Screen 3

Clone New Bare Server Repository – Screen 3

Configuring Jenkins

The diagram below illustrates the deployment pipeline from Git Server to Jenkins to GlassFish in finer detail. It begins with an initial commit to the local Git project repository and ends with the deployment of the project’s WAR file to the GlassFish domain. We will walk through it step-by-step.

System Diagram 3c

Jenkins Plugins

Before we create our new Jenkins Jobs, we need to configure Jenkins properly. You will need a recent version of Jenkins installed, along with the following plugins:

  1. Build With Parameters Plugin
  2. Copy Artifact Plugin
  3. Jenkins GIT plugin (includes Jenkins GIT client plugin)
  4. Jenkins Parameterized Trigger plugin
  5. Maven Integration plugin
  6. Credentials Plugin (optional for use with Git Server if security is enabled)
  7. ThinBackup (optional to install supplied Jenkins jobs configuration files)

Global Security

Jenkins can be configured with or without Global Security. For this post, I have enabled Global Security, as is typical of most development environments. I chose to use the ‘Jenkins’s own user database’ option for authentication. In larger development environments, authentication would normally be done against LDAP.

Configuring Global Security

The user I have set up, ‘jenkins’, will be the user that Git authenticates with when connecting to Jenkins (explained later). Set up your own user and note the user’s API Token. Since Global Security has been enabled, we will need the token later to trigger the Jenkins build from Git. Your user’s unique API token will be different than in the example below.

Jenkins User API Token

Jenkins Jobs

We will set up two Jenkins ‘free-style software project’ jobs, ‘GitMavenGlassFish_Build’ and ‘GitMavenGlassFish_Deploy’. We won’t be using the obvious choice, a ‘maven2/3 project’. If you’re interested, here’s why. The first job, the build job, will be responsible for pulling the source code from the Git Server. The build job, with help from Maven, will compile, test, and assemble the application code. The second job, the deployment job, will pull the artifacts from the build job and deploy them to GlassFish. The build job will trigger the deployment job, once the build job completes successfully. This is explained in detail, to follow.

Why Two Jobs?

Following good modular design and Separation of Concerns (SoC) principles, separating the build from the deployment gains us several advantages, including:

  1. Modularity – The ability to change the deployment methodology or deployment targets without disrupting the build and test process. For example, we might move the application hosting from GlassFish to WebLogic, or decide to use Ant instead of Maven for deployment tasks. This can happen totally independently of the build and testing processes.
  2. Separation/Isolation – If for any reason we are unable to deploy the artifacts as part of the deployment job, we won’t impact the continuous integration and automated testing processes, which are part of the separate build job.
  3. Support – Support is easier by having smaller pieces of functionality to troubleshoot and maintain.

In a larger enterprise environment, you would probably encounter further separation of concerns. Unit testing, performance testing, deployment validation, and documentation generation (javadocs) are often handled by separate jobs. Jenkins represents a smaller pipeline within our larger deployment pipeline.

I intentionally left out notification for brevity. At minimum, you would want to be notified when the build or deployment jobs failed. Additionally, with continuous deployment, the deployment would trigger a notification to the stakeholders of that environment, such as the Testers. This lets them know the new software is ready to be tested. Notifications often include a list of bug fixes and feature enhancements that need to be tested. This can easily be pulled from Git into Jenkins and out to the end user.

Both Jenkins job definitions are available as XML files on gist.github.com. Using Jenkins’ ThinBackup Plugin, you can save both gists locally, and then restore them to your Jenkins server. The build job gist is here and the deployment job gist is here. This may save you some configuration time.

Jenkins Build Job

Both the build job and the deployment job require an input parameter. This parameter represents the targeted environment (GlassFish domain) for deployment, such as ‘testing’. How this parameter is passed to Jenkins is discussed later in the Git Hooks section, below.

Reviewing the below screen grab of the build job’s configuration, you will observe the following steps:

  1. Build Request – A build request is received by the job (explained later). The request contains an input parameter indicating the ‘environment’. The parameter must be one of the choices listed in ‘Choices’.
  2. Maven Dependencies – Based on the pom file, Maven retrieves all the required dependencies from the remote Maven repository, if the dependencies are not already contained in the workspace’s local repository. Note the setting ‘Use private Maven repository’. This creates a local repository for project dependencies within the project’s workspace.
  3. Pull from Git – Jenkins pulls the code from the Git Server using the supplied repository configuration information. Note my Git Server does not require authentication. If it did, we would set-up and use the proper credentials.
  4. Build – Jenkins builds the project using the Maven command ‘clean install -e’. The pom file contains the necessary configuration information.
  5. Unit Test – The above Maven ‘install’ command also calls JUnit to execute the unit tests. The results of these tests are published and displayed as part of the build job’s details.
  6. Assemble WAR – The above Maven ‘install’ command also assembles the project’s WAR file.
  7. Archive Artifacts – Based on the success of the build and unit tests, Jenkins archives specific artifacts needed by the deployment job. Jenkins uses the input parameter in #1 to define which properties file and password file to archive.
  8. Trigger Deployment Job – Based on the success of the build and unit tests, Jenkins triggers the ‘downstream’ deployment job, passing it the same environment parameter.
Jenkins Build Job Configuration

Jenkins Deployment Job

Reviewing the below screen grab of the deployment job’s configuration, you will observe the following steps:

  1. Build Request – A build request is received from the upstream build job. The request contains the input parameter indicating ‘environment’.
  2. Copy Artifacts – Jenkins copies the artifacts from the build job that called the deploy job.
  3. Read Properties – Maven executes the command ‘mvn properties:read-project-properties glassfish:redeploy -e’. The first half of this command instructs Maven to read the appropriate properties file, as indicated by the environment parameter, ‘glassfish.properties.file.argument=${environment}’.
  4. POM – Maven substitutes the key ‘glassfish.properties.file.argument’ in the pom file with the environment value. This tells Maven the name of the properties file, which supplies all the remaining property values to the pom file.
  5. Maven Dependencies – If the dependencies are not already contained in the workspace’s local repository, Maven retrieves all the required dependencies from the remote Maven repositories, based on the pom. Note the setting ‘Use private Maven repository’ checked in the screen grab below. This option instructs Jenkins to create a local repository for project dependencies within the project’s workspace.
  6. Deployment – The last half of the command in #3 deploys, or more accurately redeploys the application’s WAR file to GlassFish. The ‘glassfish:redeploy’ works only if the WAR file has already been initially deployed to the GlassFish domain using the ‘glassfish:deploy’ command. For this process, I am assuming the initial deployment was already done directly through the GlassFish Administration Console, NetBeans, or command line.
Jenkins Deploy Job Configuration
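
Outside of Jenkins, the deployment job’s Maven step is roughly equivalent to the following single command, shown here with a hypothetical ‘testing’ environment value:

mvn properties:read-project-properties glassfish:redeploy -e -Dglassfish.properties.file.argument=testing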

Git Hooks

To achieve continuous integration, we want to automatically build and test our job after each change to our code. We have a number of choices to make this happen. The obvious choice is letting Jenkins poll the Git Server. Although polling would simplify configuration, polling is frowned upon in many environments. Even the creator of Jenkins, Kohsuke Kawaguchi, frowns upon polling in his post, ‘Polling Must Die‘.

Why is polling bad? It adds unnecessary activity and delay. Let’s say Jenkins’ polling frequency is set to every 2 minutes, but you only have an average of 5 pushes to your remote Git Server project repository per day. Based on these stats, in just one day, Jenkins will poll Git 720 times to discover only 5 pushes. That’s 144 times per push. Also, based on the polling frequency, when you do push, you could wait up to 2 minutes for Jenkins to queue the build job. The longer you wait for feedback on your changes, the greater chance your defects could be pulled down by other developers. You should expect immediate and continuous feedback.

A vastly more efficient and configurable method of continuous integration between Git and Jenkins is Git Hooks. Git Hooks allow us to execute scripts based on specific Git actions. In our case, when a developer completes a successful push to the remote Git Server project repository, we want to call Jenkins to build, test, and deploy the modified project code. Using hooks means we only call Jenkins when a successful push is completed. Furthermore, we can be assured Jenkins will immediately queue our request to build and deploy the job when a push occurs.

Post-Receive Hook

There are several types of Git Hooks. They include ‘post-commit’, ‘pre-push’, ‘update’, ‘pre-rebase’, and so forth. I recommend this post on kernel.org for a good explanation of the hook types and their purposes. Git also includes sample hook files inside the ‘hooks’ subdirectory of each new repository’s .git folder.

For our pipeline, we will employ the ‘post-receive’ hook. Whenever a successful push is received by the Git Server’s project repository, the ‘post-receive’ hook will be called and the script commands contained in the post-receive hook file will be executed. Hooks are language agnostic; they can be written in almost any scripting language, such as Perl, Shell, Bash, or Ruby.

To create the hook, create a new file, ‘post-receive’, in the hooks sub-directory of the Git Server’s project repository. Add the below code to the file. Change the Jenkins server URL and port in the command to match your environment. Also, change the API Token to match your user’s token from Jenkins. Note the command requires cURL to be installed on the Git Server. If installing cURL is not an option, there are other options available to execute the HTTP POST call from the hook’s script.

#!/bin/sh
# Call Jenkins to start build and pass environment parameter
#
echo "executing post-receive hook"
echo "environment=testing"
echo "user=jenkins"
# cURL POST request using jenkins user with API token
curl -u jenkins:{your-api-token-here} \
--data "delay=0sec&environment=testing" \
"{your-jenkins-server-url:port}/job/GitMavenGlassFish_Build/buildWithParameters"

NetBeans and Git Hooks

Now, some slightly bad news. As with any integration, there are always trade-offs; that is the case with NetBeans and Git. Although NetBeans works well with Git, there are a few features that have not been implemented. Unfortunately, this lack of complete integration affects NetBeans’ ability to make use of Git Hooks. Only after three hours of troubleshooting and research on the Internet did I realize this limitation. The hooks fire fine if a git push command is executed from a command prompt or from within a Git application like Git Gui or Git Bash. However, from NetBeans, Team -> Remote -> Push… does not cause the hooks to be called.

Post-Receive Hook Working from a Windows Command Prompt

Git Hooks do not work with NetBeans because NetBeans does not use a command line client for Git. NetBeans uses a pure java implementation of the Git client, Java GIT, known as JGit. I understand that other IDE’s also share this limitation. There are several discussions on StackOverflow and on the NetBeans bug tracking site about the issue and workarounds.

So what does this mean? You can use NetBeans to perform all of your local tasks. However, when it comes time to push your code back to the remote Git Server repository, you must use a command prompt, Bash shell, or a command line based tool. I recommend Git Gui. Git ships with built-in GUI tools, including git-gui and gitk. It can be downloaded from git-scm.com.

Git Gui Graphical User Interface for Git

Push Files Using Git GUI Instead of NetBeans

Pushing changes to the remote Git Server using Git Gui instead of NetBeans may seem inconvenient at first. However, the more advanced your needs become with Git, the more you will find you need the additional functionality of Git Bash, Git Gui, and gitk. Tasks like resetting the branch to a previous revision, compressing the Git repository database, and visualizing repository history, can all be done with tools like Git Gui and gitk. I have Git Gui running when I am working in NetBeans or other IDEs; it becomes second nature.
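
As a rough illustration, the equivalent command-line operations for the tasks just mentioned might look like this (the commit reference is only a placeholder):

git reset --hard HEAD~1   # reset the current branch to the previous revision
git gc                    # compress the Git repository database
gitk --all &              # visualize the history of all branches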

Using Git Gui and gitk to Examine and Modify the Project Repository

Deploying to GlassFish

At this point, we have configured the Git Server, created the Jenkins build and deploy jobs, and configured our Git hook. We are ready to test our deployment pipeline. First, make sure your GlassFish domains are running. Also, recall we are assuming that an initial deployment of the application has occurred. This might be directly through the GlassFish Administration Console, through NetBeans, or via the command line. Recall, Jenkins will only be executing a re-deploy.

Check and Start GlassFish Domains
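
To check and start the domains from a command prompt, commands along these lines may help, assuming the same GLASSFISH_HOME variable and domain names created in part 1:

asadmin list-domains --domaindir "%GLASSFISH_HOME%\domains"
asadmin start-domain --domaindir "%GLASSFISH_HOME%\domains" testing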

To test the system, make an innocuous change to the Project. Commit the change to your local Git repository. Following that, push the change back to the remote Git Server repository using Git Gui. If the hook fired, you will see output to the Git Gui terminal window, echoed from the post-receive hook as it executed its script.

Push with Git Gui Triggering Jenkins Build

The post-receive hook executes the cURL command, which posts an HTTP request to Jenkins via the Jenkins Remote API. You should observe the Jenkins build job queued and running.

Jenkins Build Job Running

When the build completes, review the Parameters menu option in the left navigation menu. It shows that the environment parameter was passed from the post-receive hook to the build job. The build results window also provides test results, Git Build Data, and the changes pushed to Git that triggered the CI build.

Jenkins Build Job Results

The console output from the build provides a detailed view of the build process. Using the ‘-e’ (errors) flag with the Maven command increases the level of output detail. You see the details of Maven copying the required dependencies from the remote repository to the local workspace repository, prior to compilation. You see the unit tests being executed. Finally, you see the WAR file assembled and the required artifacts archived.

Regarding Maven Dependencies, you will only see the dependencies copied on the first build to an empty workspace. Maven does not re-pull dependencies if they already exist in the workspace’s local repository. To see the difference, empty your workspace and build the job, then immediately rebuild the job. Compare the console outputs of both jobs. You will see a significant difference in the Maven dependency activities.

Jenkins Build Job Console Results

Once the build job has completed successfully, you should notice the Jenkins deployment job running, triggered by the build job. When complete, note the detail that lists the exact build job that called the deployment job, and its build number. For example, the upstream build job #45 triggered the downstream deployment job #33. This linkage between upstream and downstream jobs is retained in the job’s history.

As before, review the Parameters menu option in the left navigation menu. It shows that the environment parameter was passed from the post-receive hook to the build job, and then on to the deployment job.

Jenkins Deployment Job Complete

A review of the console output will confirm that the artifacts were copied from the build job and the WAR file was deployed to the ‘testing’ GlassFish domain.

Jenkins Deployment Job Console Output

GlassFish

If the hook fired, and both the Jenkins build and deployment jobs ran successfully, you should observe that the project’s WAR file, containing your recent change, was deployed to the testing GlassFish domain.

Application Installed on GlassFish Server Testing Domain

You can verify this by calling the application’s RESTful ‘resources/helloWorld’ URI from your browser. Repeat the process by changing the output string, committing the change, and pushing. Then verify that your change was deployed.

Application Running on GlassFish Server Testing Domain

Jenkins Workflows

Using our deployment pipeline, we have two distinct workflow options:

  1. Continuous – Use Git hooks to build, test, and deploy the WAR file to the domain(s) of choice when changes are pushed. Any time a change is pushed, a build, test, and deploy should occur. This would be just for development at first. Once the project enters the testing phase of the SDLC, it would also include deployments to testing.
  2. Semi-Automated – Start the Jenkins build manually in the Jenkins browser-based Administration Console. This is more typical for a release to Production. Most teams are not comfortable extending continuous deployment functionality into Production. Often, a deployment team will deploy the project artifacts in a controlled and staged approach. The Jenkins build and/or deployment jobs both allow this, along with the ability to provide the environment parameter both jobs need.

Conclusion

In part 1, we learned how to create a simple Java EE web application project in NetBeans using Maven. We learned how to integrate JUnit for unit testing, and how to use Git to manage our source code.

In part 2, we learned how to configure a remote Git Server, how to configure Jenkins CI Server to clone our project from the Git Server, build, test, and assemble it. If the build was successful, we learned how to configure Jenkins to deploy our project to a specific GlassFish domain, based on the project’s stage in the SDLC. We achieved our goals of continuous integration, automated testing, and continuous deployment.

Going Forward

To extend and enhance our deployment pipeline, you might consider adding the following features: 1) further separate the Jenkins jobs by function, 2) add build and deploy notifications, 3) add the ability to deploy to multiple environments simultaneously (i.e. development and testing), 4) add additional testing to confirm the deployment to GlassFish, 5) configure a versioning and naming scheme for the deployed artifacts, and 6) add error handling if a parameter is not received or is not one of the expected values.


Building a Deployment Pipeline Using Git, Maven, Jenkins, and GlassFish (Part 1 of 2)

Build an automated deployment pipeline for your Java EE applications using leading open-source technologies, including NetBeans, Git, Maven, JUnit, Jenkins, and GlassFish. All source code for this post is available on GitHub.

System Diagram 3a

Introduction

In my earlier post, Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit, I demonstrated a basic deployment pipeline using leading open-source technologies. In this post, we will demonstrate a similar pipeline, substituting Jenkins CI Server for Hudson, and Oracle’s GlassFish Application Server for WebLogic Server. We will use the same NetBeans Java EE ‘Hello World’ RESTful Web Service sample project.

The three main goals of our deployment pipeline will be continuous integration, automated testing, and continuous deployment. Our objective is to automatically compile, test, assemble, and deploy our Java EE application to multiple environments, as the project progresses through the software development life cycle (SDLC).

Building a reliable deployment pipeline is complex and time-consuming. To make it as easy as possible in this post, I chose NetBeans IDE for development, Git Distributed Version Control System (DVCS) for managing our source code, Jenkins Continuous Integration (CI) Server for build automation, JUnit for automated unit testing, GlassFish for application hosting, and Apache Maven to manage our project’s dependencies. Maven will also manage the build and deployment process to GlassFish, along with Jenkins. The beauty of NetBeans is its out-of-the-box, built-in integration with Git, Maven, JUnit, and GlassFish. Likewise, Jenkins has plugin-based integration with Git, Maven, JUnit, and GlassFish. Also, Maven has plugin-based integration with GlassFish.

Maven is a powerful tool for managing modern software development projects. This post will only draw upon a small part of Maven’s functionality and plug-in architecture extensibility. Specifically, we will use the Maven GlassFish Plugin. According to the Java.net website, which hosts the plug-in project, ‘the Maven GlassFish Plugin is a Maven2 plugin allowing management of GlassFish domains and component deployments from within the Maven build life cycle.’

Requirements

To follow along with this post, I will assume you have recent versions of the following software installed and configured on your Windows OS-based computer (the process is nearly identical for Linux):

  1. NetBeans IDE. Current version: 7.4
  2. JUnit. Current version: 4.11 (included with NetBeans 7.4)
  3. GlassFish Server. Current version: 4.0 (included  with NetBeans 7.4)
  4. Jenkins CI Server. Current version: 1.538
  5. Apache Maven. Current version: 3.1.1
  6. cURL. Current version: 7.33.0
  7. Git with Git Gui and gitk. Current version: 1.8.4.3
  8. Necessary system environmental variables:
    M2_HOME, M2, JAVA_HOME, GLASSFISH_HOME, and PATH

GlassFish Domains

To simulate a simple deployment pipeline, we will create three GlassFish domains, simulating three common software environments, Development, Testing, and Production. A typical software project is promoted through these environments as it moves from development, to testing, and finally release to production. Each environment has distinct stakeholders with specific roles to play in the software development life cycle, including developers, testers, deployment teams, and end-users. Larger-scale, enterprise software development often includes other environments, such as Performance and Staging.

Create the domains from the command line using ‘asadmin’ commands such as the ones below. Note I have a ‘GLASSFISH_HOME’ system environment variable set up. The ports are your choice, but make sure they don’t conflict with existing installations of other applications, such as Jenkins, Tomcat, IIS, WebLogic, and so forth.

asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 7070 --instanceport 7071 production
asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 6060 --instanceport 6061 testing
asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 5050 --instanceport 5051 development

As part of the creation process, you’re prompted for an admin account and a new password. I kept the ‘admin’ username, but added a new password for each domain created. This password is the same as the one used in the separate password files (explained below).

C:\Users\gstaffor>asadmin create-domain --domaindir "%GLASSFISH_HOME%\domains" --adminport 7070 --instanceport 7071 production
Enter admin user name [Enter to accept default "admin" / no password]>admin
Enter the admin password [Enter to accept default of no password]>
Enter the admin password again>
Using port 7070 for Admin.
Using port 7071 for HTTP Instance.
Using default port 7676 for JMS.
Using default port 3700 for IIOP.
Using default port 8181 for HTTP_SSL.
Using default port 3820 for IIOP_SSL.
Using default port 3920 for IIOP_MUTUALAUTH.
Using default port 8686 for JMX_ADMIN.
Using default port 6666 for OSGI_SHELL.
Using default port 9009 for JAVA_DEBUGGER.
Distinguished Name of the self-signed X.509 Server Certificate is:
[CN={my_computer_name},OU=GlassFish,O=Oracle Corporation,L=Santa Clara,ST=California,C=US]
Distinguished Name of the self-signed X.509 Server Certificate is:
[CN={my_computer_name}-instance,OU=GlassFish,O=Oracle Corporation,L=Santa Clara,ST=California,C
=US]
Domain production created.
Domain production admin port is 7070.
Domain production admin user is "admin".
Command create-domain executed successfully.

Add the GlassFish domains to NetBeans’ Services -> Servers tab, and start them.

Create New GlassFish 4.0 Production Domain – Screen 1

Create New GlassFish 4.0 Production Domain – Screen 2

Create New GlassFish 4.0 Production Domain – Screen 3

Create New GlassFish 4.0 Production Domain – Screen 4

Setting Up the Project

To set up our NetBeans project, you can clone the repository on GitHub or build your own project from scratch and copy the files into the project. I will not spend a lot of time explaining the code since we have used it in earlier posts. This post is about the deployment pipeline system, not the project’s code.

If you choose to create a new project, first, create a new Maven ‘Project from Archetype’. Select the Archetype for a ‘web application using Java EE 7’ (webapp-javaee7).

New Maven Project – Screen 1

New Maven Project – Screen 2

I recommend you create the project inside of your local Git repository folder.

New Maven Project – Screen 3

Maven will execute a series of commands to create the default NetBeans project with dependencies.

Git

As a part of a development team using Git, you place your project on a remote Git Server. You and your team members each clone the repository on the Git Server to your local development environments. You and your team commit your code changes locally, then pull, merge, and push your changes back to the Git Server. Jenkins will pull the project’s source code from the remote Git Server.

In part 2, we will properly set up our project on the Git Server, exporting our existing repository into a new, bare repository on the Git Server. However, for brevity in part 1 of this post, we will just create a local Git repository. To start, create a new Git repository for the project. In NetBeans, select Team -> Git -> Initialize Repository… Choose the new Maven project folder.

Initialize New Git Repository

The initial view of the Maven project should look like the below screen grabs. Note the icons and the green files show that the project is part of the Git repository.

Initial Projects Tab View of New Maven Project

Initial Files Tab View of New Maven Project

Perform an initial commit of the project to Git to make sure everything is working.

Initial Commit of New Maven Project to Git

Next, copy the supplied HelloWorldResource.java and NameStorageBean.java classes into the project. The package classpath will be refactored by NetBeans. Copy all the remaining files and folders, including the (3) files in the WEB-INF folder, the properties folder with (3) properties files, and the passwords folder with (3) password files.

JUnit

Next, right-click on the NameStorageBean.java class and select Tools -> Create Tests. Replace the contents of the new NameStorageBeanTest.java file’s NameStorageBeanTest class with the contents of the supplied NameStorageBeanTest.java file. These are two very simple unit tests that will show how JUnit provides automated testing capabilities.

Create JUnit Tests – Screen 1

Create JUnit Tests – Screen 2

Project Object Model (POM)

Copy the contents of the supplied pom file into the new pom file. There is a lot of configuration in the supplied pom. It will be easier to copy the supplied pom file’s contents into your project than to try to configure it from scratch.

Basically, beyond the normal boilerplate pom configuration, we have defined (3) properties, (3) dependencies, and (5) build plugins. The three dependencies are junit, jersey-servlet, and javaee-web-api. The five plugins are maven-compiler-plugin, maven-war-plugin, maven-dependency-plugin, properties-maven-plugin, and the maven-glassfish-plugin. Each plugin contains individual plug-in-specific configuration. The names of the plugins should be sufficient to explain their primary purposes.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.blogpost</groupId>
<artifactId>HelloGlassFishMaven</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>war</packaging>
<name>HelloGlassFishMaven</name>
<properties>
<!-- Input Parameter - GlassFish properties file -->
<glassfish.properties.file.argument></glassfish.properties.file.argument>
<endorsed.dir>${project.build.directory}/endorsed</endorsed.dir>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
</dependency>
<dependency>
<groupId>com.sun.jersey</groupId>
<artifactId>jersey-servlet</artifactId>
<version>1.13</version>
</dependency>
<dependency>
<groupId>javax</groupId>
<artifactId>javaee-web-api</artifactId>
<version>7.0</version>
<scope>provided</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
<compilerArguments>
<endorseddirs>${endorsed.dir}</endorseddirs>
</compilerArguments>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.3</version>
<configuration>
<failOnMissingWebXml>false</failOnMissingWebXml>
<filteringDeploymentDescriptors>true</filteringDeploymentDescriptors>
<webResources>
<resource>
<directory>${basedir}/src/main/webapp/WEB-INF</directory>
<filtering>true</filtering>
<targetPath>WEB-INF</targetPath>
<includes>
<include>**/glassfish-web.xml</include>
</includes>
</resource>
</webResources>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.6</version>
<executions>
<execution>
<phase>validate</phase>
<goals>
<goal>copy</goal>
</goals>
<configuration>
<outputDirectory>${endorsed.dir}</outputDirectory>
<silent>true</silent>
<artifactItems>
<artifactItem>
<groupId>javax</groupId>
<artifactId>javaee-endorsed-api</artifactId>
<version>7.0</version>
<type>jar</type>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>properties-maven-plugin</artifactId>
<version>1.0-alpha-2</version>
<configuration>
<files>
<file>${basedir}/properties/${glassfish.properties.file.argument}.properties</file>
</files>
</configuration>
</plugin>
<plugin>
<groupId>org.glassfish.maven.plugin</groupId>
<artifactId>maven-glassfish-plugin</artifactId>
<version>2.1</version>
<configuration>
<glassfishDirectory>${GLASSFISH_HOME}</glassfishDirectory>
<user>${glassfish.user}</user>
<passwordFile>${basedir}/passwords/${glassfish.pwdfile}</passwordFile>
<echo>true</echo>
<debug>true</debug>
<terse>true</terse>
<domain>
<name>${glassfish.domain}</name>
<host>${glassfish.host}</host>
<adminPort>${glassfish.adminport}</adminPort>
</domain>
<components>
<component>
<name>${project.artifactId}</name>
<artifact>${project.build.directory}/${project.build.finalName}.war</artifact>
</component>
</components>
</configuration>
</plugin>
</plugins>
</build>
</project>

When complete, right-click on the project and do a ‘Build with Dependencies…’. Make sure everything builds. The final view of the project, with all its Maven-managed dependencies, should look like the two screen grabs shown below. Make sure to commit all your new code to Git.

Final Projects Tab View of Project

Final Files Tab View of Project

Maven and Properties Files

In part 2, we will be deploying our project to multiple GlassFish domains. Each domain’s configuration is different. We will use Java properties files to store each GlassFish domain’s configuration properties. Maven can read Java properties files using the Mojo Project’s Properties Maven Plugin. I introduced this plugin in an earlier post, Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit.

Each environment (Development, Testing, Production), represented by a GlassFish domain, has a separate properties file in the project (see the Files Tab view above). The properties files contain the configuration values the Maven GlassFish Plugin needs to deploy the project’s WAR file to each GlassFish domain. Since the project requires these build and deployment configurations, committing them to our Git repository and automating their use based on the target environment are two best practices.

# contents of all three files shown here
# development domain properties file
glassfish.domain=development
glassfish.host=glassfish4-app-server
glassfish.adminport=5050
glassfish.user=admin
glassfish.pwdfile=pwdfile_development
# testing domain properties file
glassfish.domain=testing
glassfish.host=glassfish4-app-server
glassfish.adminport=6060
glassfish.user=admin
glassfish.pwdfile=pwdfile_testing
# production domain properties file
glassfish.domain=production
glassfish.host=glassfish4-app-server
glassfish.adminport=7070
glassfish.user=admin
glassfish.pwdfile=pwdfile_production

In our project’s particular workflow, Maven accepts a single argument (‘glassfish.properties.file.argument’), which represents the environment we want to deploy to, such as ‘development’. The property value tells Maven which properties file to read, such as ‘development.properties’. Maven replaces the keys in the pom file with the values from the ‘development.properties’ file.
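
To make that substitution concrete, here is a sketch of how the value flows through the pom shown above; the resolution steps are inferred from the Properties Maven Plugin and Maven GlassFish Plugin configurations, not additional project code:

# illustrative property resolution (sketch)
glassfish.properties.file.argument=development
# 1. The Properties Maven Plugin reads ${basedir}/properties/development.properties
# 2. Its key/value pairs (glassfish.domain, glassfish.host, glassfish.adminport,
#    glassfish.user, glassfish.pwdfile) become Maven properties
# 3. The ${...} placeholders in the Maven GlassFish Plugin configuration resolve
#    to the 'development' domain's values at deploy time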

The properties file also supplies the name of the separate password file containing the admin user’s password, such as ‘pwdfile_development’; the pom combines it with ‘${basedir}/passwords/’ to form the full path. In an actual production environment, we would store encrypted password files on a secured file path. For simplicity, our example keeps them unencrypted within the project’s main directory.
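
For reference, the password file referenced by ‘glassfish.pwdfile’ typically follows the standard asadmin password-file format. A minimal, purely illustrative example (the password value below is a placeholder, not taken from the project):

# passwords/pwdfile_development (illustrative only)
AS_ADMIN_PASSWORD=myAdminPasswordHere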

System Diagram 3b

Other Maven capabilities could also achieve our deployment goals. For example, you might consider the Maven Release Plugin, or look at using Maven Build Profiles.

Testing the Pipeline

Although we have not built the second half of our deployment pipeline yet, we can still test the system at this early stage. All the necessary foundational elements are in place. To test the system, right-click on the Maven Project icon in the Projects tab and select Custom -> Goals… Enter the following Maven Goals: ‘properties:read-project-properties clean install glassfish:redeploy -e’. In the Properties text box, enter the following: ‘glassfish.properties.file.argument=testing’ (see screen grab below). This will execute a number of Maven Goals and associated commands, visible in the Output tab.

With this one simple command, we are asking Maven to 1) read in our Java properties file and password file, 2) clean the project, 3) pull down all our project’s dependencies, 4) compile the project’s code, 5) execute the unit tests with JUnit, 6) assemble the WAR file, and 7) deploy it to the ‘testing’ GlassFish domain using asadmin. The terse nature of the command really demonstrates the power of Maven to manage our project and the deployment pipeline!

Run Maven within NetBeans to Test Pipeline

If successful, you should see a message to that effect in the Output tab. Reviewing the contents of the Output tab gives you complete insight into the Maven process running under the NetBeans hood. We used Maven’s ‘-e’ switch, which prints full execution error messages, along with the ‘Show Debug Output’ option, to surface additional information about the process. The output contains all calls to Maven and, subsequently, to asadmin (GlassFish). You can learn a lot about using Maven and asadmin by studying the Debug Output.
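
For reference, the same step can be run outside NetBeans from a terminal. A minimal sketch, assuming Maven is installed and the command is run from the project’s root directory (the goal list and property are the same ones used in the NetBeans run above):

# deploy the project to the 'testing' GlassFish domain from the command line
mvn properties:read-project-properties clean install glassfish:redeploy \
    -Dglassfish.properties.file.argument=testing -e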

Conclusion

In the first part of this post, we learned how to create a simple Java EE web application project in NetBeans, using Maven. We learned how to integrate JUnit for automated testing, and how to use Git to manage our source code.

In the second half of this post, we will learn how to configure Jenkins CI Server to retrieve our project from the remote Git repository, build, test, and assemble it into a WAR file. If these steps are successful, Jenkins will deploy our project to a GlassFish domain or multiple domains, based on the project’s stage in the software development life cycle. We will demonstrate how to automate Jenkins to achieve true continuous integration and continuous deployment.
