Posts Tagged Ubuntu

Installing Puppet Master and Agents on Multiple VM Using Vagrant and VirtualBox

 Automatically provision multiple VMs with Vagrant and VirtualBox. Automatically install, configure, and test Puppet Master and Puppet Agents on those VMs.


Introduction

Note: this post and the accompanying source code were updated on 12/16/2014 to v0.2.1. The update contains several changes that simplify the install process.

Puppet Labs’ Open Source Puppet Agent/Master architecture is an effective solution for managing infrastructure and system configuration. However, for the average System Engineer or Software Developer, installing and configuring Puppet Master and Puppet Agent can be challenging. If the installation doesn’t work properly, the engineer is stuck troubleshooting, or trying to remove and re-install Puppet.

A better solution is to automate the installation of Puppet Master and Puppet Agent on virtual machines (VMs). Automating the installation process guarantees accuracy and consistency. Installing Puppet on VMs means the VMs can be snapshotted, cloned, or simply destroyed and recreated, if needed.

In this post, we will use Vagrant and VirtualBox to create three VMs. The VMs will be built from an Ubuntu 14.04.1 LTS (Trusty Tahr) Vagrant Box, previously on Vagrant Cloud, now on Atlas. We will use a single JSON-format configuration file to build all three VMs, automatically. As part of the Vagrant provisioning process, we will run a bootstrap shell script to install Puppet Master on the first VM (Puppet Master server) and Puppet Agent on the two remaining VMs (agent nodes).

Lastly, to test our Puppet installations, we will use Puppet to install some basic Puppet modules, including ntp and git on the server, and ntp, git, Docker, and Fig on the agent nodes.

All the source code for this project is on GitHub.

Vagrant

To begin the process, we will use the JSON-format configuration file to create the three VMs, using Vagrant and VirtualBox.

{
  "nodes": {
    "puppet.example.com": {
      ":ip": "192.168.32.5",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-master.sh"
    },
    "node01.example.com": {
      ":ip": "192.168.32.10",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    },
    "node02.example.com": {
      ":ip": "192.168.32.20",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    }
  }
}

The Vagrantfile uses the JSON-format configuration file to provision the three VMs with a single ‘vagrant up‘ command. That’s it: less than 30 lines of actual code in the Vagrantfile to create as many VMs as we need. For this post’s example, we will not need to add any port mappings, which can be done from the JSON configuration file (see the README.md for directions). The Vagrant Box we are using already has the correct ports opened.

If you have not previously used the Ubuntu Vagrant Box, it will take a few minutes the first time for Vagrant to download it to the local Vagrant Box repository.
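If you would rather fetch the box ahead of time, Vagrant can download and cache it explicitly. A minimal sketch, assuming a Vagrant version recent enough to resolve boxes by name from Atlas:

# pre-fetch the Ubuntu Trusty box and confirm it is cached locally
vagrant box add ubuntu/trusty64
vagrant box list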

# vi: set ft=ruby :

# Builds Puppet Master and multiple Puppet Agent Nodes using JSON config file
# Author: Gary A. Stafford

# read VM configurations from the JSON file
require 'json'

nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node

    config.vm.define node_name do |config|
      # configures all forwarding ports in JSON array
      ports = node_values['ports']
      ports.each do |port|
        config.vm.network :forwarded_port,
          host:  port[':host'],
          guest: port[':guest'],
          id:    port[':id']
      end

      config.vm.hostname = node_name
      config.vm.network :private_network, ip: node_values[':ip']

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values[':memory']]
        vb.customize ["modifyvm", :id, "--name", node_name]
      end

      config.vm.provision :shell, :path => node_values[':bootstrap']
    end
  end
end

Once provisioned, the three VMs, also referred to as ‘Machines’ by Vagrant, should appear, as shown below, in Oracle VM VirtualBox Manager.


Vagrant Machines in VM VirtualBox Manager

The names of the VMs, referenced in Vagrant commands, are the parent node names in the JSON configuration file (node_name), for example, ‘vagrant ssh puppet.example.com‘.
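For example, a few typical per-machine commands, using those node names:

vagrant status                          # list all three machines and their states
vagrant ssh puppet.example.com          # shell into the Puppet Master server
vagrant halt node01.example.com         # stop a single agent node
vagrant destroy -f node02.example.com   # tear down one node; 'vagrant up' rebuilds it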


Vagrant Machine Names

Bootstrapping Puppet Master Server

As part of the Vagrant provisioning process, a bootstrap script is executed on each of the VMs (script shown below). This script will do 98% of the required work for us. There is one for the Puppet Master server VM, and one for each agent node.

#!/bin/sh

# Run on VM to bootstrap Puppet Master server

if ps aux | grep "puppet master" | grep -v grep 2> /dev/null
then
    echo "Puppet Master is already installed. Exiting..."
else
    # Install Puppet Master
    wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb && \
    sudo dpkg -i puppetlabs-release-trusty.deb && \
    sudo apt-get update -yq && sudo apt-get upgrade -yq && \
    sudo apt-get install -yq puppetmaster

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5    puppet.example.com  puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10   node01.example.com  node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20   node02.example.com  node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add optional alternate DNS names to /etc/puppet/puppet.conf
    sudo sed -i 's/.*\[main\].*/&\ndns_alt_names = puppet,puppet.example.com/' /etc/puppet/puppet.conf

    # Install some initial puppet modules on Puppet Master server
    sudo puppet module install puppetlabs-ntp
    sudo puppet module install garethr-docker
    sudo puppet module install puppetlabs-git
    sudo puppet module install puppetlabs-vcsrepo
    sudo puppet module install garystafford-fig

    # symlink manifest from Vagrant synced folder location
    ln -s /vagrant/site.pp /etc/puppet/manifests/site.pp
fi

There are a few last commands we need to run ourselves, from within the VMs. Once the provisioning process is complete, run ‘vagrant ssh puppet.example.com‘ to connect to the newly provisioned Puppet Master server. Below are the commands we need to run within the ‘puppet.example.com‘ VM.

sudo service puppetmaster status # test that puppet master was installed
sudo service puppetmaster stop
sudo puppet master --verbose --no-daemonize
# Ctrl+C to kill puppet master
sudo service puppetmaster start
sudo puppet cert list --all # check for 'puppet' cert

According to Puppet’s website, ‘these steps will create the CA certificate and the puppet master certificate, with the appropriate DNS names included.’
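To double-check that the master’s certificate actually contains the alternate DNS names, you can inspect it with openssl. A quick sketch, assuming Puppet 3’s default ssldir of /var/lib/puppet/ssl:

sudo openssl x509 -noout -text \
  -in /var/lib/puppet/ssl/certs/puppet.example.com.pem | \
  grep -A 1 'Subject Alternative Name'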

Bootstrapping Puppet Agent Nodes

Now that the Puppet Master server is running, open a second terminal tab (‘Shift+Ctrl+T‘). Use the command, ‘vagrant ssh node01.example.com‘, to ssh into the new Puppet Agent node. The agent node bootstrap script should have already executed as part of the Vagrant provisioning process.

#!/bin/sh

# Run on VM to bootstrap Puppet Agent nodes
# http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet/

if ps aux | grep "puppet agent" | grep -v grep 2> /dev/null
then
    echo "Puppet Agent is already installed. Moving on..."
else
    sudo apt-get install -yq puppet
fi

if cat /etc/crontab | grep puppet 2> /dev/null
then
    echo "Puppet Agent is already configured. Exiting..."
else
    sudo apt-get update -yq && sudo apt-get upgrade -yq

    sudo puppet resource cron puppet-agent ensure=present user=root minute=30 \
        command='/usr/bin/puppet agent --onetime --no-daemonize --splay'

    sudo puppet resource service puppet ensure=running enable=true

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5    puppet.example.com  puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10   node01.example.com  node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20   node02.example.com  node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add agent section to /etc/puppet/puppet.conf
    echo "" && echo "[agent]\nserver=puppet" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null

    sudo puppet agent --enable
fi

Run the two commands below within both the ‘node01.example.com‘ and ‘node02.example.com‘ agent nodes.

sudo service puppet status # test that agent was installed
sudo puppet agent --test --waitforcert=60 # initiate certificate signing request (CSR)

The second command above will manually start Puppet’s Certificate Signing Request (CSR) process, generating the certificates and security credentials (private and public keys) issued by Puppet’s built-in certificate authority (CA). Each Puppet Agent node must first have its certificate signed by the Puppet Master. According to Puppet’s website, “Before puppet agent nodes can retrieve their configuration catalogs, they need a signed certificate from the local Puppet certificate authority (CA). When using Puppet’s built-in CA (that is, not using an external CA), agents will submit a certificate signing request (CSR) to the CA Puppet Master and will retrieve a signed certificate once one is available.”


Agent Node Starting Puppet’s Certificate Signing Request (CSR) Process

Back on the Puppet Master Server, run the following commands to sign the certificate(s) from the agent node(s). You may sign each node’s certificate individually, or wait and sign them all at once. Note the agent node(s) will wait for the Puppet Master to sign the certificate, before continuing with the Puppet Agent configuration run.

sudo puppet cert list # should see 'node01.example.com' cert waiting for signature
sudo puppet cert sign --all # sign the agent node certs
sudo puppet cert list --all # check for signed certs

Puppet Master Completing Puppet’s Certificate Signing Request (CSR) Process

Once the certificate signing process is complete, the Puppet Agent retrieves the client configuration from the Puppet Master and applies it to the local agent node. The Puppet Agent will execute all applicable steps in the site.pp manifest on the Puppet Master server, designated for that specific Puppet Agent node (i.e. ‘node node02.example.com {...}‘).


Configuration Run Completed on Puppet Agent Node

Below is the main site.pp manifest on the Puppet Master server, applied by Puppet Agent on the agent nodes.

node default {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git
}

node 'node01.example.com', 'node02.example.com' {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git, docker, fig
}

That’s it! You should now have one server VM running Puppet Master, and two agent node VMs running Puppet Agent. Both agent nodes should have successfully been registered with Puppet Master, and configured themselves based on the Puppet Master’s main manifest. Agent node configuration includes installing ntp, git, Fig, and Docker.
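To spot-check the results yourself, you could run a few version commands on an agent node. A minimal sketch; it assumes the modules placed each tool on the PATH:

vagrant ssh node01.example.com
ntpq -p            # ntp is installed and peering
git --version      # git is installed
sudo docker --version   # Docker is installed
fig --version      # Fig is installed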

Helpful Links

All the source code for this project is on GitHub.

Puppet Glossary (of terms):
https://docs.puppetlabs.com/references/glossary.html

Puppet Labs Open Source Automation Tools:
http://puppetlabs.com/misc/download-options

Puppet Master Overview:
http://ci.openstack.org/puppet.html

Install Puppet on Ubuntu:
https://docs.puppetlabs.com/guides/install_puppet/install_debian_ubuntu.html

Installing Puppet Master:
http://andyhan.linuxdict.com/index.php/sys-adm/item/273-puppet-371-on-centos-65-quick-start-i

Regenerating Node Certificates:
https://docs.puppetlabs.com/puppet/latest/reference/ssl_regenerate_certificates.html

Automating Development Environments with Vagrant and Puppet:
http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet



Install Latest Node.js and npm in a Docker Container

Install the latest versions of Node.js and npm, into a Docker container, with or without the need for root access. Easily update both applications to the latest versions.

Install and Confirm Node and npm

Ubuntu and Node

Recently, I was setting up a new development laptop with Ubuntu 14.10 (Utopic Unicorn). As part of the setup, I needed to install several development tools, including Node.js and npm. Researching the current recommendations for installing Node.js and npm on Ubuntu, I found that the traditional ‘apt-get‘ command does not always install the latest versions of either application. Additionally, ‘apt-get’ makes updating those versions difficult.

After a lot of investigation, I created three different snippets of code to install the latest copies of Node.js and npm. Some of my code came from Isaac Z. Schlueter‘s series of installation Gists, and a post on StackOverflow by Pascal Hartig. Joyent and others recommended Isaac’s Gists for installing earlier versions of Node.js and npm. Other code was found in posts by DigitalOcean. Versions are as follows:

  • Version 1: using ‘apt-get install’
  • Version 2: using curl, make, and npmjs.org’s install script
  • Version 3: version 2 without requiring ‘sudo’ to use npm*

*There is some debate on the use of ‘sudo’ with some earlier versions of npm. It appears not to be recommended with the latest versions of npm.

Docker

Docker containers and virtual machines (VM) are ideal platforms for developing and testing applications, locally. I often create a Docker container or VirtualBox VM, to install and test new scripts, before running them within our software environments. To test this code, I created three separate Docker containers, based on the official 14.04 Ubuntu base image, located on Docker Hub. I then executed each version of code within a container. After installation testing, I chose version 2 for my laptop.
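A sketch of that test workflow, assuming the Docker CLI is installed on the host (the container name is hypothetical):

# pull the official base image and start a throwaway test container
sudo docker pull ubuntu:14.04
sudo docker run -i -t --name node-test ubuntu:14.04 /bin/bash
# afterwards, list the test containers
sudo docker ps -a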


Docker Ubuntu Image and Containers

GitHub Gists

The three versions of install scripts on gist.github.com perform the following tasks:

  • Creates the Docker container
  • Updates Ubuntu system packages within the container
  • Creates a new ‘testuser’ account within the container
  • Installs required software to install Node.js, if necessary (curl, make, etc.)
  • Installs Node.js and npm
  • Installs some common full-stack JavaScript npm packages
  • Verifies installation locations and contents are correct

###############################################################################
# Version 1: using ‘apt-get install’
# Installs using apt-get
# Requires update to npm afterwards
# Harder to get latest copy of node
# Requires sudo to use npm
###############################################################################
# create new docker ubuntu container
sudo docker run -i -t ubuntu /bin/bash # drops you into container as root
# update and install all required packages (no sudo required as root)
# https://gist.github.com/isaacs/579814#file-only-git-all-the-way-sh
apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq curl git nano
# install from nodesource using apt-get
# https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-an-ubuntu-14-04-server
curl -sL https://deb.nodesource.com/setup | sudo bash - && \
apt-get install -yq nodejs build-essential
# fix npm - not the latest version installed by apt-get
npm install -g npm
# add user with sudo privileges within Docker container
# without adduser input questions
# http://askubuntu.com/questions/94060/run-adduser-non-interactively/94067#94067
USER="testuser" && \
adduser --disabled-password --gecos "" $USER && \
sudo usermod -a -G sudo $USER && \
echo "$USER:abc123" | chpasswd && \
su - $USER # switch to testuser
# install common full-stack JavaScript packages globally
# http://blog.nodejs.org/2011/03/23/npm-1-0-global-vs-local-installation
sudo npm install -g yo grunt-cli bower express
# optional, check locations and packages are correct
which node; node -v; which npm; npm -v; \
npm ls -g --depth=0
###############################################################################
# Version 2: using curl, make, and npmjs.org’s install script
# Installs using make and shell script
# Easier to update node and npm versions in future
# Requires sudo to use npm
###############################################################################
# create new docker ubuntu container
sudo docker run -i -t ubuntu /bin/bash # drops you into container as root
# update and install all required packages (no sudo required as root)
# https://gist.github.com/isaacs/579814#file-only-git-all-the-way-sh
apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq g++ libssl-dev apache2-utils curl git python make nano
# install latest Node.js and npm (the make install step takes a few minutes to build)
# https://gist.github.com/isaacs/579814#file-node-and-npm-in-30-seconds-sh
mkdir ~/node-latest-install && cd $_ && \
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1 && \
./configure && \
make install && \
curl https://www.npmjs.org/install.sh | sh
# add user with sudo privileges within Docker container
# without adduser input questions
# http://askubuntu.com/questions/94060/run-adduser-non-interactively/94067#94067
USER="testuser" && \
adduser --disabled-password --gecos "" $USER && \
sudo usermod -a -G sudo $USER && \
echo "$USER:abc123" | chpasswd && \
su - $USER # switch to testuser
# install common full-stack JavaScript packages globally
# http://blog.nodejs.org/2011/03/23/npm-1-0-global-vs-local-installation
sudo npm install -g yo grunt-cli bower express
# optional, check locations and packages are correct
which node; node -v; which npm; npm -v; \
npm ls -g --depth=0
###############################################################################
# Version 3: version 2 without requiring ‘sudo’ to use npm
# Does NOT require sudo to use npm
###############################################################################
# create new docker ubuntu container
sudo docker run -i -t ubuntu /bin/bash # drops you into container as root
# update and install all required packages (no sudo required as root)
# https://gist.github.com/isaacs/579814#file-only-git-all-the-way-sh
apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq g++ libssl-dev apache2-utils curl git python make nano
# add user with sudo privileges within Docker container
# without adduser input questions
# http://askubuntu.com/questions/94060/run-adduser-non-interactively/94067#94067
USER="testuser" && \
adduser --disabled-password --gecos "" $USER && \
sudo usermod -a -G sudo $USER && \
echo "$USER:abc123" | chpasswd && \
su - $USER # switch to testuser
# setting up npm for global installation without sudo
# http://stackoverflow.com/a/19379795/580268
MODULES="local" && \
echo prefix = ~/$MODULES >> ~/.npmrc && \
echo "export PATH=\$HOME/$MODULES/bin:\$PATH" >> ~/.bashrc && \
. ~/.bashrc && \
mkdir ~/$MODULES
# install Node.js and npm (the make install step takes a few minutes to build)
# https://gist.github.com/isaacs/579814#file-node-and-npm-in-30-seconds-sh
mkdir ~/node-latest-install && cd $_ && \
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1 && \
./configure --prefix=~/$MODULES && \
make install && \
curl https://www.npmjs.org/install.sh | sh
# install common fullstack JavaScript packages globally
npm install -g yo grunt-cli bower express
# optional, check locations and packages are correct
which node; node -v; which npm; npm -v; \
npm ls -g --depth=0

Running Code


Installing Node, npm, and New User Account


Installing and Verifying npm Packages



Dynamically Allocated Storage Issues with Ubuntu’s Cloud Images

Imagine you’ve provisioned dozens of nodes on your network using Ubuntu’s Cloud Images, expecting them to grow dynamically…

Background

According to Canonical, ‘Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud-platforms such as Amazon EC2, Openstack, Windows and LXC’. Ubuntu also offers disk images, or ‘boxes’, built specifically for Vagrant and VirtualBox. Boxes, according to Vagrant, ‘are the skeleton from which Vagrant machines are constructed. They are portable files which can be used by others on any platform that runs Vagrant to bring up a working environment‘. Ubuntu’s images are very popular with Vagrant users for building their VMs.

Assuming you have VirtualBox and Vagrant installed on your Windows, Mac OS X, or Linux system, with a few simple commands, ‘vagrant box add…’, ‘vagrant init…’, and ‘vagrant up’, you can provision a VM from one of these boxes.
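A minimal sketch of that workflow, using the 32-bit Saucy cloud image URL shown later in this post (the box name is arbitrary):

vagrant box add saucycloud32 http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-i386-vagrant-disk1.box
vagrant init saucycloud32
vagrant up
vagrant ssh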

Dynamically Allocated Storage

The Ubuntu Cloud Images (boxes), are Virtual Machine Disk (VMDK) format files. These VMDK files are configured for dynamically allocated storage, with a virtual size of 40 GB. That means the VMDK format file should grow to an actual size of 40 GB, as files are added. According to VirtualBox, the VM ‘will initially be very small and not occupy any space for unused virtual disk sectors, but will grow every time a disk sector is written to for the first time, until the drive reaches the maximum capacity chosen when the drive was created’.
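You can confirm both sizes for any VMDK file with VBoxManage. A quick sketch (the file path is hypothetical):

# 'Capacity' is the virtual size; 'Size on disk' is the actual size
VBoxManage showhdinfo ~/VirtualBox\ VMs/example/box-disk1.vmdk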

To illustrate dynamically allocated storage, below are three freshly provisioned VirtualBox virtual machines (VM), on three different hosts, all with different operating systems. One VM is hosted on Windows 7 Enterprise, another on Ubuntu 13.10 Desktop Edition, and the last on Mac OS X 10.6.8. The VMs were all created with Vagrant from the official Ubuntu Server 13.10 (Saucy Salamander) cloud images. The Windows and Ubuntu hosts used the 64-bit version. The Mac OS X host used the 32-bit version. According to VirtualBox Manager, on all three host platforms, the virtual size of the VMs is 40 GB and the actual size is about 1 GB.


VirtualBox Storage Settings on Windows Host


VirtualBox Storage Settings on Ubuntu Host


VirtualBox Storage Settings on Mac OS X Host

So What’s the Problem?

After a significant amount of troubleshooting Chef recipe problems on two different Ubuntu-hosted VMs, the issue with the cloud images became painfully clear. Other than a single (seemingly charmed) Windows host, none of the VMs I tested on Windows-, Ubuntu-, and Mac OS X-hosts would expand beyond 4 GB. Below are the file system disk space usage reports from the four hosts’ VMs. All four were created with the most current version of Vagrant (1.4.1), and managed with the most current version of VirtualBox (4.3.6.x).

Windows-hosted 64-bit Cloud Image VM #1:

vagrant@vagrant-ubuntu-saucy-64:/tmp$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4       40G  1.1G   37G   3% /
none           tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
udev           devtmpfs  241M   12K  241M   1% /dev
tmpfs          tmpfs      50M  336K   49M   1% /run
none           tmpfs     5.0M     0  5.0M   0% /run/lock
none           tmpfs     246M     0  246M   0% /run/shm
none           tmpfs     100M     0  100M   0% /run/user
/vagrant       vboxsf    233G  196G   38G  85% /vagrant

Windows-hosted 32-bit Cloud Image VM #2:

vagrant@vagrant-ubuntu-saucy-32:~$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      4.0G 1012M  2.8G  27% /
none           tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
udev           devtmpfs  245M  8.0K  245M   1% /dev
tmpfs          tmpfs      50M  336K   50M   1% /run
none           tmpfs     5.0M  4.0K  5.0M   1% /run/lock
none           tmpfs     248M     0  248M   0% /run/shm
none           tmpfs     100M     0  100M   0% /run/user
/vagrant       vboxsf    932G  209G  724G  23% /vagrant

Ubuntu-hosted 64-bit Cloud Image VM:

vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      4.0G  1.1G  2.7G  28% /
none           tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
udev           devtmpfs  241M  8.0K  241M   1% /dev
tmpfs          tmpfs      50M  336K   49M   1% /run
none           tmpfs     5.0M     0  5.0M   0% /run/lock
none           tmpfs     246M     0  246M   0% /run/shm
none           tmpfs     100M     0  100M   0% /run/user
/vagrant       vboxsf     74G   65G  9.1G  88% /vagrant

Mac OS X-hosted 32-bit Cloud Image VM:

vagrant@vagrant-ubuntu-saucy-32:~$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      4.0G 1012M  2.8G  27% /
none           tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
udev           devtmpfs  245M   12K  245M   1% /dev
tmpfs          tmpfs      50M  336K   50M   1% /run
none           tmpfs     5.0M     0  5.0M   0% /run/lock
none           tmpfs     248M     0  248M   0% /run/shm
none           tmpfs     100M     0  100M   0% /run/user
/vagrant       vboxsf    149G   71G   79G  48% /vagrant

On the first Windows-hosted VM (the only host that actually worked), the virtual SCSI disk device (sda1), formatted ‘ext4‘, had a capacity of 40 GB. But, on the other three hosts, the same virtual device only had a capacity of 4 GB. I tested the various 32- and 64-bit Ubuntu Server 12.10 (Quantal Quetzal), 13.04 (Raring Ringtail), and 13.10 (Saucy Salamander) cloud images. They all exhibited the same issue. However, the Ubuntu 12.04.3 LTS (Precise Pangolin) images worked fine on all three host OS systems.

To prove the issue was specifically with Ubuntu’s cloud images, I also tested boxes from Vagrant’s own repository, as well as other third-party providers. They all worked as expected, with no storage discrepancies. This was suggested in the only post I found on this issue, from StackExchange.

To confirm the Ubuntu-hosted VM would not expand beyond 4 GB, I also created a few multi-gigabyte files on each VM, totaling 4 GB. The VM’s virtual drive would not expand beyond the 4 GB limit to accommodate the new files, as demonstrated below on an Ubuntu-hosted VM:

vagrant@vagrant-ubuntu-saucy-64:~$ dd if=/dev/zero of=/tmp/big_file2.bin bs=1M count=2000
dd: writing '/tmp/big_file2.bin': No space left on device
742+0 records in
741+0 records out
777560064 bytes (778 MB) copied, 1.81098 s, 429 MB/s

vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      4.0G  3.7G  196K 100% /

The exact cause eludes me, but I tend to think the cloud images are the issue. I know they are capable of working, since the Ubuntu 12.04.3 cloud images expand to 40 GB, but the three most recent releases are limited to 4 GB. Whatever the cause, it’s a significant problem. Imagine you’ve provisioned 100 or 1,000 server nodes on your network from any of these cloud images, expecting them to grow to 40 GB, but really only having 10% of that potential. Worse, they have live production data on them, and suddenly run out of space.

Test Results

Below are the complete shell sessions from three hosts.

Windows-hosted 64-bit Cloud Image VM #1:

gstafford@windows-host: ~/Documents/GitHub
$ vagrant --version
Vagrant 1.4.1
gstafford@windows-host: /c/Program Files/Oracle/VirtualBox
$ VBoxManage.exe --version
4.3.6r91406
gstafford@windows-host: ~/Documents/GitHub
$ mkdir cloudimage-test-win
gstafford@windows-host: ~/Documents/GitHub
$ cd cloudimage-test-win
gstafford@windows-host: ~/Documents/GitHub/cloudimage-test-win
$ vagrant init
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
gstafford@windows-host: ~/Documents/GitHub/cloudimage-test-win
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'vagrant-vm-saucy-server'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Fixed port collision for 22 => 2222. Now on port 2200.
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2200 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
DL is deprecated, please use Fiddle
[default] Machine booted and ready!
[default] The guest additions on this VM do not match the installed version of
VirtualBox! In most cases this is fine, but in rare cases it can
cause things such as shared folders to not work properly. If you see
shared folder errors, please make sure the guest additions within the
virtual machine match the version of VirtualBox you have installed on
your host and reload your VM.
Guest Additions Version: 4.2.16
VirtualBox Version: 4.3
[default] Mounting shared folders...
[default] -- /vagrant
gstafford@windows-host: ~/Documents/GitHub/cloudimage-test-win
$ vagrant ssh
Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-14-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information disabled due to load higher than 1.0
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
vagrant@vagrant-ubuntu-saucy-64:/tmp$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 40G 1.1G 37G 3% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 12K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 233G 196G 38G 85% /vagrant
vagrant@vagrant-ubuntu-saucy-64:/tmp$ dd if=/dev/zero of=/tmp/big_file1.bin bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 5.11716 s, 410 MB/s
vagrant@vagrant-ubuntu-saucy-64:/tmp$ dd if=/dev/zero of=/tmp/big_file2.bin bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 7.78449 s, 269 MB/s
vagrant@vagrant-ubuntu-saucy-64:/tmp$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 40G 5.0G 33G 14% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 12K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 233G 196G 38G 85% /vagrant
vagrant@vagrant-ubuntu-saucy-64:/tmp$

Ubuntu-hosted 64-bit Cloud Image VM:

gstafford@ubuntu-host:~/GitHub/cloudimage-test-ubt$ vagrant --version
Vagrant 1.4.1
gstafford@ubuntu-host:/usr/bin$ vboxmanage --version
4.3.6r91406
gstafford@ubuntu-host:~/GitHub$ mkdir cloudimage-test-ubt
gstafford@ubuntu-host:~/GitHub$ cd cloudimage-test-ubt/
gstafford@ubuntu-host:~/GitHub/cloudimage-test-ubt$ vagrant init
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
gstafford@ubuntu-host:~/GitHub/cloudimage-test-ubt$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Box 'vagrant-vm-saucy-server' was not found. Fetching box from specified URL for
the provider 'virtualbox'. Note that if the URL does not have
a box for this provider, you should interrupt Vagrant now and add
the box yourself. Otherwise Vagrant will attempt to download the
full box prior to discovering this error.
Downloading box from URL: file:/home/gstafford/GitHub/cloudimage-test-ubt/saucy-server-cloudimg-amd64-vagrant-disk1.box
Extracting box...te: 23.8M/s, Estimated time remaining: 0:00:01))
Successfully added box 'vagrant-vm-saucy-server' with provider 'virtualbox'!
[default] Importing base box 'vagrant-vm-saucy-server'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Fixed port collision for 22 => 2222. Now on port 2200.
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2200 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
[default] Machine booted and ready!
[default] The guest additions on this VM do not match the installed version of
VirtualBox! In most cases this is fine, but in rare cases it can
cause things such as shared folders to not work properly. If you see
shared folder errors, please make sure the guest additions within the
virtual machine match the version of VirtualBox you have installed on
your host and reload your VM.
Guest Additions Version: 4.2.16
VirtualBox Version: 4.3
[default] Mounting shared folders...
[default] -- /vagrant
gstafford@ubuntu-host:~/GitHub/cloudimage-test-ubt$ vagrant ssh
Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-15-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information disabled due to load higher than 1.0
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
_____________________________________________________________________
WARNING! Your environment specifies an invalid locale.
This can affect your user experience significantly, including the
ability to manage packages. You may install the locales by running:
sudo apt-get install language-pack-en
or
sudo locale-gen en_US.UTF-8
To see all available language packs, run:
apt-cache search "^language-pack-[a-z][a-z]$"
To disable this message for all users, run:
sudo touch /var/lib/cloud/instance/locale-check.skip
_____________________________________________________________________
vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1.1G 2.7G 28% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 8.0K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 74G 65G 9.1G 88% /vagrant
vagrant@vagrant-ubuntu-saucy-64:~$ dd if=/dev/zero of=/tmp/big_file1.bin bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 4.72951 s, 443 MB/s
vagrant@vagrant-ubuntu-saucy-64:~$ dd if=/dev/zero of=/tmp/big_file2.bin bs=1M count=2000
dd: writing '/tmp/big_file2.bin': No space left on device
742+0 records in
741+0 records out
777560064 bytes (778 MB) copied, 1.81098 s, 429 MB/s
vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 3.7G 196K 100% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 8.0K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 74G 65G 9.1G 88% /vagrant
vagrant@vagrant-ubuntu-saucy-64:~$

Mac OS X-hosted 32-bit Cloud Image VM:

gstafford@mac-development:cloudimage-test-osx $ vagrant box add saucycloud32 http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-i386-vagrant-disk1.box
Downloading box from URL: http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-i386-vagrant-disk1.box
Extracting box...te: 1383k/s, Estimated time remaining: 0:00:01)
Successfully added box 'saucycloud32' with provider 'virtualbox'!
gstafford@mac-development:cloudimage-test-osx $ vagrant init saucycloud32
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
gstafford@mac-development:cloudimage-test-osx $ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'saucycloud32'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
[default] Machine booted and ready!
[default] The guest additions on this VM do not match the installed version of
VirtualBox! In most cases this is fine, but in rare cases it can
prevent things such as shared folders from working properly. If you see
shared folder errors, please make sure the guest additions within the
virtual machine match the version of VirtualBox you have installed on
your host and reload your VM.
Guest Additions Version: 4.2.16
VirtualBox Version: 4.3
[default] Mounting shared folders...
[default] -- /vagrant
gstafford@mac-development:cloudimage-test-osx $ vagrant ssh
Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-15-generic i686)
* Documentation: https://help.ubuntu.com/
System information disabled due to load higher than 1.0
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
vagrant@vagrant-ubuntu-saucy-32:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1012M 2.8G 27% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 245M 12K 245M 1% /dev
tmpfs tmpfs 50M 336K 50M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 248M 0 248M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 149G 71G 79G 48% /vagrant
vagrant@vagrant-ubuntu-saucy-32:~$ exit
gstafford@mac-development:MacOS$ VBoxManage list vms --long hdds
UUID: ee72161f-25c5-4714-ab28-6ee9929500e8
Parent UUID: base
State: locked write
Type: normal (base)
Location: /Users/gstafford/VirtualBox VMs/cloudimage-test-osx_default_1389280075070_65829/box-disk1.vmdk
Storage format: VMDK
Format variant: dynamic default
Capacity: 40960 MBytes
Size on disk: 1031 MBytes
In use by VMs: cloudimage-test-osx_default_1389280075070_65829 (UUID: 51df4527-99af-48be-92cd-ad73110be88c)

Resources

Ubuntu Cloud Images for Vagrant

Fourth Extended Filesystem (ext4)

Similar Issue on StackOverflow

VBoxManage Command-Line Interface

Ubuntu Releases



Updating Ubuntu Linux to the Latest JDK

Introduction

If you are a Java developer who is new to the Linux environment, installing and configuring Java updates can be a bit daunting. In the following post, we will update a VirtualBox VM running Canonical’s popular Ubuntu Linux operating system. The VM currently contains an earlier version of Java. We will update the VM to the latest release of Java.

All code for this post is available as Gists on GitHub.com, including a complete install script, explained at the end of this post.

Current Version of Java?

First, we will use the ‘update-alternatives --display java’ command to review all the versions of Java currently installed on the VM. We can have multiple copies installed, but only one will be configured and active. We can verify the active version using the ‘java -version’ command.

# preview java alternatives
update-alternatives --display java
# check current version of java
java -version

Check Current Version of Java

In the above example, the 1.7.0_17 JDK version of Java is configured and active. That version is located in the ‘/usr/lib/jvm/jdk1.7.0_17’ subdirectory. There are two other Java versions also installed but not active, an Oracle 1.7.0_17 JRE version and an older 1.7.0_04 JDK version. These three versions are referred to as ‘alternatives’, thus the ‘alternatives’ command. By selecting an alternative version of Java, we control which java binary executable file the system calls when the ‘java’ command is executed. In many software development environments, you may need different versions of Java, depending on each client project’s level of technology.

Alternatives

According to About.com, alternatives ‘make it possible for several programs fulfilling the same or similar functions to be installed on a single system at the same time. A generic name in the filesystem is shared by all files providing interchangeable functionality. The alternatives system and the system administrator together determine which actual file is referenced by this generic name. The generic name is not a direct symbolic link to the selected alternative. Instead, it is a symbolic link to a name in the alternatives directory, which in turn is a symbolic link to the actual file referenced.’

We can see this system at work by changing to our ‘/usr/bin’ directory. This directory contains the majority of binary executables on the system. Executing an ‘ls -Al /usr/bin/* | grep -e java -e jar -e appletviewer -e mozilla-javaplugin.so’ command, we see that each Java executable is actually a symbolic link to the alternatives directory, not a binary executable file.

# view java-related executables
ls -Al /usr/bin/* | \
grep -e java -e jar -e jexec -e appletviewer -e mozilla-javaplugin.so
# view all the commands which support alternatives
update-alternatives --get-selections
# view all java-related executables which support alternatives
update-alternatives --get-selections | \
grep -e java -e jar -e jexec -e appletviewer -e mozilla-javaplugin.so

Java Symbolic Links to Alternatives Directory

To find out all the commands which support alternatives, you can use the ‘update-alternatives --get-selections’ command. We can use a similar command to get just the Java commands, ‘update-alternatives --get-selections | grep -e java -e jar -e appletviewer -e mozilla-javaplugin.so’.


Java-Related Executable Alternatives (view after update)

Computer Architecture?

Next, we need to determine the computer processor architecture of the VM. The architecture determines which version of Java to download. The machine that hosts our VM may have a 64-bit architecture (also known as x86-64, x64, and amd64), while the VM might have a 32-bit architecture (also known as IA-32 or x86). Trying to install 64-bit versions of software onto 32-bit VMs is a common mistake.

The VM’s architecture was originally displayed with the ‘java -version’ command, above. To confirm the 64-bit architecture we can use either the ‘uname -a’ or ‘arch’ command.

# determine the processor architecture
uname -a
arch

Find Your Processor’s Architecture

JRE or JDK?

One last decision. The Java Runtime Environment’s (JRE) purpose is to run Java applications. The JRE covers most end-users’ needs. The Java Development Kit’s (JDK) purpose is to develop Java applications. The JDK includes a complete JRE, plus tools for developing, debugging, and monitoring Java applications. As developers, we will choose to install the JDK.
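One quick way to tell which you currently have: the JDK ships the javac compiler, while a bare JRE does not. A small sketch:

# if javac is present and on the PATH, a JDK is installed
if which javac > /dev/null; then javac -version; else echo 'javac not found - likely JRE only'; fi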

Download Latest Version

In the above screen grab, you see our VM is running a 64-bit version of Ubuntu 12.04.3 LTS (Precise Pangolin). Therefore, we will download the most recent version of the 64-bit Linux JDK. We could choose either Oracle’s commercial version of Java or the OpenJDK version. According to Oracle, the ‘OpenJDK is an open-source implementation of the Java Platform, Standard Edition (Java SE) specifications’. We will choose the latest commercial version of Oracle’s JDK. At the time of this post, that is JDK 7u45 (aka 1.7.0_45-b18).

The Linux file formats available for download are a .rpm (Red Hat Package Manager) file and a .tar.gz file (aka tarball). For this post, we will download the tarball, the ‘jdk-7u45-linux-x64.tar.gz’ file.


Current JDK Downloads

Extract Tarball

We will use the command ‘sudo tar -zxvf jdk-7u45-linux-x64.tar.gz -C /usr/lib/jvm’, to extract the files directly to the ‘/usr/lib/jvm’ folder. This folder contains all previously installed versions of Java. Once the tarball’s files are extracted, we should see a new directory containing the new version of Java, ‘jdk1.7.0_45’, in the ‘/usr/lib/jvm’ directory.

# extract new version of java from the downloaded tarball
cd ~/Downloads/Java
ls
sudo tar -zxvf jdk-7u45-linux-x64.tar.gz -C /usr/lib/jvm/
# change directories to the location of the extracted version of java
cd /usr/lib/jvm/
ls

Versions of Java on the VM

Installation

There are two configuration modes available in the alternatives system, manual and automatic mode. According to die.net, ‘when a link group is in manual mode, the alternatives system will not (automatically) make any changes to the system administrator’s settings. When a link group is in automatic mode, the alternatives system ensures that the links in the group point to the highest priority alternatives appropriate for the group.’

We will first install and configure the new version of Java in manual mode. To install the new version of Java, we run ‘update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_45/jre/bin/java 4’. Note the last parameter, ‘4’, the priority. Why ‘4’? If we chose to use automatic mode, as we will a little later, we want our new Java version to have the highest numeric priority. In automatic mode, the system looks at the priority to determine which version of Java it will run. In the post’s first screen grab, note each of the three installed Java versions had different priorities: 1, 2, and 3. If we want to use automatic mode later, we must set a higher priority on our new version of Java, ‘4’ or greater. We will set it now as part of the install, so we can use it later in automatic mode.

Configuration

First, to configure (activate) the new Java version in alternatives manual mode, we will run ‘update-alternatives --config java’. We are prompted to choose from the list of alternatives for the java executable, which has now grown from three to four choices. Choose the new JDK version of Java, which we just installed.

That’s it. The system has been instructed to use the version we selected as the Java alternative. Now, when we run the ‘java’ command, the system will access the newly configured JDK version. To verify, we rerun the ‘java -version’ command. The version of Java has changed since the first time we ran the command (see first screen grab).

# find a higher priority number
update-alternatives --display java
# install new version of java
sudo update-alternatives --install /usr/bin/java java \
/usr/lib/jvm/jdk1.7.0_45/jre/bin/java 4
# configure new version of java
update-alternatives --config java
# select the new version when prompted

Install and Configure New Version of Java

Now let’s switch to automatic mode. To switch to automatic mode, we use ‘update-alternatives --auto java’. Notice how the mode has changed in the screen grab below and our version with the highest priority is selected.
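A short sketch of the switch, and of confirming the mode afterwards:

sudo update-alternatives --auto java
update-alternatives --display java   # first line should now read 'java - auto mode'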


Switching to Auto

Other Java Command Alternatives

We will repeat this process for the other Java-related executables. Many are part of the JDK Tools and Utilities. Java-related executables include the Javadoc Tool (javadoc), Java Web Start (javaws), Java Compiler (javac), the Java Archive Tool (jar), javap, javah, appletviewer, and the Java Plugin for Linux. Note if you are updating a headless VM, you would not have a web browser installed. Therefore, it would not be necessary to configure the mozilla-javaplugin.

# install and configure other java executable alternatives in manual mode
# install and update Java Web Start (javaws)
update-alternatives --display javaws
sudo update-alternatives --install /usr/bin/javaws javaws \
/usr/lib/jvm/jdk1.7.0_45/jre/bin/javaws 20000
update-alternatives --config javaws
# install and update Java Compiler (javac)
update-alternatives --display javac
sudo update-alternatives --install /usr/bin/javac javac \
/usr/lib/jvm/jdk1.7.0_45/bin/javac 20000
update-alternatives --config javac
# install and update Java Archive Tool (jar)
update-alternatives --display jar
sudo update-alternatives --install /usr/bin/jar jar \
/usr/lib/jvm/jdk1.7.0_45/bin/jar 20000
update-alternatives --config jar
# jar signing and verification tool (jarsigner)
update-alternatives --display jarsigner
sudo update-alternatives --install /usr/bin/jarsigner jarsigner \
/usr/lib/jvm/jdk1.7.0_45/bin/jarsigner 20000
update-alternatives --config jarsigner
# install and update Java Archive Tool (javadoc)
update-alternatives --display javadoc
sudo update-alternatives --install /usr/bin/javadoc javadoc \
/usr/lib/jvm/jdk1.7.0_45/bin/javadoc 20000
update-alternatives --config javadoc
# install and update Java Plugin for Linux (mozilla-javaplugin.so)
update-alternatives --display mozilla-javaplugin.so
sudo update-alternatives --install \
/usr/lib/mozilla/plugins/libjavaplugin.so mozilla-javaplugin.so \
/usr/lib/jvm/jdk1.7.0_45/jre/lib/amd64/libnpjp2.so 20000
update-alternatives --config mozilla-javaplugin.so
# install and update Java disassembler (javap)
update-alternatives --display javap
sudo update-alternatives --install /usr/bin/javap javap \
/usr/lib/jvm/jdk1.7.0_45/bin/javap 20000
update-alternatives --config javap
# install and update file that creates C header files and C stub (javah)
update-alternatives --display javah
sudo update-alternatives --install /usr/bin/javah javah \
/usr/lib/jvm/jdk1.7.0_45/bin/javah 20000
update-alternatives --config javah
# jexec utility executes command inside the jail (jexec)
update-alternatives --display jexec
sudo update-alternatives --install /usr/bin/jexec jexec \
/usr/lib/jvm/jdk1.7.0_45/lib/jexec 20000
update-alternatives --config jexec
# install and update file to run applets without a web browser (appletviewer)
update-alternatives --display appletviewer
sudo update-alternatives --install /usr/bin/appletviewer appletviewer \
/usr/lib/jvm/jdk1.7.0_45/bin/appletviewer 20000
update-alternatives --config appletviewer

Configure Java Plug-in for Linux

Verify Java Version

Verify Java Version is Latest

JAVA_HOME

Many applications which need the JDK/JRE to run look up the JAVA_HOME environment variable for the location of the Java compiler/interpreter. The most common approach is to hard-wire the JAVA_HOME variable to the current JDK directory. Use the ‘sudo nano ~/.bashrc’ command to open the bashrc file. Add or modify the following line, ‘export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45’. Remember to also add the java executable path to the PATH variable, ‘export PATH=$PATH:$JAVA_HOME/bin’. Lastly, execute a ‘bash --login’ command for the changes to be visible in the current session.

Alternately, we could use a symbolic link to ‘default-java’; a minimal sketch of this approach follows the code below. There are several good posts on the Internet on using the ‘default-java’ link.

# open bashrc
sudo nano ~/.bashrc
#add these lines to bashrc file
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
export PATH=$PATH:$JAVA_HOME/bin
# for the changes to be visible in the current session
bash --login
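
A minimal sketch of the ‘default-java’ approach mentioned above, assuming the same /usr/lib/jvm layout:

# point a stable 'default-java' symlink at the new JDK;
# JAVA_HOME then survives future JDK updates - only the link changes
sudo ln -sfn /usr/lib/jvm/jdk1.7.0_45 /usr/lib/jvm/default-java
# in ~/.bashrc:
export JAVA_HOME=/usr/lib/jvm/default-java
export PATH=$PATH:$JAVA_HOME/bin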

Complete Scripted Example

Below is a complete script to install or update the latest JDK on Ubuntu. Simply download the tarball to your home directory, along with the below script, available on GitHub. Execute the script with a ‘sh install_java_complete.sh’. All java executables will be installed and configured as alternatives in automatic mode. The JAVA_HOME and PATH environment variables will also be set.

#!/bin/bash
#################################################################
# author: Gary A. Stafford
# date: 2013-12-17
# source: http://www.programmaticponderings.com
# description: complete code for install and config on Ubuntu
# of jdk 1.7.0_45 in alternatives automatic mode
#################################################################
########## variables start ##########
# priority value
priority=20000
# path to binaries directory
binaries=/usr/bin
# path to libraries directory
libraries=/usr/lib
# path to new java version
javapath=$libraries/jvm/jdk1.7.0_45
# path to downloaded java version
java_download=~/jdk-7u45-linux-x64.tar.gz
########## variables end ##########
cd $libraries
[ -d jvm ] || sudo mkdir jvm
cd ~/
# change permissions on jvm subdirectory
sudo chmod +x $libraries/jvm/
# extract new version of java from the downloaded tarball
if [ -f $java_download ]; then
    sudo tar -zxf $java_download -C $libraries/jvm
else
    echo "Cannot locate Java download. Check 'java_download' variable."
    echo 'Exiting script.'
    exit 1
fi
# install and config java web start (java)
sudo update-alternatives --install $binaries/java java \
$javapath/jre/bin/java $priority
sudo update-alternatives --auto java
# install and config java web start (javaws)
sudo update-alternatives --install $binaries/javaws javaws \
$javapath/jre/bin/javaws $priority
sudo update-alternatives --auto javaws
# install and config java compiler (javac)
sudo update-alternatives --install $binaries/javac javac \
$javapath/bin/javac $priority
sudo update-alternatives --auto javac
# install and config java archive tool (jar)
sudo update-alternatives --install $binaries/jar jar \
$javapath/bin/jar $priority
sudo update-alternatives --auto jar
# jar signing and verification tool (jarsigner)
sudo update-alternatives --install $binaries/jarsigner jarsigner \
$javapath/bin/jarsigner $priority
sudo update-alternatives --auto jarsigner
# install and config java tool for generating api documentation in html (javadoc)
sudo update-alternatives --install $binaries/javadoc javadoc \
$javapath/bin/javadoc $priority
sudo update-alternatives --auto javadoc
# install and config java disassembler (javap)
sudo update-alternatives --install $binaries/javap javap \
$javapath/bin/javap $priority
sudo update-alternatives --auto javap
# install and config file that creates c header files and c stub (javah)
sudo update-alternatives --install $binaries/javah javah \
$javapath/bin/javah $priority
sudo update-alternatives --auto javah
# jexec utility executes command inside the jail (jexec)
sudo update-alternatives --install $binaries/jexec jexec \
$javapath/lib/jexec $priority
sudo update-alternatives --auto jexec
# install and config file to run applets without a web browser (appletviewer)
sudo update-alternatives --install $binaries/appletviewer appletviewer \
$javapath/bin/appletviewer $priority
sudo update-alternatives --auto appletviewer
# install and config java plugin for linux (mozilla-javaplugin.so)
if [ -d "$libraries/mozilla/plugins" ]; then
    sudo update-alternatives --install \
        $libraries/mozilla/plugins/libjavaplugin.so mozilla-javaplugin.so \
        $javapath/jre/lib/amd64/libnpjp2.so $priority
    sudo update-alternatives --auto mozilla-javaplugin.so
else
    echo 'Mozilla Firefox not found. Java Plugin for Linux will not be installed.'
fi
echo
########## add JAVA_HOME start ##########
grep -q 'JAVA_HOME' ~/.bashrc
if [ $? -ne 0 ]; then
    echo 'JAVA_HOME not found. Adding JAVA_HOME to ~/.bashrc'
    echo >> ~/.bashrc
    echo 'export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45' >> ~/.bashrc
    echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
else
    echo 'JAVA_HOME found. No changes required to ~/.bashrc.'
fi
########## add JAVA_HOME end ##########
echo
echo '*** Java update script completed successfully ***'
echo
# confirm alternative for java-related executables
update-alternatives --get-selections | \
grep -e java -e jar -e jexec -e appletviewer -e mozilla-javaplugin.so
# confirm JAVA_HOME environment variable is set
echo "To confirm JAVA_HOME, execute 'bash --login' and then 'echo \$JAVA_HOME'"

Below is a test of the script on a fresh Vagrant VM of an Ubuntu Cloud Image of Ubuntu Server 13.10 (Saucy Salamander). Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud-platforms such as Amazon EC2, Openstack, Windows, LXC, and Vagrant. The script was able to successfully install and configure the JDK, as well as the JAVA_HOME and PATH environment variables.

Test New Install on Vagrant Ubuntu VM

Deleting Old Versions?

Before deciding to completely delete previously installed versions of Java from the ‘/usr/lib/jvm’ directory, ensure there are no links to those versions in OS and application configuration files. Many applications, such as NetBeans, Eclipse, soapUI, and WebLogic Server, may contain their own Java configurations. If they don’t use the JAVA_HOME variable, they should be updated, where possible, to reflect the current active Java version.
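
As a minimal sketch of that check (assuming the old release is the jdk1.7.0_45 install used earlier in this post), you can list which alternatives and common configuration files still point at a JVM under ‘/usr/lib/jvm’:

# list alternatives that still reference a JVM under /usr/lib/jvm
update-alternatives --get-selections | grep '/usr/lib/jvm'
# search common configuration files for hard-coded paths to the old JDK
# (adjust the version string to the release you plan to delete)
grep -H 'jdk1.7.0_45' ~/.bashrc /etc/environment 2>/dev/null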

Resources

Ubuntu Linux: Install Latest Oracle Java 7

update-alternatives(8) – Linux man page

Configuring different JDKs with alternatives

Ubuntu Documentation: Java


Setting Up the Nexus 7 for Development with ADT on Ubuntu

Recently, I purchased a Google Nexus 7. Excited to begin developing for this Android 4.2 device, I first needed to prepare my development environment. The process of configuring my Ubuntu Linux-based laptop to debug directly on the Nexus 7 was fairly easy once I identified all the necessary steps. I thought I would share my process for those who are trying to do the same. Note the following steps assume you already have Java installed on your development computer, and that your Nexus 7 is connected via USB.

Configuration

1) Install ADT Bundle: Download and extract the Android Developer Tools (ADT) Bundle from http://developer.android.com/sdk/index.html. I installed the Linux 64-bit version for use on my Ubuntu 12.10 laptop. The bundle, according to the website, includes ‘the essential Android SDK components and a version of the Eclipse IDE with built-in ADT to streamline your Android app development.’

2) Install IA32 Libraries: Install the ia32 shared libraries using Synaptic Package Manager, or the following command, ‘sudo apt-get install ia32-libs’. I received some initial errors with ADT, until I found a post on Stack Overflow that suggested loading the libraries; they did the trick.

3) Configure Nexus 7 to Auto-Mount: To auto-mount the Nexus 7 on your computer, you need to make some changes to your computer’s system configuration. Follow this post, ‘http://bernaerts.dyndns.org/linux/247-ubuntu-automount-nexus7-mtp‘, to ‘configure your Ubuntu computer to directly access your Nexus 7 exported filesystem in MTP mode as soon as you plug it to a USB port.’ It looked a bit intimidating at first, but it actually turned out to be pretty easy and only took a few minutes.

Nexus 7 Device Mounted on Desktop via USB

4) Setup ADT to Debug Nexus 7: To debug on the Nexus 7 from ADT via USB, according to this next post, ‘you need to add a udev rules file that contains a USB configuration for each type of device you want to use for development.’ Follow the steps in that post, ‘http://developer.android.com/tools/device.html‘, to create the rules file, similar to step 3. A sketch of the resulting rule appears below.
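
Below is a minimal sketch of such a rules file, created from the Terminal. It is based on the Android documentation linked above: ‘18d1’ is Google’s USB vendor ID, and the ‘51-android.rules’ file name follows that guide.

# create a udev rule for Google (Nexus 7) devices and make it world-readable
echo 'SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0666", GROUP="plugdev"' | \
sudo tee /etc/udev/rules.d/51-android.rules
sudo chmod a+r /etc/udev/rules.d/51-android.rules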

Adding the Android Debug Rule (step 4)

5) Setup Nexus 7 for USB Debugging: The post mentioned in step 4 also discusses setting up your Nexus 7 to enable USB debugging. The post instructs you to ‘go to Settings > About phone and tap Build number seven times. Return to the previous screen to find Developer options.’ From there, check the ‘USB debugging’ box. There are many more options you may wish to experiment with later.
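
As an optional check, not part of the original steps, the adb tool included in the ADT Bundle can confirm the device is attached (the serial number shown is hypothetical):

# run from the ADT Bundle's sdk/platform-tools directory
./adb devices
# a response similar to '015d2d42d4fa180c  device' confirms the Nexus 7 is connected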

Displaying the Nexus 7 Developer Options (step 5)

Confirming USB Debugging is Connected on Nexus 7

6) Update your Software: After these first five steps, I suggest updating your software and restarting your computer. Run the following command to make sure your system is up-to-date, ‘sudo apt-get update && sudo apt-get upgrade’. I always run this before and after any software installations to make sure my system is current.

After completing step 3, the Nexus 7 device should appear on your Launcher. However, it is not until steps 4 and 5 that it will appear in ADT.

Demonstration

Below is a simple demonstration of debugging an Android application on the Nexus 7 from ADT, via USB. Although ADT allows you to configure an Android Virtual Device (AVD), direct debugging on the Nexus 7 is substantially faster. The Nexus 7 AVD emulation was incredibly slow, even on my 64-bit, Core i5 laptop.

Creating a Simple ‘Hello World’ Application in ADT

Choosing the Nexus 7 from Android Device Chooser

Debugging the Hello World Application on Nexus 7


Object Tracking on the Raspberry Pi with C++, OpenCV, and cvBlob

Use C++ with OpenCV and cvBlob to perform image processing and object tracking on the Raspberry Pi, using a webcam.

Source code and compiled samples are now available on GitHub. The post below describes the original code on the ‘Master’ branch. As of May 2014, there is a revised and improved version of the project on the ‘rev05_2014’ branch, on GitHub. The README.md details the changes and also describes how to install OpenCV, cvBlob, and all dependencies!

Introduction

As part of a project with a local FIRST Robotics Competition (FRC) team, I’ve been involved in developing a Computer Vision application for use on the Raspberry Pi. Our FRC team’s goal is to develop an object tracking and target acquisition application that could be run on the Raspberry Pi, as opposed to the robot’s primary embedded processor, a National Instruments cRIO-FRC II. We chose to work in C++ for its speed. We also decided to test two popular open-source Computer Vision (CV) libraries, OpenCV and cvBlob.

Due to its single ARM1176JZF-S 700 MHz processor, a significant limitation of the Raspberry Pi is its ability to perform complex operations in real-time, such as image processing. In an earlier post, I discussed using Motion to detect motion with a webcam on the Raspberry Pi. Although the Raspberry Pi was capable of running Motion, it required a greatly reduced capture size and frame-rate. And even then, the Raspberry Pi’s ability to process the webcam’s feed was very slow. I had doubts it would be able to meet the processor-intense requirements of this project.

Development for the Raspberry Pi

Using C++ in NetBeans 7.2.1 on Ubuntu 12.04.1 LTS and 12.10, I wrote several small pieces of code to demonstrate the Raspberry Pi’s ability to perform basic image processing and object tracking. Parts of the following code are based on several OpenCV and cvBlob code examples found during my research. Many of those examples are linked at the end of this article. Examples of cvBlob are especially hard to find.

Project in NetBeans

The Code

There are five files: ‘main.cpp’, ‘testfps.cpp (testfps.hpp)’, and ‘testcvblob.cpp (testcvblob.hpp)’. The main.cpp file’s main method calls the test methods in the other two files. The cvBlob library only works with the pre-OpenCV 2.0 C API. Therefore, I wrote all the code using the older objects and methods. The code is not written using the newer OpenCV 2.0 conventions. For example, cvBlob uses 1.0’s ‘IplImage’ image type instead of 2.0’s newer ‘CvMat’ image type. My next project is to re-write the cvBlob code to use OpenCV 2.0 conventions and/or find a newer library. The cvBlob library offered so many advantages, I felt not using the newer OpenCV 2.0 features was still worthwhile.

Main Program Method (main.cpp)

/*
* File: main.cpp
* Author: Gary Stafford
* Description: Program entry point
* Created: February 3, 2013
*/
#include <stdio.h>
#include <sstream>
#include <stdlib.h>
#include <iostream>
#include "testfps.hpp"
#include "testcvblob.hpp"
using namespace std;
int main(int argc, char* argv[]) {
int captureMethod = 0;
int captureWidth = 0;
int captureHeight = 0;
if (argc == 4) { // user input parameters with call
captureMethod = strtol(argv[1], NULL, 0);
captureWidth = strtol(argv[2], NULL, 0);
captureHeight = strtol(argv[3], NULL, 0);
} else { // user did not input parameters with call
cout << endl << "Demonstrations/Tests: " << endl;
cout << endl << "(1) Test OpenCV - Show Webcam" << endl;
cout << endl << "(2) Test OpenCV - No Webcam" << endl;
cout << endl << "(3) Test cvBlob - Show Image" << endl;
cout << endl << "(4) Test cvBlob - No Image" << endl;
cout << endl << "(5) Test Blob Tracking - Show Webcam" << endl;
cout << endl << "(6) Test Blob Tracking - No Webcam" << endl;
cout << endl << "Input test # (1-6): ";
cin >> captureMethod;
// test 3 and 4 don't require width and height parameters
if (captureMethod != 3 && captureMethod != 4) {
cout << endl << "Input capture width (pixels): ";
cin >> captureWidth;
cout << endl << "Input capture height (pixels): ";
cin >> captureHeight;
cout << endl;
if (captureWidth <= 0) { // reject non-positive widths
cout << endl << "Width value incorrect" << endl;
return -1;
}
if (captureHeight <= 0) { // reject non-positive heights
cout << endl << "Height value incorrect" << endl;
return -1;
}
}
}
switch (captureMethod) {
case 1:
TestFpsShowVideo(captureWidth, captureHeight);
break;
case 2:
TestFpsNoVideo(captureWidth, captureHeight);
break;
case 3:
DetectBlobsShowStillImage();
break;
case 4:
DetectBlobsNoStillImage();
break;
case 5:
DetectBlobsShowVideo(captureWidth, captureHeight);
break;
case 6:
DetectBlobsNoVideo(captureWidth, captureHeight);
break;
default:
break;
}
return 0;
}

Tests 3-6 (testcvblob.hpp)

// -*- C++ -*-
/*
* File: testcvblob.hpp
* Author: Gary Stafford
* Created: February 3, 2013
*/
#ifndef TESTCVBLOB_HPP
#define TESTCVBLOB_HPP
int DetectBlobsNoStillImage();
int DetectBlobsShowStillImage();
int DetectBlobsNoVideo(int captureWidth, int captureHeight);
int DetectBlobsShowVideo(int captureWidth, int captureHeight);
#endif /* TESTCVBLOB_HPP */

Tests 3-6 (testcvblob.cpp)

/*
* File: testcvblob.cpp
* Author: Gary Stafford
* Description: Track blobs using OpenCV and cvBlob
* Created: February 3, 2013
*/
#include <cv.h>
#include <highgui.h>
#include <cvblob.h>
#include "testcvblob.hpp"
using namespace cvb;
using namespace std;
// Test 4: OpenCV and cvBlob (w/o display, still image)
int DetectBlobsNoStillImage() {
/// Variables /////////////////////////////////////////////////////////
CvSize imgSize;
IplImage *image, *segmentated, *labelImg;
CvBlobs blobs;
unsigned int result = 0;
///////////////////////////////////////////////////////////////////////
image = cvLoadImage("colored_balls.jpg");
if (image == NULL) {
return -1;
}
imgSize = cvGetSize(image);
cout << endl << "Width (pixels): " << image->width;
cout << endl << "Height (pixels): " << image->height;
cout << endl << "Channels: " << image->nChannels;
cout << endl << "Bit Depth: " << image->depth;
cout << endl << "Image Data Size (kB): "
<< image->imageSize / 1024 << endl << endl;
segmentated = cvCreateImage(imgSize, 8, 1);
cvInRangeS(image, CV_RGB(155, 0, 0), CV_RGB(255, 130, 130), segmentated);
labelImg = cvCreateImage(cvGetSize(image), IPL_DEPTH_LABEL, 1);
result = cvLabel(segmentated, labelImg, blobs);
cvFilterByArea(blobs, 500, 1000000);
cout << endl << "Blob Count: " << blobs.size();
cout << endl << "Pixels Labeled: " << result << endl << endl;
cvReleaseBlobs(blobs);
cvReleaseImage(&labelImg);
cvReleaseImage(&segmentated);
cvReleaseImage(&image);
return 0;
}
// Test 3: OpenCV and cvBlob (w/ display, still image)
int DetectBlobsShowStillImage() {
/// Variables /////////////////////////////////////////////////////////
CvSize imgSize;
IplImage *image, *frame, *segmentated, *labelImg;
CvBlobs blobs;
unsigned int result = 0;
bool quit = false;
///////////////////////////////////////////////////////////////////////
cvNamedWindow("Processed Image", CV_WINDOW_AUTOSIZE);
cvMoveWindow("Processed Image", 750, 100);
cvNamedWindow("Image", CV_WINDOW_AUTOSIZE);
cvMoveWindow("Image", 100, 100);
image = cvLoadImage("colored_balls.jpg");
if (image == NULL) {
return -1;
}
imgSize = cvGetSize(image);
cout << endl << "Width (pixels): " << image->width;
cout << endl << "Height (pixels): " << image->height;
cout << endl << "Channels: " << image->nChannels;
cout << endl << "Bit Depth: " << image->depth;
cout << endl << "Image Data Size (kB): "
<< image->imageSize / 1024 << endl << endl;
frame = cvCreateImage(imgSize, image->depth, image->nChannels);
cvConvertScale(image, frame, 1, 0);
segmentated = cvCreateImage(imgSize, 8, 1);
cvInRangeS(image, CV_RGB(155, 0, 0), CV_RGB(255, 130, 130), segmentated);
cvSmooth(segmentated, segmentated, CV_MEDIAN, 7, 7);
labelImg = cvCreateImage(cvGetSize(frame), IPL_DEPTH_LABEL, 1);
result = cvLabel(segmentated, labelImg, blobs);
cvFilterByArea(blobs, 500, 1000000);
cvRenderBlobs(labelImg, blobs, frame, frame,
CV_BLOB_RENDER_BOUNDING_BOX | CV_BLOB_RENDER_TO_STD, 1.);
cvShowImage("Image", frame);
cvShowImage("Processed Image", segmentated);
while (!quit) {
char k = cvWaitKey(10)&0xff;
switch (k) {
case 27:
case 'q':
case 'Q':
quit = true;
break;
}
}
cvReleaseBlobs(blobs);
cvReleaseImage(&labelImg);
cvReleaseImage(&segmentated);
cvReleaseImage(&frame);
cvReleaseImage(&image);
cvDestroyAllWindows();
return 0;
}
// Test 6: Blob Tracking (w/o display, webcam feed)
int DetectBlobsNoVideo(int captureWidth, int captureHeight) {
/// Variables /////////////////////////////////////////////////////////
CvCapture *capture;
CvSize imgSize;
IplImage *image, *frame, *segmentated, *labelImg;
int picWidth, picHeight;
CvTracks tracks;
CvBlobs blobs;
CvBlob* blob;
unsigned int result = 0;
bool quit = false;
///////////////////////////////////////////////////////////////////////
capture = cvCaptureFromCAM(-1);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, captureWidth);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, captureHeight);
cvGrabFrame(capture);
image = cvRetrieveFrame(capture);
if (image == NULL) {
return -1;
}
imgSize = cvGetSize(image);
cout << endl << "Width (pixels): " << image->width;
cout << endl << "Height (pixels): " << image->height << endl << endl;
frame = cvCreateImage(imgSize, image->depth, image->nChannels);
while (!quit && cvGrabFrame(capture)) {
image = cvRetrieveFrame(capture);
cvConvertScale(image, frame, 1, 0);
segmentated = cvCreateImage(imgSize, 8, 1);
cvInRangeS(image, CV_RGB(155, 0, 0), CV_RGB(255, 130, 130), segmentated);
//Can experiment either or both
cvSmooth(segmentated, segmentated, CV_MEDIAN, 7, 7);
cvSmooth(segmentated, segmentated, CV_GAUSSIAN, 9, 9);
labelImg = cvCreateImage(cvGetSize(frame), IPL_DEPTH_LABEL, 1);
result = cvLabel(segmentated, labelImg, blobs);
cvFilterByArea(blobs, 500, 1000000);
cvRenderBlobs(labelImg, blobs, frame, frame, 0x000f, 1.);
cvUpdateTracks(blobs, tracks, 200., 5);
cvRenderTracks(tracks, frame, frame, 0x000f, NULL);
picWidth = frame->width;
picHeight = frame->height;
if (cvGreaterBlob(blobs)) {
blob = blobs[cvGreaterBlob(blobs)];
cout << "Blobs found: " << blobs.size() << endl;
cout << "Pixels labeled: " << result << endl;
cout << "center-x: " << blob->centroid.x
<< " center-y: " << blob->centroid.y
<< endl;
cout << "offset-x: " << ((picWidth / 2)-(blob->centroid.x))
<< " offset-y: " << (picHeight / 2)-(blob->centroid.y)
<< endl;
cout << "\n";
}
char k = cvWaitKey(10)&0xff;
switch (k) {
case 27:
case 'q':
case 'Q':
quit = true;
break;
}
}
cvReleaseBlobs(blobs);
cvReleaseImage(&labelImg);
cvReleaseImage(&segmentated);
cvReleaseImage(&frame);
cvReleaseImage(&image);
cvDestroyAllWindows();
cvReleaseCapture(&capture);
return 0;
}
// Test 5: Blob Tracking (w/ display, webcam feed)
int DetectBlobsShowVideo(int captureWidth, int captureHeight) {
/// Variables /////////////////////////////////////////////////////////
CvCapture *capture;
CvSize imgSize;
IplImage *image, *frame, *segmentated, *labelImg;
CvPoint pt1, pt2, pt3, pt4, pt5, pt6;
CvScalar red, green, blue;
int picWidth, picHeight, thickness;
CvTracks tracks;
CvBlobs blobs;
CvBlob* blob;
unsigned int result = 0;
bool quit = false;
///////////////////////////////////////////////////////////////////////
cvNamedWindow("Processed Video Frames", CV_WINDOW_AUTOSIZE);
cvMoveWindow("Processed Video Frames", 750, 400);
cvNamedWindow("Webcam Preview", CV_WINDOW_AUTOSIZE);
cvMoveWindow("Webcam Preview", 200, 100);
capture = cvCaptureFromCAM(1);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, captureWidth);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, captureHeight);
cvGrabFrame(capture);
image = cvRetrieveFrame(capture);
if (image == NULL) {
return -1;
}
imgSize = cvGetSize(image);
cout << endl << "Width (pixels): " << image->width;
cout << endl << "Height (pixels): " << image->height << endl << endl;
frame = cvCreateImage(imgSize, image->depth, image->nChannels);
while (!quit && cvGrabFrame(capture)) {
image = cvRetrieveFrame(capture);
cvFlip(image, image, 1);
cvConvertScale(image, frame, 1, 0);
segmentated = cvCreateImage(imgSize, 8, 1);
//Blue paper
cvInRangeS(image, CV_RGB(49, 69, 100), CV_RGB(134, 163, 216), segmentated);
//Green paper
//cvInRangeS(image, CV_RGB(45, 92, 76), CV_RGB(70, 155, 124), segmentated);
//Can experiment either or both
cvSmooth(segmentated, segmentated, CV_MEDIAN, 7, 7);
cvSmooth(segmentated, segmentated, CV_GAUSSIAN, 9, 9);
labelImg = cvCreateImage(cvGetSize(frame), IPL_DEPTH_LABEL, 1);
result = cvLabel(segmentated, labelImg, blobs);
cvFilterByArea(blobs, 500, 1000000);
cvRenderBlobs(labelImg, blobs, frame, frame, CV_BLOB_RENDER_COLOR, 0.5);
cvUpdateTracks(blobs, tracks, 200., 5);
cvRenderTracks(tracks, frame, frame, CV_TRACK_RENDER_BOUNDING_BOX, NULL);
red = CV_RGB(250, 0, 0);
green = CV_RGB(0, 250, 0);
blue = CV_RGB(0, 0, 250);
thickness = 1;
picWidth = frame->width;
picHeight = frame->height;
pt1 = cvPoint(picWidth / 2, 0);
pt2 = cvPoint(picWidth / 2, picHeight);
cvLine(frame, pt1, pt2, red, thickness);
pt3 = cvPoint(0, picHeight / 2);
pt4 = cvPoint(picWidth, picHeight / 2);
cvLine(frame, pt3, pt4, red, thickness);
cvShowImage("Webcam Preview", frame);
cvShowImage("Processed Video Frames", segmentated);
if (cvGreaterBlob(blobs)) {
blob = blobs[cvGreaterBlob(blobs)];
pt5 = cvPoint(picWidth / 2, picHeight / 2);
pt6 = cvPoint(blob->centroid.x, blob->centroid.y);
cvLine(frame, pt5, pt6, green, thickness);
cvCircle(frame, pt6, 3, green, 2, CV_FILLED, 0);
cvShowImage("Webcam Preview", frame);
cvShowImage("Processed Video Frames", segmentated);
cout << "Blobs found: " << blobs.size() << endl;
cout << "Pixels labeled: " << result << endl;
cout << "center-x: " << blob->centroid.x
<< " center-y: " << blob->centroid.y
<< endl;
cout << "offset-x: " << ((picWidth / 2)-(blob->centroid.x))
<< " offset-y: " << (picHeight / 2)-(blob->centroid.y)
<< endl;
cout << "\n";
}
char k = cvWaitKey(10)&0xff;
switch (k) {
case 27:
case 'q':
case 'Q':
quit = true;
break;
}
}
cvReleaseBlobs(blobs);
cvReleaseImage(&labelImg);
cvReleaseImage(&segmentated);
cvReleaseImage(&frame);
cvReleaseImage(&image);
cvDestroyAllWindows();
cvReleaseCapture(&capture);
return 0;
}

Tests 1-2 (testfps.hpp)

// -*- C++ -*-
/*
* File: testfps.hpp
* Author: Gary Stafford
* Created: February 3, 2013
*/
#ifndef TESTFPS_HPP
#define TESTFPS_HPP
int TestFpsNoVideo(int captureWidth, int captureHeight);
int TestFpsShowVideo(int captureWidth, int captureHeight);
#endif /* TESTFPS_HPP */

Tests 1-2 (testfps.cpp)

/*
* File: testfps.cpp
* Author: Gary Stafford
* Description: Test the fps of a webcam using OpenCV
* Created: February 3, 2013
*/
#include <cv.h>
#include <highgui.h>
#include <time.h>
#include <stdio.h>
#include "testfps.hpp"
using namespace std;
// Test 2: OpenCV (w/o display, webcam feed)
int TestFpsNoVideo(int captureWidth, int captureHeight) {
IplImage* frame;
CvCapture* capture = cvCreateCameraCapture(-1);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, captureWidth);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, captureHeight);
time_t start, end;
double fps, sec;
int counter = 0;
char k;
time(&start);
while (1) {
frame = cvQueryFrame(capture);
time(&end);
++counter;
sec = difftime(end, start);
fps = counter / sec;
printf("FPS = %.2f\n", fps);
if (!frame) {
printf("Error");
break;
}
k = cvWaitKey(10)&0xff;
if (k == 27 || k == 'q' || k == 'Q') { // Esc, 'q', or 'Q' exits the loop
break;
}
}
cvReleaseCapture(&capture);
return 0;
}
// Test 1: OpenCV (w/ display, webcam feed)
int TestFpsShowVideo(int captureWidth, int captureHeight) {
IplImage* frame;
CvCapture* capture = cvCreateCameraCapture(-1);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, captureWidth);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, captureHeight);
cvNamedWindow("Webcam Preview", CV_WINDOW_AUTOSIZE);
cvMoveWindow("Webcam Preview", 300, 200);
time_t start, end;
double fps, sec;
int counter = 0;
char k;
time(&start);
while (1) {
frame = cvQueryFrame(capture);
time(&end);
++counter;
sec = difftime(end, start);
fps = counter / sec;
printf("FPS = %.2f\n", fps);
if (!frame) {
printf("Error");
break;
}
cvShowImage("Webcam Preview", frame);
k = cvWaitKey(10)&0xff;
if (k == 27 || k == 'q' || k == 'Q') { // Esc, 'q', or 'Q' exits the loop
break;
}
}
cvDestroyWindow("Webcam Preview");
cvReleaseCapture(&capture);
return 0;
}

Compiling Locally on the Raspberry Pi

After writing the code, the first big challenge was cross-compiling the native C++ code, written on Intel IA-32 and 64-bit x86-64 processor-based laptops, to run on the Raspberry Pi’s ARM architecture. After failing to successfully cross-compile the C++ source code using crosstool-NG, mostly due to my lack of cross-compiling experience, I resorted to using g++ to compile the C++ source code directly on the Raspberry Pi.

First, I had to properly install the various CV libraries and the compiler on the Raspberry Pi, which itself is a bit daunting.

Compiling OpenCV 2.4.3 from source code on the Raspberry Pi took an astounding 8 hours. Even though compiling the C++ source code takes longer on the Raspberry Pi, I could be assured the compiled code would run locally. Below are the commands that I used to transfer and compile the C++ source code on my Raspberry Pi.

Copy and Compile Commands

scp *.jpg *.cpp *.h {your-pi-user}@{your.ip.address}:your/file/path/
ssh {your-pi-user}@{your.ip.address}
cd ~/your/file/path/
g++ `pkg-config opencv cvblob --cflags --libs` testfps.cpp testcvblob.cpp main.cpp -o FpsTest -v
./FpsTest

Compiling Program on Raspberry Pi

Special Note About cvBlob on ARM

At first I had given up on cvBlob working on the Raspberry Pi. All the cvBlob tests I ran, no matter how simple, continued to hang on the Raspberry Pi after working perfectly on my laptop. I had narrowed the problem down to the ‘cvLabel’ method, but was unable to resolve it. However, I recently discovered a documented bug on the cvBlob website. It concerned cvBlob and the very same ‘cvLabel’ method on ARM-based devices (ARM = Raspberry Pi!). After making a minor modification to cvBlob’s ‘cvlabel.cpp’ source code, as directed in the bug post, and re-compiling on the Raspberry Pi, the tests worked perfectly.

Testing OpenCV and cvBlob

The code contains three pairs of tests (six total), as follows:

  1. OpenCV (w/ live webcam feed)
    Determine if OpenCV is installed and functioning properly with the compiled C++ code. Capture a webcam feed using OpenCV, and display the feed and frame rate (fps).
  2. OpenCV (w/o live webcam feed)
    Same as Test #1, but only print the frame rate (fps). The computer doesn’t need to display the video feed to process the data. More importantly, displaying the webcam’s feed might unnecessarily tax the computer’s processor and GPU.
  3. OpenCV and cvBlob (w/ live webcam feed)
    Determine if OpenCV and cvBlob are installed and functioning properly with the compiled C++ code. Detect and display all objects (blobs) in a specific red color range, contained in a static jpeg image.
  4. OpenCV and cvBlob (w/o live webcam feed)
    Same as Test #3, but only print some basic information about the static image and the number of blobs detected. Again, the computer doesn’t need to display the image to process the data.
  5. Blob Tracking (w/ live webcam feed)
    Detect, track, and display all objects (blobs) in a specific blue color range, along with the largest blob’s positional data. Captured with a webcam, using OpenCV and cvBlob.
  6. Blob Tracking (w/o live webcam feed)
    Same as Test #5, but only display the largest blob’s positional data. Again, the computer doesn’t need to display the webcam feed to process the data. The feed unnecessarily taxes the computer’s processor, which is already consumed with detecting and tracking the blobs. The blob’s positional data is sent to the robot and used by its targeting system to position its shooting platform.

The Program

There are two ways to run this program. First, from the command line you can call the application and pass in three parameters. The parameters include:

  1. Test method you want to run (1-6)
  2. Width of the webcam capture window in pixels
  3. Height of the webcam capture window in pixels.

An example would be ‘./FpsTest 2 640 480’ or ‘./FpsTest 5 320 240’.

The second method is to run the program without passing in any parameters. In that case, the program will prompt you to input the test number and other parameters on-screen.

Input Options for Application

Test 1: Laptop versus Raspberry Pi

Test 1: Displaying Webcam Feed using OpenCV (laptop)

Test 1: Displaying Webcam Feed using OpenCV (Raspberry Pi)

Test 3: Laptop versus Raspberry Pi

Test 3: Detecting Red Color Range in Static Image using OpenCV and cvBlob (laptop)

Test 3: Detecting Red Color Range in Static Image using OpenCV and cvBlob (Raspberry Pi)

Test 5: Laptop versus Raspberry Pi

Test 5: Detecting Objects within Blue Color Range using OpenCV and cvBlob (laptop)

Test 5: Detecting Objects within Blue Color Range using OpenCV and cvBlob (Raspberry Pi)

The Results

Each test was first run on two Linux-based laptops, with Intel 32-bit and 64-bit architectures, and with two different USB webcams. The laptops were used to develop and test the code, as well as provide a baseline for application performance. Many factors can dramatically affect the application’s ability to do image processing. They include the computer’s processor(s), RAM, HDD, GPU, USB, Operating System, and the webcam’s video capture size, compression ratio, and frame-rate. There are significant differences in all these elements when comparing an average laptop to the Raspberry Pi.

Frame-rates on the Intel processor-based Ubuntu laptops easily performed at or beyond the maximum 30 fps rate of the webcams, at 640 x 480 pixels. On a positive note, the Raspberry Pi was able to compile and execute the tests of OpenCV and cvBlob (see bug noted at end of article). Unfortunately, at least in my tests, the Raspberry Pi could not achieve more than 1.5 – 2 fps at most, even in the most basic tests, and at a reduced capture size of 320 x 240 pixels. This can be seen in the first and second screen-grabs of Test #1, above. Although I’m sure there are ways to improve the code and optimize the image capture, the results were much too slow to provide accurate, real-time data to the robot’s targeting system.

Links of Interest

Static Test Images Free from: http://www.rgbstock.com/

Great Website for OpenCV Samples: http://opencv-code.com/

Another Good Website for OpenCV Samples: http://opencv-srf.blogspot.com/2010/09/filtering-images.html

cvBlob Code Sample: https://code.google.com/p/cvblob/source/browse/samples/red_object_tracking.cpp

Detecting Blobs with cvBlob: http://8a52labs.wordpress.com/2011/05/24/detecting-blobs-using-cvblobs-library/

Best Post/Script to Install OpenCV on Ubuntu and Raspberry Pi: http://jayrambhia.wordpress.com/2012/05/02/install-opencv-2-3-1-and-simplecv-in-ubuntu-12-04-precise-pangolin-arch-linux/

Measuring Frame-rate with OpenCV: http://8a52labs.wordpress.com/2011/05/19/frames-per-second-in-opencv/

OpenCV and Raspberry Pi: http://mitchtech.net/raspberry-pi-opencv/


Extending the Life of Older Windows Laptops with Linux

Like thousands of technology-driven consumers, our family recently upgraded an older laptop. As a Christmas gift, my daughter received an Intel Core i3 Windows laptop. The new laptop replaced her well-worn Intel Core 2 laptop. That old Core 2 was my previous development laptop. It had been upgraded from Windows XP to Windows 7 Pro. It also had its RAM increased and a bigger hard drive installed. But after a few years of heavy use, it seemed painfully slow, bloated with endless software installs and Windows updates. It was certainly too under-powered to support an upgrade to Windows 8.

With four family members usually upgrading their laptops every 3-4 years, my daughter’s old laptop was destined to join a growing stack of ‘retired’ electronics in our basement — until today. In just a few hours, I transformed that laptop into a ‘mean, lean, open-source-breathing, Ubuntu-powered machine’. Okay, well maybe not that dramatic, but a significant increase in performance and lifespan nonetheless, at no cost and with minimal effort.

I am a huge fan of Oracle VM VirtualBox. I have three other machines running Linux VMs on Windows. However, a VM is no substitute for a dedicated system, especially on a laptop. There are increased system requirements to run the host’s OS, along with VirtualBox, and the VM’s OS. At the same time, there are often limitations on the VM’s OS. I had wanted a dedicated Ubuntu laptop for a while, and this was my chance.

Fresh Install of Ubuntu on Previous Windows-based Dell Laptop

Performance with Ubuntu

My thoughts after a month with the laptop? First the cons. Some issues transcend the operating system. Although the battery life has improved slightly with Linux, the battery is approaching end-of-life and needs to be replaced. Also, the laptop continues to run hotter than it should. However, it doesn’t suffer the occasional shutdowns from overheating and constant loss of wireless that it did with Windows. Lastly, the screen is starting to lose its brightness, but is still usable.

The pros? Limited memory is not as much an issue with Ubuntu as it is with Windows. Ubuntu starts up in a fraction of the time it took Windows. All the laptop’s components worked out of the box on Ubuntu with no driver issues. All the applications I use on Windows are also available on Ubuntu, or there are good alternatives. Lastly, using free services such as Dropbox, Ubuntu One, and Google Drive, I can author and store all my documents in the Cloud, and transfer them seamlessly between Linux and Windows when necessary. What’s not to love?

New System Performance

Overall, I’m satisfied enough to use it as my personal laptop. I carry the laptop with me daily for Java, C++, and Python development, emailing, blogging, and web-surfing. The next time you’re tempted to retire an old laptop, consider extending its life with Linux, and saving yourself the cost of a new machine.


Build Automation – Calling GlassFish’s asadmin and Apache Ant Directly

Automating deployment of applications from NetBeans to GlassFish is easy using Apache Ant and GlassFish’s asadmin utility. Calling these two applications directly, without requiring the complete file path, can be a real time-saver. With Ubuntu (Linux), as with Windows, this can be done by adding their file paths to the $PATH environment variable.

Below is an example of adding both asadmin and Ant to the .bashrc file in your home directory. To open the .bashrc file, open the Terminal and enter ‘sudo gedit ~/.bashrc‘. You will be prompted for your password. When the .bashrc file opens, enter the following text at the end of the .bashrc file. Make sure you change the file paths to match your local system if they are different.

export ANT_HOME=~/netbeans-7.2/java/ant
export ASADMIN_HOME=~/glassfish-3.1.2.2/glassfish
export PATH=$PATH:$ASADMIN_HOME/bin:$ANT_HOME/bin

Close the .bashrc file and type ‘asadmin’ at the Terminal window prompt. You should see the response below. Type ‘exit’ to get out of asadmin. Next, type ‘ant’. Again, you should see the response below. This means both applications are now available directly, from any directory or from within any application, like Jenkins or Hudson.
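
A quick way to verify both tools resolve from the updated $PATH is to reload the .bashrc file and call each tool’s version command:

# reload .bashrc, then confirm both executables are found on the $PATH
source ~/.bashrc
which asadmin ant
asadmin version
ant -version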

Adding GlassFish’s asadmin and Apache Ant to $Path Environmental Variable

You can also add these variables in other ways, including methods that set them for all users, in addition to just yourself.


Returning JSONP from Java EE RESTful Web Services Using jQuery, Jersey, and GlassFish – Part 2 of 2

Create a Jersey-specific Java EE RESTful web service, and an HTML-based client to call the service and display JSONP. Test and deploy the service and the client to different remote instances of GlassFish.

Background

In part 1 of this series, we created a Jersey-specific RESTful web service from a database using NetBeans. The service returns JSONP in addition to JSON and XML. The service was deployed to a GlassFish domain, running on a Windows box. On this same box is the SQL Server instance, running the Adventure Works database, from which the service obtains data, via the entity class.

Objectives

In part two of this series, we will create a simple web client to consume and display the JSONP returned by the RESTful web service. There are many options available for creating a service consumer (client) depending on your development platform and project requirements. We will keep it simple, no complex, complied code, just HTML and JavaScript with jQuery, the well-known JavaScript library.

We will host the client on a separate GlassFish domain, running on an Ubuntu Linux VM using Oracle’s VM VirtualBox. This is a different machine than the service was installed on. When opened by the end-user in a web browser, the client files, including the JavaScript file that calls the service, are downloaded to the end-user’s machine. This will simulate a typical cross-domain situation, where a client application needs to consume RESTful web services from a remote source. This is not allowed by the same origin policy, but is overcome by returning JSONP to the client, which wraps the JSON payload in a function call.

Here are the high-level steps we will walk-through in part two:

  1. Create a simple HTML client using jQuery and ajax to call the RESTful web service;
  2. Add jQuery functionality to parse and display the JSONP returned by the service;
  3. Deploy the client to a separate remote instance of GlassFish using Apache Ant;
  4. Test the client’s ability to call the service across domains and display JSONP.

Creating the RESTful Web Service Client

New NetBeans Web Application Project
Create a new Java Web Application project in NetBeans. Name the project ‘JerseyRESTfulClient’. The choice of GlassFish server and domain where the project will be deployed is unimportant. We will use Apache Ant to deploy the client when we finish building the project. By default, I chose my local instance of GlassFish, for testing purposes.

Create a New Web Application Project in NetBeans

Name and Location of New Web Application Project

Server and Settings of New Web Application Project

Optional Frameworks to Include in New Web Application Project

View of New Web Application Project in NetBeans

Adding Files to Project
The final client project will contain four new files:

  1. employees.html – HTML web page that displays a list of employees;
  2. employees.css – CSS used by employees.html;
  3. employees.js – JavaScript code used by employees.html;
  4. jquery-1.8.2.min.js – jQuery 1.8.2 JavaScript library, minified.

First, we need to download and install jQuery. At the time of this post, jQuery 1.8.2 was the latest version. I installed the minified version (jquery-1.8.2.min.js) to save space.

Next, we will create the three new files (employees.html, employees.css, and employees.js), using the code below. When finished, we need to place all four files into the ‘Web Pages’ folder. The final project should look like:

Final Client Project View

HTML
The HTML file is the smallest of the three files. The HTML page references the CSS file, the JavaScript file, and the jQuery library file. The CSS file provides the presentation (look and feel), and the JavaScript file, using jQuery, dynamically provides much of the content that the HTML page would otherwise contain.

<!DOCTYPE html>
<html>
    <head>
        <title>Employee List</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <link type="text/css" rel="stylesheet" href="employees.css" />
        <script src="jquery-1.8.2.min.js" type="text/javascript"></script>
        <script src="employees.js" type="text/javascript"></script>
    </head>
    <body>
        <div id="pageTitle">Employee List</div>
        <div id="employeeList"></div>
    </body>
</html>

Cascading Style Sheets (CSS)
The CSS file is also pretty straightforward. The pageTitle and employeeList id selectors and the type selectors are used directly by the HTML page. The class selectors are all applied to the page by jQuery, in the JavaScript file.

body {
    font-family: sans-serif;
    font-size: small;
    padding-left: 6px;
}

span {
    padding: 6px;
    display: inline-block;
}

div {
    border-bottom: lightgray solid 1px;
}

#pageTitle {
    font-size: medium;
    font-weight: bold;
    padding: 12px 0px 12px 0px;
    border: none;
}

#employeeList {
    float: left;
    border: gray solid 1px;
}

.empId {
    width: 50px;
    text-align: center;
    border-right: lightgray solid 1px;
}

.name {
    width: 200px;
    border-right: lightgray solid 1px;
}

.jobTitle {
    width: 250px;
}

.header {
    font-weight: bold;
    border-bottom: gray solid 1px;
}

.even{
    background-color: rgba(0, 255, 128, 0.09);
}

.odd {
    background-color: rgba(0, 255, 128, 0.05);
}

.last {
    border-bottom: none;
}

jQuery and JavaScript
The JavaScript file is where all the magic happens. There are two primary functions. First, getEmployees, which calls the jQuery.ajax() method. According to jQuery’s website, the jQuery Ajax method performs an asynchronous HTTP (Ajax) request. In this case, it calls our RESTful web service and returns JSONP. The jQuery Ajax method uses an HTTP GET method to request the following service resource (URI):

http://[your-service's-glassfish-server-name]:[your-service's-glassfish-domain-port]/JerseyRESTfulService/webresources/entities.vemployee/{from}/{to}/jsonp?callback={callback}.

The base (root) URI of the service in the URI above is as follows:

http://[server]:[port]/JerseyRESTfulService/webresources/entities.vemployee/

This is followed by a series of elements (nodes), {from}/{to}/jsonp, which together form a reference to a specific method in our service. As explained in the first post of this series, we include the /jsonp element to indicate we want to call the new findRangeJsonP method, which returns JSONP, as opposed to the findRange method, which returns JSON or XML. We pass the {from} path parameter a value of ‘0’ and the {to} path parameter a value of ‘10’.

Lastly, the method specifies the callback function name for the JSONP request, parseResponse, using the jsonpCallback setting. This value will be used instead of the random name automatically generated by jQuery. The callback function name is appended to the end of the URI as a query parameter. The final URL is as follows:

http://[server]:[port]/JerseyRESTfulService/webresources/entities.vemployee/0/10/jsonp?callback=parseResponse.

Note the use of the jsonpCallback setting is not required, or necessarily recommended by jQuery. Without it, jQuery generates a unique name, which makes it easier for jQuery to manage the requests and provide callbacks and error handling. This example will work fine if you exclude the jsonpCallback: "parseResponse" setting.

getEmployees = function () {
    $.ajax({
        cache: true,
        url: restfulWebServiceURI,
        data: "{}",
        type: "GET",
        jsonpCallback: "parseResponse",
        contentType: "application/javascript",
        dataType: "jsonp",
        error: ajaxCallFailed,
        failure: ajaxCallFailed,
        success: parseResponse
    });
};
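
As a quick sanity check outside the browser (the host and port are hypothetical; substitute your service’s GlassFish server name and port), you can request the same resource with cURL and see the raw callback wrapper in the response:

curl "http://localhost:8080/JerseyRESTfulService/webresources/entities.vemployee/0/10/jsonp?callback=parseResponse"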

Once we have successfully returned the JSONP, the jQuery Ajax method calls the parseResponse(data) function, passing the JSON to the data argument. The parseResponse function iterates through the employee objects using the jQuery.each() method. Each field of data is surrounded by span and div tags, and concatenated to the employeeList string variable. The string is appended to the div tag with the id of ‘employeeList’, using jQuery’s .append() method. The result is an HTML table-like grid of employee names, ids, and job titles, displayed on the employees.html page.

Lastly, we call the colorRows() function. This function uses jQuery’s .addClass(className) to assign CSS classes to objects in the DOM. The classes are added to stylize the grid with alternating row colors and other formatting.

parseResponse = function (data) {
    var employee = data.vEmployee;

    var employeeList = "";

    employeeList = employeeList.concat("<div class='header'>" +
        "<span class='empId'>Id</span>" +
        "<span class='name'>Employee Name</span>" +
        "<span class='jobTitle'>Job Title</span>" +
        "</div>");

    $.each(employee, function(index, employee) {
        employeeList = employeeList.concat("<div class='employee'>" +
            "<span class='empId'>" +
            employee.businessEntityID +
            "</span><span class='name'>" +
            employee.firstName + " " + employee.lastName +
            "</span><span class='jobTitle'>" +
            employee.jobTitle +
            "</span></div>");
    });

    $("#employeeList").empty();
    $("#employeeList").append(employeeList);
    colorRows();
};

Here are the complete JavaScript file contents:

// Immediate function
(function () {
    "use strict";
    
    var restfulWebServiceURI, getEmployees, ajaxCallFailed, colorRows, parseResponse;
    
    restfulWebServiceURI = "http://[your-service's-server-name]:[your-service's-port]/JerseyRESTfulService/webresources/entities.vemployee/0/10/jsonp";
    
    // Execute after the DOM is fully loaded
    $(document).ready(function () {
        getEmployees();
    });

    // Retrieve Employee List as JSONP
    getEmployees = function () {
        $.ajax({
            cache: true,
            url: restfulWebServiceURI,
            data: "{}",
            type: "GET",
            jsonpCallback: "parseResponse",
            contentType: "application/javascript",
            dataType: "jsonp",
            error: ajaxCallFailed,
            failure: ajaxCallFailed,
            success: parseResponse
        });          
    };
    
    // Called if ajax call fails
    ajaxCallFailed = function (jqXHR, textStatus) { 
        console.log("Error: " + textStatus);
        console.log(jqXHR);
        $("#employeeList").empty();
        $("#employeeList").append("Error: " + textStatus);
    };
            
    // Called if ajax call is successful
    parseResponse = function (data) {
        var employee = data.vEmployee;   
        
        var employeeList = "";
        
        employeeList = employeeList.concat("<div class='header'>" +
            "<span class='empId'>Id</span>" + 
            "<span class='name'>Employee Name</span>" + 
            "<span class='jobTitle'>Job Title</span>" + 
            "</div>"); 
        
        $.each(employee, function(index, employee) {
            employeeList = employeeList.concat("<div class='employee'>" +
                "<span class='empId'>" +
                employee.businessEntityID + 
                "</span><span class='name'>" +
                employee.firstName + " " + employee.lastName +
                "</span><span class='jobTitle'>" +
                employee.jobTitle + 
                "</span></div>");
        });
        
        $("#employeeList").empty();
        $("#employeeList").append(employeeList);
        colorRows();
    };
    
    // Styles the Employee List
    colorRows = function(){
        $("#employeeList .employee:odd").addClass("odd");
        $("#employeeList .employee:even").addClass("even");
        $("#employeeList .employee:last").addClass("last");
    };
} ());

Deployment to GlassFish
To deploy the RESTful web service client to GlassFish, run the following Apache Ant target. The target first calls the clean and dist targets to build the .war file. Then, the target calls GlassFish’s asadmin deploy command. It specifies the remote GlassFish server, admin port, admin user, admin password (in the password file), a secure or insecure connection, the application name and context root, and the .war file to be deployed. Note that the server is different for the client than it was for the service in part 1 of the series.

<target name="glassfish-deploy-remote" depends="clean, dist"
        description="Build distribution (WAR) and deploy to GlassFish">
    <exec failonerror="true" executable="cmd" description="asadmin deploy">
        <arg value="/c" />
        <arg value="asadmin --host=[your-client's-glassfish-server-name] 
            --port=[your-client's-glassfish-domain-admin-port]
            --user=admin --passwordfile=pwdfile --secure=false
            deploy --force=true --name=JerseyRESTfulClient
            --contextroot=/JerseyRESTfulClient dist\JerseyRESTfulClient.war" />
    </exec>
</target>
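
To invoke the target from the command line, assuming the target lives in the project’s build.xml that NetBeans generated, run Ant from the project’s root directory:

# run the deploy target from the project's root directory
ant glassfish-deploy-remote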

Although the client application does not require any Java code, JSP pages, or Servlets, I chose to use NetBeans’ Web Application project template to create the client and packaged it as a .war file to make deployment to GlassFish easier. You could just install the four client files (jQuery, HTML, CSS, and JavaScript) on Apache, IIS, or any other web server as a simple HTML site.

Deploy Client Application to Remote GlassFish Domain Using Ant Target

Once the application is deployed to GlassFish, you should see the ‘JerseyRESTfulClient’ listed under the Applications tab within the remote server domain.

Client Application Deployed to Remote GlassFish Domain

We will call the client application from our browser. The client application, whose files are downloaded and are now local on our machine, will in turn call the service. The URL to call the client is: http://[your-client's-glassfish-server-name]:[your-client's-glassfish-domain-port]/JerseyRESTfulClient/employees.html (see call-out 1, in the screen-grab, below).

Using Firefox with Firebug, we can observe a few important items once the results are displayed (see the screen-grab, below):

  1. The four client files (jQuery, HTML, CSS, and JavaScript) are cached after the first time the client URL loads, but the jQuery Ajax service call is never cached (call-out 2);
  2. All the client application files are loaded from one domain, while the service is called from another domain (call-out 3);
  3. The ‘parseResponse’ callback function in the JSONP response payload wraps the JSON data (call-out 4).

Employee List Displayed by Client Application in Firefox (showing Raw Response in Firebug)

The JSONP returned by the service to the client (abridged for length):

parseResponse({"vEmployee":[{"addressLine1":"4350 Minute Dr.","businessEntityID":"1","city":"Newport Hills","countryRegionName":"United States","emailAddress":"ken0@adventure-works.com","emailPromotion":"0","firstName":"Ken","jobTitle":"Chief Executive Officer","lastName":"Sánchez","middleName":"J","phoneNumber":"697-555-0142","phoneNumberType":"Cell","postalCode":"98006","stateProvinceName":"Washington"},{"addressLine1":"7559 Worth Ct.","businessEntityID":"2","city":"Renton","countryRegionName":"United States","emailAddress":"terri0@adventure-works.com","emailPromotion":"1","firstName":"Terri","jobTitle":"Vice President of Engineering","lastName":"Duffy","middleName":"Lee","phoneNumber":"819-555-0175","phoneNumberType":"Work","postalCode":"98055","stateProvinceName":"Washington"},{...}]})

The JSON passed to the parseResponse(data) function’s data argument (abridged for length):

{"vEmployee":[{"addressLine1":"4350 Minute Dr.","businessEntityID":"1","city":"Newport Hills","countryRegionName":"United States","emailAddress":"ken0@adventure-works.com","emailPromotion":"0","firstName":"Ken","jobTitle":"Chief Executive Officer","lastName":"Sánchez","middleName":"J","phoneNumber":"697-555-0142","phoneNumberType":"Cell","postalCode":"98006","stateProvinceName":"Washington"},{"addressLine1":"7559 Worth Ct.","businessEntityID":"2","city":"Renton","countryRegionName":"United States","emailAddress":"terri0@adventure-works.com","emailPromotion":"1","firstName":"Terri","jobTitle":"Vice President of Engineering","lastName":"Duffy","middleName":"Lee","phoneNumber":"819-555-0175","phoneNumberType":"Work","postalCode":"98055","stateProvinceName":"Washington"},{...}]}

Firebug also allows us to view the JSON in a more structured and object-oriented view:

Firefox Showing formatted JSON Data Using Firebug

Conclusion

We have successfully built and deployed a RESTful web service to one GlassFish domain, capable of returning JSONP. We have also built and deployed an HTML client to another GlassFish domain, capable of calling the service and displaying the JSONP. The service and client in this example have very minimal functionality. However, the service can easily be scaled to include multiple entities and RESTful services. The client’s capability can be expanded to perform a full array of CRUD operations on the database, through the RESTful web service(s).


Returning JSONP from Java EE RESTful Web Services Using jQuery, Jersey, and GlassFish – Part 1 of 2

Create a Jersey-specific Java EE RESTful web service and an HTML-based client to call the service and display JSONP. Test and deploy the service and the client to different remote instances of GlassFish.

Background

According to Wikipedia, JSONP (JSON with Padding) is a complement to the base JSON (JavaScript Object Notation) data format. It provides a method to request data from a server in a different domain, something prohibited by typical web browsers because of the same origin policy.

According to jersey.java.net, Jersey is the open source, production-quality JAX-RS (JSR 311) Reference Implementation for building RESTful web services on the Java platform. Jersey is a core component of GlassFish.

What do these two things have in common? One of the key features of Jersey is its ability to return JSONP. According to Oracle’s documentation, using Jersey, if a JSONWithPadding instance is returned by a resource method and the most acceptable media type is one of application/javascript, application/x-javascript, text/ecmascript, application/ecmascript, or text/jscript, then the object contained by the instance is serialized as JSON (if supported, using the application/json media type) and the result is wrapped in a JavaScript callback function, whose name by default is “callback”. Otherwise, the object is serialized directly according to the most acceptable media type. This means that a single instance can be used to produce the application/json and application/xml media types, in addition to the JavaScript media types listed above.

There are plenty of opinions on the Internet about the pros and cons of using JSONP over other alternatives to get around the same origin policy. Regardless of the cons, JSONP, with the help of Jersey, provides the ability to call a RESTful web service from a remote server, without a lot of additional coding or security considerations.

Objectives

Similar to GlassFish, Jersey is also tightly integrated into NetBeans. NetBeans provides the option to use Jersey-specific features when creating RESTful web services. According to the documentation, NetBeans will generate a web.xml deployment descriptor and register the RESTful services in that deployment descriptor, instead of generating an application configuration class. In this post, we will create a Jersey-specific RESTful web service from a database using NetBeans. The service will return JSONP in addition to JSON and XML.

In addition to creating the RESTful web service, in part 2 of this series, we will create a simple web client to display the JSONP returned by the service. There are many options available for creating clients, depending on your development platform and project requirements. We will keep it simple – no complex compiled code, just simple JavaScript using Ajax and jQuery, the well-known JavaScript library.

We will host the RESTful web service on one GlassFish domain, running on a Windows box, along with the SQL Server database. We will host the client on a second GlassFish domain, running on an Ubuntu Linux VM using Oracle’s VM VirtualBox. This is a different machine than the service was installed on. When opened by the end-user in a web browser, the client files, including the JavaScript file that calls the service, are downloaded to the end-user’s machine. This will simulate a typical cross-domain situation, where a client application needs to consume RESTful web services from a remote source. This is not allowed by the same origin policy, but is overcome by returning JSONP to the client, which wraps the JSON payload in a function call.

Demonstration

Here are the high-level steps we will walk-through in this two-part series of posts:

  1. In a new RESTful web service web application project,
    1. Create an entity class from the Adventure Works database using EclipseLink;
    2. Create a Jersey-specific RESTful web service from the entity class, using Jersey and JAXB;
    3. Add a new method to the service, which leverages Jersey and Jackson’s abilities to return JSONP;
    4. Deploy the RESTful web service to a remote instance of GlassFish, using Apache Ant;
    5. Test the RESTful web service using cURL.
  2. In a new RESTful web service client web application project,
    1. Create a simple HTML client using jQuery and Ajax to call the RESTful web service;
    2. Add jQuery functionality to parse and display the JSONP returned by the service;
    3. Deploy the client to a separate remote instance of GlassFish using Apache Ant;
    4. Test the client’s ability to call the service across domains and display JSONP.

To demonstrate the example in this post, I have the following applications installed, configured, and running in my development environment.

For the database we will use the Microsoft SQL Server 2008 R2 Adventure Works database I’ve used in the past few posts. For more on the Adventure Works database, see my post, ‘Convert VS 2010 Database Project to SSDT and Automate Publishing with Jenkins – Part 1/3’. Not using SQL Server? Once you’ve created your data source, most remaining steps in this post are independent of the database you choose, be it MySQL, Oracle, Microsoft SQL Server, Derby, etc.

For a full explanation of the use of Jersey and the Jackson JSON Processor, for non-Maven developers, as this post demonstrates, see this link to the Jersey 1.8 User Guide. It discusses several topics relevant to this article: Java Architecture for XML Binding (JAXB), JSON serialization, and natural JSON notation (or, convention). See this link from the User Guide, for more on natural JSON notation. Note this example does not implement natural JSON notation functionality.

Creating the RESTful Web Service

New NetBeans Web Application Project
Create a new Java Web Application project in NetBeans. Name the project; I named mine ‘JerseyRESTfulService’. The choice of GlassFish server and domain where the project will be deployed is unimportant. We will use Apache Ant to deploy the service when we finish building the project. By default, I chose my local instance of GlassFish, for testing purposes.

Create a New Web Application Project in NetBeans

Name and Location of New Web Application Project

Server and Settings of New Web Application Project

Optional Frameworks to Include in New Web Application Project

View of New Web Application Project in NetBeans

Create Entity Class from Database
Right-click on the project and select ‘New’ -> ‘Other…’. From the list of Categories, select ‘Persistence’. From the list of Persistence choices, choose ‘Entity Classes from Database’. Click Next.

Create Entity Classes from the Database

Before we can choose which database table from the Adventure Works database to use for the entity class, we must create a connection to the database – a SQL Server Data Source. Click on the Data Source drop down and select ‘New Data Source…’. Give the data source a Java Naming and Directory Interface (JNDI) name. I called mine ‘AdventureWorks_HumanResources’. Click on the ‘Database Connection’ drop down menu and select ‘New Database Connection…’.

Select Database Tables for Entity Classes (No Data Source Exists Yet)

Create and Name a New Data Source

This starts the ‘New Connection Wizard’. The first screen, ‘Locate Driver’, is where we point NetBeans to the Microsoft JDBC Driver 4.0 for SQL Server. Locate the sqljdbc4.jar file.

Add the Microsoft JDBC Driver 4.0 for SQL Server Jar File

On the next screen, ‘Customize the Connection’, input the required SQL Server information. The host is the machine your instance of SQL Server is installed on, such as ‘localhost’. The instance is the name of the SQL Server instance in which the Adventure Works database is installed, such as ‘Development’. Once you complete the form, click ‘Test Connection’. If it doesn’t succeed, check your settings again. Keep in mind, ‘localhost’ will only work if your SQL Server instance is local to the GlassFish server instance where the service will be deployed. If it is on a separate server, make sure to use that server’s IP address or domain name.

Configure New Database Connection

As I mentioned in an earlier post, the SQL Server Data Source forces you to select a single database schema. On the ‘Choose Database Schema’ screen, select the ‘HumanResources’ schema. The database tables you will be able to reference from your entity classes are limited to just this schema, when using this data source. To reference other schemas, you will need to create more data sources.

Select Human Resources Database Schema

Back in the ‘New Entity Classes from Database’ window, you will now have the ‘AdventureWorks’ data source selected. After a few seconds of processing, all ‘Available Tables’ within the ‘HumanResources’ schema are displayed. Choose ‘vEmployee(view)’. A database view is a virtual database table. Note the Entity ID message; we will need to do an extra step later on to use the entity class built from the database view.

Choice of Database Tables and Views from Human Resources Schema

Choose the ‘vEmployee(view)’ Database View

On the next screen, ‘Entity Classes’, in the ‘New Entity Classes from Database’ window, select or create the Package to place the individual entity classes into. I chose to call mine ‘entities’.

Select/Create the Package Location for the Entity Class

On the next screen, ‘Mapping Options’, choose ‘Fully Qualified Database Table Names’. Without this option selected, I have had problems trying to make the RESTful web services function properly. This is also the reason I chose to create the entity classes first, and then create the RESTful web services, separately. NetBeans has an option that combines these two tasks into a single step, by choosing ‘RESTful Web Services from Database’. However, the ‘Fully Qualified Database Table Names’ option is not available on the equivalent screen, using that process (at least in my version of NetBeans 7.2). I prefer the two-step approach.
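
To give a sense of the result, here is a minimal sketch of the class-level annotations this option produces; the exact attribute values shown are assumptions based on the Adventure Works database used here:

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table(name = "vEmployee", catalog = "AdventureWorks", schema = "HumanResources")
public class VEmployee implements Serializable {
    // fields, getters, and setters generated from the view's columns
}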

Select the ‘Fully Qualified Database Table Names’ Mapping Option

Click Finish. You have successfully created the SQL Server data source and entity classes.

Project View of New VEmployee Entity Class

If you recall, I mentioned a problem with the entity class we created from the database view. To avoid an error when you build and deploy the project to GlassFish, we need to make a small change to the VEmployee.java entity class. Entity classes need a unique identifier, a primary key (or Entity ID). Since this entity class was built from a database view, as opposed to a database table, it lacks a primary key. To fix this, annotate the businessEntityID field with @Id. This indicates that businessEntityID is the primary key (Entity ID) for the class. For this to work properly, the businessEntityID field must contain unique values. NetBeans will make the suggested correction for you, if you allow it.
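
As a sketch, the corrected field should look something like the following. The @Basic and @Column annotations are typical of NetBeans’ generated output and may vary slightly in your class; the @Id annotation is the actual fix:

@Id // marks businessEntityID as the primary key (Entity ID)
@Basic(optional = false)
@Column(name = "BusinessEntityID")
private Integer businessEntityID;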

Fix the Entity Class’s Missing Primary Key (Entity ID)

Entity Class With Primary Key (Entity ID)

The JPA Persistence Unit is found in the ‘persistence.xml’ file in the ‘Configuration Files’ folder. This file describes the Persistence Unit (PU). The PU serves to register the project’s persistable entity classes, which are referred to by JPA as ‘managed classes’.
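
For reference, the generated service class will reference the PU by name, through an injected EntityManager. A minimal sketch, assuming NetBeans’ default naming convention of the project name plus ‘PU’:

// javax.persistence.PersistenceContext injects the container-managed EntityManager
@PersistenceContext(unitName = "JerseyRESTfulServicePU")
private EntityManager em;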

View of New JPA Persistence Unit

The data source we created, which will be deployed to GlassFish, is referred to as a JDBC Resource and a JDBC Connection Pool. This information is stored in the ‘glassfish-resources.xml’ file.

View of New JDBC Resource and JDBC Connection Pool

Create RESTful Web Service
Now that we have a SQL Server Data Source and our entity class, we will create the RESTful web service. Right-click on the project and select ‘New’ -> ‘Other…’ -> ‘Persistence’ -> ‘RESTful Web Services from Entity Classes’. You will see the entity class we just created, from which to choose. Add the entity class.

Create RESTful Web Services from Entity Classes

Choose from List of Available Entity Classes

Choose the VEmployee Entity Class

On the next screen, select or create the Resource Package to store the service class in; I called mine ‘service’. Select the ‘Use Jersey Specific Features’ option.

Select/Create the Service’s Package Location and Select the Option to ‘Use Jersey Specific Features’

That’s it. You now have a Jersey-specific RESTful web service and the corresponding Enterprise Bean and Façade service class in the project.

Project View of New RESTful Web Service and Associated Files

NetBeans provides an easy way to test the RESTful web services, locally. Right-click on the ‘RESTful Web Services’ project folder within the main project, and select ‘Test RESTful Web Services’. Select the first option, ‘Locally Generated Test Client’, in the ‘Configure REST Test Client’ pop-up window. NetBeans will use the locally configured GlassFish instance to deploy and test the service.

NetBeans opens a web browser window and displays the RESTful URIs (Uniform Resource Identifiers) for the service in a tree structure. There is a parent URI, ‘entities.vemployee’. Selecting this URI will return all employees from the vEmployee database view. The ‘entities.vemployee’ URI has additional child URIs grouped under it, including ‘{id}’, ‘count’, and ‘{from/to}’, each mapped to a separate method in the service class.
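
As an abridged sketch, the generated service class maps these URIs to methods roughly as follows; the method bodies and annotations may differ slightly, depending on your NetBeans version:

@Stateless
@Path("entities.vemployee")
public class VEmployeeFacadeREST extends AbstractFacade<VEmployee> {

    // GET .../entities.vemployee/{id} - returns a single employee by Entity ID
    @GET
    @Path("{id}")
    @Produces({"application/xml", "application/json"})
    public VEmployee find(@PathParam("id") Integer id) {
        return super.find(id);
    }

    // GET .../entities.vemployee/count - returns the row count as plain text
    @GET
    @Path("count")
    @Produces("text/plain")
    public String countREST() {
        return String.valueOf(super.count());
    }

    // additional generated methods: findAll, findRange, create, edit, remove
}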

Click on the ‘{id}’ URI. Choose the HTTP ‘GET()’ request method from the drop-down, enter ‘1’ for ‘id’, and click the ‘Test’ button. The service should return a status of ‘200 (OK)’, along with XML output containing information on the Adventure Works employee with that ID. Change the MIME type to ‘application/json’. This should return the same result, formatted as JSON. Congratulations, the RESTful web service has just returned data to your browser from the SQL Server Adventure Works database, using the entity class and data source you created.

Are they URIs or URLs? I found this excellent post that does a very good job explaining the difference between the URL (how to get there) and the URI (the resource), which is part of the URL.

Test the RESTful Web Service Locally in NetBeans (XML Response Shown)

Test the RESTful Web Service Locally in NetBeans (JSON Response Shown)

Using Jersey for JSONP
GlassFish comes with jersey-core.jar installed. In order to deliver JSONP, we also need to import and use the com.sun.jersey.api.json.JSONWithPadding class from jersey-json.jar. I downloaded and installed version 1.8. You can download the jar from several locations; I chose to download it from www.java2.com. You can also download it from the download.java.net Maven2 repository.
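
For reference, the new method we will add shortly relies on the following imports in the service class; the Jersey 1.x package names shown assume the jars described in this section:

import com.sun.jersey.api.json.JSONWithPadding; // from jersey-json.jar
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.GenericEntity;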

Add the Jersey JSON Jar File to the Project

The com.sun.jersey.api.json.JSONWithPadding class has dependencies on two Jackson JSON Processor jars: jackson-core-asl-1.9.8.jar and jackson-mapper-asl-1.9.8.jar. At the time of this post, I downloaded the latest 1.9.8 versions from the grepcode.com Maven2 repository.

Add the two Jackson JSON Processor Jar Files to the Project

Create New JSONP Method

NetBeans creates several default methods in the VEmployeeFacadeREST class. One of those is the findRange method. The method accepts two integer parameters, from and to. The parameter values are extracted from the URL (JAX-RS @Path annotation); they are called path parameters (@PathParam). The method returns a List of VEmployee objects (List<VEmployee>). The findRange method can return two MIME types, XML and JSON (@Produces). The List<VEmployee> is serialized in either format and returned to the caller.

@GET
@Path("{from}/{to}")
@Produces({"application/xml", "application/json"})
public List<VEmployee> findRange(@PathParam("from") Integer from, @PathParam("to") Integer to) {
    return super.findRange(new int[]{from, to});
}

Neither XML nor JSON will do; we want to return JSONP. Using the JSONWithPadding class, we can do just that. We will copy and re-write the findRange method to return JSONP. The new findRangeJsonP method looks similar to findRange. However, instead of returning a List<VEmployee>, the new method returns an instance of the JSONWithPadding class. Since List<E> extends Collection<E>, we make the same call as the first method, then assign the List<VEmployee> to a Collection<VEmployee>. We then wrap the Collection in a GenericEntity<T>, which extends Object. The GenericEntity<T> represents a response entity of a generic type T. This is used to instantiate a new instance of the JSONWithPadding class, using the JSONWithPadding(Object jsonSource, String callbackName) constructor. The JSONWithPadding instance, which contains the serialized JSON wrapped in the callback function, is returned to the client.

@GET
@Path("{from}/{to}/jsonp")
@Produces({"application/javascript"})
public JSONWithPadding findRangeJsonP(@PathParam("from") Integer from,
        @PathParam("to") Integer to, @QueryParam("callback") String callback) {
    // retrieve the requested range of employees, exactly as findRange does
    Collection<VEmployee> employees = super.findRange(new int[]{from, to});
    // wrap the collection in a GenericEntity, then in JSONWithPadding, so the
    // serialized JSON is returned wrapped in the named callback function
    return new JSONWithPadding(new GenericEntity<Collection<VEmployee>>(employees) {
    }, callback);
}

We have added two new parts to the ‘from/to’ URL. First, we appended ‘/jsonp’ to the end, to signify that the new findRangeJsonP method is to be called, instead of the original findRange method. Secondly, we added a new ‘callback’ query parameter (@QueryParam). The ‘callback’ parameter passes in the name of the callback function, which is then returned wrapped around the JSONP payload. For example, with callback=parseResponse, the response body is the serialized JSON wrapped in a call to parseResponse(). The new URL format is as follows:

http://[your-service's-glassfish-server-name]:[your-service's-glassfish-domain-port]/JerseyRESTfulService/webresources/entities.vemployee/{from}/{to}/jsonp?callback={callback}

Add the Following Jersey JSONP Method to the RESTful Web Service Class

Adding the Method Requires Importing the ‘JSONWithPadding’ Library

Deployment to GlassFish
To deploy the RESTful web service to GlassFish, run the following Apache Ant target. The target first calls the clean and dist targets to build the .war file. Then, the target calls GlassFish’s asadmin deploy command. It specifies the remote GlassFish server, admin port, admin user, admin password (in the password file), secure or insecure connection, the application name, the context root, and the .war file to be deployed. Note that the server is different for the service than it will be for the client in part 2 of the series.

<target name="glassfish-deploy-remote" depends="clean, dist"
        description="Build distribution (WAR) and deploy to GlassFish">
    <exec failonerror="true" executable="cmd" description="asadmin deploy">
        <arg value="/c" />
        <arg value="asadmin --host=[your-service's-glassfish-server-name] 
            --port=[your-service's-glassfish-domain-admin-port]
            --user=admin --passwordfile=pwdfile --secure=false
            deploy --force=true --name=JerseyRESTfulService
            --contextroot=/JerseyRESTfulService
            dist\JerseyRESTfulService.war" />
    </exec>
</target>

Deploy RESTful Web Service to Remote GlassFish Server Using Apache Ant Target

In GlassFish, you should see several new elements: 1) the JerseyRESTfulService Application, 2) the AdventureWorks_HumanResources JDBC Resource, and 3) the microsoft_sql_AdventureWorks_aw_devPool JDBC Connection Pool. These are the elements that were deployed by Ant. Also note, 4) the RESTful web service class, VEmployeeFacadeREST, is an EJB StatelessSessionBean.

RESTful Web Service Deployed to Remote GlassFish Server

Test the Service with cURL
What is the easiest way to test our RESTful web service without a client? Answer: cURL, the free open-source URL tool. According to the website, “curl is a command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos…), file transfer resume, proxy tunneling and a busload of other useful tricks.”

To use cURL, download and unzip the cURL package to your system’s Programs directory. Add the cURL directory path to your system’s PATH environmental variable. Better yet, create a CURL_HOME environmental variable and add that reference to the PATH variable, as I did. Adding the cURL directory path to PATH allows you to call curl.exe directly from the command line.

Add the cURL Directory Path to the ‘PATH’ Environmental Variable

With cURL installed, we can call the RESTful web service from the command line. To test the service’s new method, call it with the following cURL command:

curl -i -H "Accept: application/javascript" -X GET http://[your-service's-glassfish-server-name]:[your-service's-glassfish-domain-port]/JerseyRESTfulService/webresources/entities.vemployee/1/3/jsonp?callback=parseResponse

Using cURL to Call RESTful Web Service and Return JSONP

Using cURL is great for testing the RESTful web service. However, the command line results are hard to read. I recommend copying the cURL results into Notepad++ with the JSON Viewer plugin installed. Like the Notepad++ XML plugin, the JSON Viewer plugin will format the JSONP and provide a tree view of the data structure.

Notepad++ Displaying JSONP Using the JSON Viewer Plugin

Conclusion

Congratulations! You have created and deployed a RESTful web service with a method capable of returning JSONP. In part 2 of this series, we will create a client to call the RESTful web service and display the JSONP response payload. There are many options available for creating clients, depending on your development platform and project requirements. We will keep it simple – no complex, compiled code, just simple JavaScript using Ajax and jQuery, the well-known JavaScript library.
