Posts Tagged Ubuntu
Installing Puppet Master and Agents on Multiple VMs Using Vagrant and VirtualBox
Posted by Gary A. Stafford in Bash Scripting, Build Automation, DevOps, Enterprise Software Development, Software Development on December 14, 2014
Automatically provision multiple VMs with Vagrant and VirtualBox. Automatically install, configure, and test Puppet Master and Puppet Agents on those VMs.
Introduction
Note: this post and the accompanying source code were updated on 12/16/2014 to v0.2.1. It contains several changes to simplify the install process.
Puppet Labs’ Open Source Puppet Agent/Master architecture is an effective solution for managing infrastructure and system configuration. However, for the average System Engineer or Software Developer, installing and configuring Puppet Master and Puppet Agent can be challenging. If the installation doesn’t work properly, the engineer is stuck troubleshooting, or trying to remove and reinstall Puppet.
A better solution: automate the installation of Puppet Master and Puppet Agent on Virtual Machines (VMs). Automating the installation process guarantees accuracy and consistency. Installing Puppet on VMs means the VMs can be snapshotted, cloned, or simply destroyed and recreated, if needed.
In this post, we will use Vagrant and VirtualBox to create three VMs. The VMs will be built from an Ubuntu 14.04.1 LTS (Trusty Tahr) Vagrant Box, previously on Vagrant Cloud, now on Atlas. We will use a single JSON-format configuration file to build all three VMs, automatically. As part of the Vagrant provisioning process, we will run a bootstrap shell script to install Puppet Master on the first VM (the Puppet Master server) and Puppet Agent on the two remaining VMs (agent nodes).
Lastly, to test our Puppet installations, we will use Puppet to install some basic Puppet modules, including ntp and git on the server, and ntp, git, Docker, and Fig on the agent nodes.
All the source code for this project is on GitHub.
Vagrant
To begin the process, we will use the JSON-format configuration file to create the three VMs, using Vagrant and VirtualBox.
{ "nodes": { "puppet.example.com": { ":ip": "192.168.32.5", "ports": [], ":memory": 1024, ":bootstrap": "bootstrap-master.sh" }, "node01.example.com": { ":ip": "192.168.32.10", "ports": [], ":memory": 1024, ":bootstrap": "bootstrap-node.sh" }, "node02.example.com": { ":ip": "192.168.32.20", "ports": [], ":memory": 1024, ":bootstrap": "bootstrap-node.sh" } } }
The Vagrantfile uses the JSON-format configuration file to provision the three VMs, using a single ‘vagrant up’ command. That’s it: less than 30 lines of actual code in the Vagrantfile to create as many VMs as we need. For this post’s example, we will not need to add any port mappings, which can be done from the JSON configuration file (see the README.md for more directions). The Vagrant Box we are using already has the correct ports opened.
If you have not previously used the Ubuntu Vagrant Box, it will take a few minutes the first time for Vagrant to download it to the local Vagrant Box repository.
# vi: set ft=ruby :
# Builds Puppet Master and multiple Puppet Agent Nodes using JSON config file
# Author: Gary A. Stafford

# read vm and chef configurations from JSON files
nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node

    config.vm.define node_name do |config|
      # configures all forwarding ports in JSON array
      ports = node_values['ports']
      ports.each do |port|
        config.vm.network :forwarded_port,
          host:  port[':host'],
          guest: port[':guest'],
          id:    port[':id']
      end

      config.vm.hostname = node_name
      config.vm.network :private_network, ip: node_values[':ip']

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values[':memory']]
        vb.customize ["modifyvm", :id, "--name", node_name]
      end

      config.vm.provision :shell, :path => node_values[':bootstrap']
    end
  end
end
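Bringing the entire environment up is then a single command. As a quick reference, a typical session might look like this (all standard Vagrant commands):

vagrant up                        # provision all three VMs defined in nodes.json
vagrant status                    # confirm the state of the three machines
vagrant ssh puppet.example.com    # ssh into the Puppet Master server VM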
Once provisioned, the three VMs, also referred to as ‘Machines’ by Vagrant, should appear, as shown below, in Oracle VM VirtualBox Manager.
The names of the VMs, referenced in Vagrant commands, are the parent node names in the JSON configuration file (node_name), for example, ‘vagrant ssh puppet.example.com’.
Bootstrapping Puppet Master Server
As part of the Vagrant provisioning process, a bootstrap script is executed on each of the VMs (script shown below). This script will do 98% of the required work for us. There is one for the Puppet Master server VM, and one for each agent node.
#!/bin/sh
# Run on VM to bootstrap Puppet Master server

if ps aux | grep "puppet master" | grep -v grep 2> /dev/null
then
    echo "Puppet Master is already installed. Exiting..."
else
    # Install Puppet Master
    wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb && \
    sudo dpkg -i puppetlabs-release-trusty.deb && \
    sudo apt-get update -yq && sudo apt-get upgrade -yq && \
    sudo apt-get install -yq puppetmaster

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5 puppet.example.com puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10 node01.example.com node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20 node02.example.com node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add optional alternate DNS names to /etc/puppet/puppet.conf
    sudo sed -i 's/.*\[main\].*/&\ndns_alt_names = puppet,puppet.example.com/' /etc/puppet/puppet.conf

    # Install some initial puppet modules on Puppet Master server
    sudo puppet module install puppetlabs-ntp
    sudo puppet module install garethr-docker
    sudo puppet module install puppetlabs-git
    sudo puppet module install puppetlabs-vcsrepo
    sudo puppet module install garystafford-fig

    # symlink manifest from Vagrant synced folder location
    ln -s /vagrant/site.pp /etc/puppet/manifests/site.pp
fi
There are a few last commands we need to run ourselves, from within the VMs. Once the provisioning process is complete, ‘vagrant ssh puppet.example.com’ into the newly provisioned Puppet Master server. Below are the commands we need to run within the ‘puppet.example.com’ VM.
sudo service puppetmaster status              # test that puppet master was installed
sudo service puppetmaster stop
sudo puppet master --verbose --no-daemonize   # Ctrl+C to kill puppet master
sudo service puppetmaster start
sudo puppet cert list --all                   # check for 'puppet' cert
According to Puppet’s website, ‘these steps will create the CA certificate and the puppet master certificate, with the appropriate DNS names included.‘
Bootstrapping Puppet Agent Nodes
Now that the Puppet Master server is running, open a second terminal tab (‘Shift+Ctrl+T’). Use the command, ‘vagrant ssh node01.example.com’, to ssh into the new Puppet Agent node. The agent node bootstrap script should have already executed as part of the Vagrant provisioning process.
#!/bin/sh
# Run on VM to bootstrap Puppet Agent nodes
# http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet/

if ps aux | grep "puppet agent" | grep -v grep 2> /dev/null
then
    echo "Puppet Agent is already installed. Moving on..."
else
    sudo apt-get install -yq puppet
fi

if cat /etc/crontab | grep puppet 2> /dev/null
then
    echo "Puppet Agent is already configured. Exiting..."
else
    sudo apt-get update -yq && sudo apt-get upgrade -yq

    sudo puppet resource cron puppet-agent ensure=present user=root minute=30 \
        command='/usr/bin/puppet agent --onetime --no-daemonize --splay'

    sudo puppet resource service puppet ensure=running enable=true

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5 puppet.example.com puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10 node01.example.com node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20 node02.example.com node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add agent section to /etc/puppet/puppet.conf
    echo "" && echo "[agent]\nserver=puppet" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null

    sudo puppet agent --enable
fi
Run the two commands below within both the ‘node01.example.com’ and ‘node02.example.com’ agent nodes.
sudo service puppet status                  # test that agent was installed
sudo puppet agent --test --waitforcert=60   # initiate certificate signing request (CSR)
The second command above manually starts Puppet’s Certificate Signing Request (CSR) process, generating the certificates and security credentials (private and public keys) with Puppet’s built-in certificate authority (CA). Each Puppet Agent node must first have its certificate signed by the Puppet Master. According to Puppet’s website, “Before puppet agent nodes can retrieve their configuration catalogs, they need a signed certificate from the local Puppet certificate authority (CA). When using Puppet’s built-in CA (that is, not using an external CA), agents will submit a certificate signing request (CSR) to the CA Puppet Master and will retrieve a signed certificate once one is available.”
Back on the Puppet Master Server, run the following commands to sign the certificate(s) from the agent node(s). You may sign each node’s certificate individually, or wait and sign them all at once. Note the agent node(s) will wait for the Puppet Master to sign the certificate, before continuing with the Puppet Agent configuration run.
sudo puppet cert list         # should see 'node01.example.com' cert waiting for signature
sudo puppet cert sign --all   # sign the agent node certs
sudo puppet cert list --all   # check for signed certs
Once the certificate signing process is complete, the Puppet Agent retrieves the client configuration from the Puppet Master and applies it to the local agent node. The Puppet Agent will execute all applicable steps in the site.pp manifest on the Puppet Master server, designated for that specific Puppet Agent node (i.e., ‘node node02.example.com {...}’).
Below is the main site.pp manifest on the Puppet Master server, applied by Puppet Agent on the agent nodes.
node default {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git
}

node 'node01.example.com', 'node02.example.com' {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git, docker, fig
}
That’s it! You should now have one server VM running Puppet Master, and two agent node VMs running Puppet Agent. Both agent nodes should have been successfully registered with the Puppet Master, and configured themselves based on the Puppet Master’s main manifest. Agent node configuration includes installing ntp, git, Fig, and Docker.
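As a final sanity check, you could spot-check a few of the installed packages on an agent node. This is only a sketch; the exact binary and service names below are assumptions, depending on how each module installs its package:

vagrant ssh node01.example.com
git --version        # installed via the puppetlabs-git module
service ntp status   # installed via the puppetlabs-ntp module
docker --version     # installed via the garethr-docker module (agent nodes only)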
Helpful Links
All the source code for this project is on GitHub.
Puppet Glossary (of terms):
https://docs.puppetlabs.com/references/glossary.html
Puppet Labs Open Source Automation Tools:
http://puppetlabs.com/misc/download-options
Puppet Master Overview:
http://ci.openstack.org/puppet.html
Install Puppet on Ubuntu:
https://docs.puppetlabs.com/guides/install_puppet/install_debian_ubuntu.html
Installing Puppet Master:
http://andyhan.linuxdict.com/index.php/sys-adm/item/273-puppet-371-on-centos-65-quick-start-i
Regenerating Node Certificates:
https://docs.puppetlabs.com/puppet/latest/reference/ssl_regenerate_certificates.html
Automating Development Environments with Vagrant and Puppet:
http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet
Install Latest Node.js and npm in a Docker Container
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Client-Side Development, DevOps, Enterprise Software Development on November 17, 2014
Dynamically Allocated Storage Issues with Ubuntu’s Cloud Images
Posted by Gary A. Stafford in Build Automation, Enterprise Software Development on December 22, 2013

Background
According to Canonical, ‘Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud-platforms such as Amazon EC2, Openstack, Windows and LXC’. Ubuntu also offers disk images, or ‘boxes’, built specifically for Vagrant and VirtualBox. Boxes, according to Vagrant, ‘are the skeleton from which Vagrant machines are constructed. They are portable files which can be used by others on any platform that runs Vagrant to bring up a working environment‘. Ubuntu’s images are very popular with Vagrant users for building their VMs.
Assuming you have VirtualBox and Vagrant installed on your Windows, Mac OS X, or Linux system, with a few simple commands, ‘vagrant box add…’, ‘vagrant init…’, and ‘vagrant up’, you can provision a VM from one of these boxes.
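As a minimal sketch of that workflow (the box name and download URL below are illustrative, not taken from this post):

# download and register an Ubuntu cloud image box, then build and start a VM from it
vagrant box add saucy64 http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-amd64-vagrant-disk1.box
vagrant init saucy64
vagrant up && vagrant ssh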
Dynamically Allocated Storage
The Ubuntu Cloud Images (boxes) are Virtual Machine Disk (VMDK) format files. These VMDK files are configured for dynamically allocated storage, with a virtual size of 40 GB. That means the VMDK format file should be able to grow to an actual size of up to 40 GB, as files are added. According to VirtualBox, the VM ‘will initially be very small and not occupy any space for unused virtual disk sectors, but will grow every time a disk sector is written to for the first time, until the drive reaches the maximum capacity chosen when the drive was created’.
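You can compare a disk’s virtual and actual sizes yourself with VirtualBox’s command-line tool; the disk path below is illustrative:

# report a virtual disk's capacity (virtual size) and current size on disk
VBoxManage showhdinfo ~/VirtualBox\ VMs/vagrant_vm/box-disk1.vmdk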
To illustrate dynamically allocated storage, below are three freshly provisioned VirtualBox virtual machines (VM), on three different hosts, all with different operating systems. One VM is hosted on Windows 7 Enterprise, another on Ubuntu 13.10 Desktop Edition, and the last on Mac OS X 10.6.8. The VMs were all created with Vagrant from the official Ubuntu Server 13.10 (Saucy Salamander) cloud images. The Windows and Ubuntu hosts used the 64-bit version. The Mac OS X host used the 32-bit version. According to VirtualBox Manager, on all three host platforms, the virtual size of the VMs is 40 GB and the actual size is about 1 GB.
So What’s the Problem?
After a significant amount of troubleshooting Chef recipe problems on two different Ubuntu-hosted VMs, the issue with the cloud images became painfully clear. Other than a single (seemingly charmed) Windows host, none of the VMs I tested on Windows-, Ubuntu-, and Mac OS X-hosts would expand beyond 4 GB. Below are the file system disk space usage reports from four hosts’ VMs. All four were created with the most current version of Vagrant (1.4.1), and managed with the most current version of VirtualBox (4.3.6.x).
Windows-hosted 64-bit Cloud Image VM #1:
vagrant@vagrant-ubuntu-saucy-64:/tmp$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 40G 1.1G 37G 3% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 12K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 233G 196G 38G 85% /vagrant
Windows-hosted 32-bit Cloud Image VM #2:
vagrant@vagrant-ubuntu-saucy-32:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1012M 2.8G 27% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 245M 8.0K 245M 1% /dev
tmpfs tmpfs 50M 336K 50M 1% /run
none tmpfs 5.0M 4.0K 5.0M 1% /run/lock
none tmpfs 248M 0 248M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 932G 209G 724G 23% /vagrant
Ubuntu-hosted 64-bit Cloud Image VM:
vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1.1G 2.7G 28% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 8.0K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 74G 65G 9.1G 88% /vagrant
Mac OS X-hosted 32-bit Cloud Image VM:
vagrant@vagrant-ubuntu-saucy-32:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1012M 2.8G 27% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 245M 12K 245M 1% /dev
tmpfs tmpfs 50M 336K 50M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 248M 0 248M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 149G 71G 79G 48% /vagrant
On the first Windows-hosted VM (the only host that actually worked), the virtual SCSI disk device (sda1), formatted ‘ext4‘, had a capacity of 40 GB. But, on the other three hosts, the same virtual device only had a capacity of 4 GB. I tested the various 32- and 64-bit Ubuntu Server 12.10 (Quantal Quetzal), 13.04 (Raring Ringtail), and 13.10 (Saucy Salamander) cloud images. They all exhibited the same issue. However, the Ubuntu 12.04.3 LTS (Precise Pangolin) images worked fine on all three host operating systems.
To prove the issue was specifically with Ubuntu’s cloud images, I also tested boxes from Vagrant’s own repository, as well as other third-party providers. They all worked as expected, with no storage discrepancies. This was suggested in the only post I found on this issue, from StackExchange.
To confirm the VMs will not expand beyond 4 GB, I also created a few multi-gigabyte files on each VM, totaling 4 GB. The VM’s virtual drive would not expand beyond the 4 GB limit to accommodate the new files, as demonstrated below on an Ubuntu-hosted VM:
vagrant@vagrant-ubuntu-saucy-64:~$ dd if=/dev/zero of=/tmp/big_file2.bin bs=1M count=2000
dd: writing '/tmp/big_file2.bin': No space left on device
742+0 records in
741+0 records out
777560064 bytes (778 MB) copied, 1.81098 s, 429 MB/s

vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 3.7G 196K 100% /
The exact cause eludes me, but I tend to think the cloud images are the issue. I know they are capable of working, since the Ubuntu 12.04.3 cloud images expand to 40 GB, but the three most recent releases are limited to 4 GB. Whatever the cause, it’s a significant problem. Imagine you’ve provisioned 100 or 1,000 server nodes on your network from any of these cloud images, expecting them to grow to 40 GB, but really only having 10% of that potential. Worse, they have live production data on them, and suddenly run out of space.
Test Results
Below are the complete shell sessions from three hosts.
Windows-hosted 64-bit Cloud Image VM #1:
Ubuntu-hosted 64-bit Cloud Image VM:
Mac OS X-hosted 32-bit Cloud Image VM:
Resources
Ubuntu Cloud Images for Vagrant
Fourth Extended Filesystem (ext4)
Similar Issue on StackOverflow
Updating Ubuntu Linux to the Latest JDK
Posted by Gary A. Stafford in DevOps, Enterprise Software Development, Java Development, Software Development on December 16, 2013
Introduction
If you are a Java developer new to the Linux environment, installing and configuring Java updates can be a bit daunting. In the following post, we will update a VirtualBox VM running Canonical’s popular Ubuntu Linux operating system. The VM currently contains an earlier version of Java. We will update the VM to the latest release of Java.
All code for this post is available as Gists on GitHub.com, including a complete install script, explained at the end of this post.
Current Version of Java?
First, we will use the ‘update-alternatives --display java’ command to review all the versions of Java currently installed on the VM. We can have multiple copies installed, but only one will be configured and active. We can verify the active version using the ‘java -version’ command.
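For reference, the two commands:

update-alternatives --display java   # list every installed alternative for 'java'
java -version                        # show the currently active version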
In the above example, the 1.7.0_17 JDK version of Java is configured and active. That version is located in the ‘/usr/lib/jvm/jdk1.7.0_17’ subdirectory. There are two other Java versions also installed but not active: an Oracle 1.7.0_17 JRE version and an older 1.7.0_04 JDK version. These three versions are referred to as ‘alternatives’, thus the ‘alternatives’ command. By selecting an alternative version of Java, we control which java binary executable the system calls when the ‘java’ command is executed. In many software development environments, you may need different versions of Java, depending on each client project’s level of technology.
Alternatives
According to About.com, alternatives ‘make it possible for several programs fulfilling the same or similar functions to be installed on a single system at the same time. A generic name in the filesystem is shared by all files providing interchangeable functionality. The alternatives system and the system administrator together determine which actual file is referenced by this generic name. The generic name is not a direct symbolic link to the selected alternative. Instead, it is a symbolic link to a name in the alternatives directory, which in turn is a symbolic link to the actual file referenced.’
We can see this system at work by changing to our ‘/usr/bin’ directory. This directory contains the majority of binary executables on the system. Executing an ‘ls -Al /usr/bin/* | grep -e java -e jar -e appletviewer -e mozilla-javaplugin.so’ command, we see that each Java executable is actually a symbolic link to the alternatives directory, not a binary executable file.
To find out all the commands which support alternatives, you can use the ‘update-alternatives --get-selections’ command. We can use a similar command to get just the Java commands, ‘update-alternatives --get-selections | grep -e java -e jar -e appletview -e mozilla-javaplugin.so’.
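Both commands, ready to run:

update-alternatives --get-selections   # every command managed by alternatives
update-alternatives --get-selections | grep -e java -e jar -e appletview -e mozilla-javaplugin.so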
Computer Architecture?
Next, we need to determine the computer processor architecture of the VM. The architecture determines which version of Java to download. The machine that hosts our VM may have a 64-bit architecture (also known as x86-64, x64, and amd64), while the VM might have a 32-bit architecture (also known as IA-32 or x86). Trying to install 64-bit versions of software onto 32-bit VMs is a common mistake.
The VM’s architecture was originally displayed with the ‘java -version’ command, above. To confirm the 64-bit architecture we can use either the ‘uname -a’ or ‘arch’ command.
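Either command will do; for example:

uname -a   # a trailing 'x86_64' indicates a 64-bit kernel
arch       # prints only the machine architecture, e.g., x86_64 or i686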
JRE or JDK?
One last decision. The Java Runtime Environment’s (JRE) purpose is to run Java applications. The JRE covers most end-users’ needs. The Java Development Kit’s (JDK) purpose is to develop Java applications. The JDK includes a complete JRE, plus tools for developing, debugging, and monitoring Java applications. As developers, we will choose to install the JDK.
Download Latest Version
In the above screen grab, you see our VM is running a 64-bit version of Ubuntu 12.04.3 LTS (Precise Pangolin). Therefore, we will download the most recent version of the 64-bit Linux JDK. We could choose either Oracle’s commercial version of Java or the OpenJDK version. According to Oracle, the ‘OpenJDK is an open-source implementation of the Java Platform, Standard Edition (Java SE) specifications’. We will choose the latest commercial version of Oracle’s JDK. At the time of this post, that is JDK 7u45 (aka 1.7.0_45-b18).
The Linux file formats available for download are a .rpm (Red Hat Package Manager) file and a .tar.gz file (aka tarball). For this post, we will download the tarball, the ‘jdk-7u45-linux-x64.tar.gz’ file.
Extract Tarball
We will use the command ‘sudo tar -zxvf jdk-7u45-linux-x64.tar.gz -C /usr/lib/jvm’, to extract the files directly to the ‘/usr/lib/jvm’ folder. This folder contains all previously installed versions of Java. Once the tarball’s files are extracted, we should see a new directory containing the new version of Java, ‘jdk1.7.0_45’, in the ‘/usr/lib/jvm’ directory.
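A sketch of the extraction step, assuming the tarball was downloaded to your home directory:

cd ~
sudo tar -zxvf jdk-7u45-linux-x64.tar.gz -C /usr/lib/jvm
ls /usr/lib/jvm   # the new jdk1.7.0_45 directory should now be listed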
Installation
There are two configuration modes available in the alternatives system, manual and automatic mode. According to die.net, ‘when a link group is in manual mode, the alternatives system will not (automatically) make any changes to the system administrator’s settings. When a link group is in automatic mode, the alternatives system ensures that the links in the group point to the highest priority alternatives appropriate for the group’.
We will first install and configure the new version of Java in manual mode. To install the new version of Java, we run ‘update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_45/jre/bin/java 4’. Note the last parameter, ‘4’, the priority. Why ‘4’? If we choose to use automatic mode, as we will a little later, we want our new Java version to have the highest numeric priority. In automatic mode, the system looks at the priority to determine which version of Java it will run. In the post’s first screen grab, note each of the three installed Java versions had different priorities: 1, 2, and 3. If we want to use automatic mode later, we must set a higher priority on our new version of Java, ‘4’ or greater. We will set it now as part of the install, so we can use it later in automatic mode.
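The full install command (shown here with sudo, since it modifies system-wide links):

sudo update-alternatives --install /usr/bin/java java \
    /usr/lib/jvm/jdk1.7.0_45/jre/bin/java 4   # '4' is the new, highest priority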
Configuration
First, to configure (activate) the new Java version in alternatives manual mode, we will run ‘update-alternatives --config java’. We are prompted to choose from the list of alternatives for the java executable, which has now grown from three to four choices. Choose the new JDK version of Java, which we just installed.
That’s it. The system has been instructed to use the version we selected as the Java alternative. Now, when we run the ‘java’ command, the system will access the newly configured JDK version. To verify, we rerun the ‘java -version’ command. The version of Java has changed since the first time we ran the command (see first screen grab).
Now let’s switch to automatic mode. To switch to automatic mode, we use ‘update-alternatives --auto java’. Notice how the mode has changed in the screen grab below and our version with the highest priority is selected.
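The three configuration commands together:

sudo update-alternatives --config java   # manual mode: choose the new JDK from the list
java -version                            # verify the active version changed
sudo update-alternatives --auto java     # automatic mode: highest priority wins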
Other Java Command Alternatives
We will repeat this process for the other Java-related executables. Many are part of the JDK Tools and Utilities. Java-related executables include the Javadoc Tool (javadoc), Java Web Start (javaws), Java Compiler (javac), the Java Archive Tool (jar), javap, javah, appletviewer, and the Java Plugin for Linux. Note that if you are updating a headless VM, you will not have a web browser installed; therefore, it is not necessary to configure the mozilla-javaplugin.
JAVA_HOME
Many applications which need the JDK/JRE to run look up the JAVA_HOME environment variable for the location of the Java compiler/interpreter. The most common approach is to hard-wire the JAVA_HOME variable to the current JDK directory. Use the ‘sudo nano ~/.bashrc’ command to open the bashrc file. Add or modify the following line, ‘export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45’. Remember to also add the java executable path to the PATH variable, ‘export PATH=$PATH:$JAVA_HOME/bin’. Lastly, execute a ‘bash --login’ command for the changes to be visible in the current session.
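The two lines to add to ~/.bashrc (the path assumes the jdk1.7.0_45 location used above):

# set JAVA_HOME to the new JDK and add its binaries to the PATH
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
export PATH=$PATH:$JAVA_HOME/bin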
Alternately, we could use a symbolic link to ‘default-java’. There are several good posts on the Internet on using the ‘default-java’ link.
Complete Scripted Example
Below is a complete script to install or update the latest JDK on Ubuntu. Simply download the tarball to your home directory, along with the below script, available on GitHub. Execute the script with a ‘sh install_java_complete.sh’. All java executables will be installed and configured as alternatives in automatic mode. The JAVA_HOME and PATH environment variables will also be set.
Below is a test of the script on a fresh Vagrant VM of an Ubuntu Cloud Image of Ubuntu Server 13.10 (Saucy Salamander). Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud-platforms such as Amazon EC2, Openstack, Windows, LXC, and Vagrant. The script was able to successfully install and configure the JDK, as well as the JAVA_HOME and PATH environment variables.
Deleting Old Versions?
Before deciding to completely delete previously installed versions of Java from the ‘/usr/lib/jvm’ directory, ensure there are no links to those versions from OS and application configuration files. Many applications, such as NetBeans, Eclipse, soapUI, and WebLogic Server, may contain their own Java configurations. If they don’t use the JAVA_HOME variable, they should be updated to reflect the current active Java version when possible.
Resources
Ubuntu Linux: Install Latest Oracle Java 7
update-alternatives(8) – Linux man page
Setting Up the Nexus 7 for Development with ADT on Ubuntu
Posted by Gary A. Stafford in Software Development on February 17, 2013
Recently, I purchased a Google Nexus 7. Excited to begin development with this Android 4.2 device, I first needed to prepare my development environment. The process of configuring my Ubuntu Linux-based laptop to debug directly on the Nexus 7 was fairly easy once I identified all the necessary steps. I thought I would share my process for those who are trying to do the same. Note the following steps assume you already have Java installed on your development computer, and that your Nexus 7 is connected via USB.
Configuration
1) Install ADT Bundle: Download and extract the Android Developer Tools (ADT) Bundle from http://developer.android.com/sdk/index.html. I installed the Linux 64-bit version for use on my Ubuntu 12.10 laptop. The bundle, according to the website, includes ‘the essential Android SDK components and a version of the Eclipse IDE with built-in ADT to streamline your Android app development.’
2) Install IA32 Libraries: Install the ia32 shared libraries using Synaptic Package Manager, or the following command, ‘sudo apt-get install ia32-libs’. I received some initial errors with ADT, until I found a post on Stack Overflow that suggested loading the libraries; they did the trick.
3) Configure Nexus 7 to Auto-Mount: To auto-mount the Nexus 7 on your computer, you need to make some changes to your computer’s system configuration. Follow this post, ‘http://bernaerts.dyndns.org/linux/247-ubuntu-automount-nexus7-mtp‘, to ‘configure your Ubuntu computer to directly access your Nexus 7 exported filesystem in MTP mode as soon as you plug it to a USB port’. It looked a bit intimidating at first, but it actually turned out to be pretty easy and only took a few minutes.
4) Setup ADT to Debug Nexus 7: To debug on the Nexus 7 from ADT via USB, according to this next post, ‘you need to add a udev rules file that contains a USB configuration for each type of device you want to use for development.’ Follow the steps in this post, ‘http://developer.android.com/tools/device.html‘ to create the rules file, similar to step 3.
5) Setup Nexus 7 for USB Debugging: The post mentioned in step 4 also discusses setting up your Nexus 7 to enable USB debugging. The post instructs you to ‘go to Settings > About phone and tap Build number seven times. Return to the previous screen to find Developer options.’ From there, check the ‘USB debugging’ box. There are many more options you may wish to experiment with later.
6) Update your Software: After these first five steps, I suggest updating your software and restarting your computer. Run the following command to make sure your system is up-to-date, ‘sudo apt-get update && sudo apt-get upgrade’. I always run this before and after any software installations to make sure my system is current.
After completing step 3, the Nexus 7 device should appear on your Launcher. However, it is not until steps 4 and 5 that it will appear in ADT.
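Once steps 4 and 5 are complete, a quick way to confirm the connection from the command line (assuming the ADT bundle’s sdk/platform-tools directory is on your PATH):

adb devices   # the Nexus 7 should appear in the list of attached devices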
Demonstration
Below is a simple demonstration of debugging an Android application on the Nexus 7 from ADT, via USB. Although ADT allows you to configure an Android Virtual Device (AVD), direct debugging on the Nexus 7 is substantially faster. The Nexus 7 AVD emulation was incredibly slow, even on my 64-bit, Core i5 laptop.
Object Tracking on the Raspberry Pi with C++, OpenCV, and cvBlob
Posted by Gary A. Stafford in Bash Scripting, C++ Development, Raspberry Pi on February 9, 2013
Use C++ with OpenCV and cvBlob to perform image processing and object tracking on the Raspberry Pi, using a webcam.
Source code and compiled samples are now available on GitHub. The below post describes the original code on the ‘Master’ branch. As of May 2014, there is a revised and improved version of the project on the ‘rev05_2014’ branch, on GitHub. The README.md details the changes and also describes how to install OpenCV, cvBlob, and all dependencies!
Introduction
As part of a project with a local FIRST Robotics Competition (FRC) team, I’ve been involved in developing a Computer Vision application for use on the Raspberry Pi. Our FRC team’s goal is to develop an object tracking and target acquisition application that could be run on the Raspberry Pi, as opposed to the robot’s primary embedded processor, a National Instruments NI cRIO-FRC II. We chose to work in C++ for its speed. We also decided to test two popular open-source Computer Vision (CV) libraries: OpenCV and cvBlob.
Due to its single ARM1176JZF-S 700 MHz ARM processor, a significant limitation of the Raspberry Pi is its ability to perform complex operations in real-time, such as image processing. In an earlier post, I discussed using Motion to detect motion with a webcam on the Raspberry Pi. Although the Raspberry Pi was capable of running Motion, it required a greatly reduced capture size and frame-rate. And even then, the Raspberry Pi’s ability to process the webcam’s feed was very slow. I had doubts it would be able to meet the processor-intense requirements of this project.
Development for the Raspberry Pi
Using C++ in NetBeans 7.2.1 on Ubuntu 12.04.1 LTS and 12.10, I wrote several small pieces of code to demonstrate the Raspberry Pi’s ability to perform basic image processing and object tracking. Parts of the following code are based on several OpenCV and cvBlob code examples found in my research. Many of those examples are linked at the end of this article. Examples of cvBlob are especially hard to find.
The Code
There are five files: ‘main.cpp’, ‘testfps.cpp (testfps.h)’, and ‘testcvblob.cpp (testcvblob.h)’. The main.cpp file’s main method calls the test methods in the other two files. The cvBlob library only works with pre-2.0 versions of OpenCV. Therefore, I wrote all the code using the older objects and methods. The code is not written using the latest OpenCV 2.0 conventions. For example, cvBlob uses 1.0’s ‘IplImage’ image type instead of 2.0’s newer ‘CvMat’ image type. My next project is to re-write the cvBlob code to use OpenCV 2.0 conventions and/or find a newer library. The cvBlob library offered so many advantages, I felt not using the newer OpenCV 2.0 features was still worthwhile.
Main Program Method (main.cpp)
Tests 1-2 (testcvblob.hpp)
Tests 1-2 (testcvblob.cpp)
Tests 2-6 (testfps.hpp)
Tests 2-6 (testfps.cpp)
Compiling Locally on the Raspberry Pi
After writing the code, the first big challenge was cross-compiling the native C++ code, written on Intel IA-32 and 64-bit x86-64 processor-based laptops, to run on the Raspberry Pi’s ARM architecture. After failing to successfully cross-compile the C++ source code using crosstools-ng, mostly due to my lack of cross-compiling experience, I resorted to using g++ to compile the C++ source code directly on the Raspberry Pi.
First, I had to properly install the various CV libraries and the compiler on the Raspberry Pi, which itself is a bit daunting.
Compiling OpenCV 2.4.3, from the source-code, on the Raspberry Pi took an astounding 8 hours. Even though compiling the C++ source code takes longer on the Raspberry Pi, I could be assured the compiled code would run locally. Below are the commands that I used to transfer and compile the C++ source code on my Raspberry Pi.
Copy and Compile Commands
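The original commands were embedded as a Gist. As a rough sketch (the host name, directories, and compiler flags below are illustrative assumptions, not the post’s exact commands), the workflow looked something like this:

# from the laptop: copy the C++ source files to the Raspberry Pi
scp main.cpp testfps.cpp testfps.h testcvblob.cpp testcvblob.h pi@raspberrypi:~/testcv/

# on the Raspberry Pi: compile locally with g++, linking OpenCV and cvBlob
ssh pi@raspberrypi
cd ~/testcv
g++ main.cpp testfps.cpp testcvblob.cpp -o TestFps \
    `pkg-config --cflags --libs opencv` -lcvblob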
Special Note About cvBlob on ARM
At first I had given up on cvBlob working on the Raspberry Pi. All the cvBlob tests I ran, no matter how simple, continued to hang on the Raspberry Pi after working perfectly on my laptop. I had narrowed the problem down to the ‘cvLabel’ method, but was unable to resolve it. However, I recently discovered a documented bug on the cvBlob website. It concerned cvBlob and the very same ‘cvLabel’ method on ARM-based devices (ARM = Raspberry Pi!). After making a minor modification to cvBlob’s ‘cvlabel.cpp’ source code, as directed in the bug post, and re-compiling on the Raspberry Pi, the test worked perfectly.
Testing OpenCV and cvBlob
The code contains three pairs of tests (six total), as follows:
- OpenCV (w/ live webcam feed): Determine if OpenCV is installed and functioning properly with the compiled C++ code. Capture a webcam feed using OpenCV, and display the feed and frame rate (fps).
- OpenCV (w/o live webcam feed): Same as Test #1, but only print the frame rate (fps). The computer doesn’t need to display the video feed to process the data. More importantly, the webcam’s feed might unnecessarily tax the computer’s processor and GPU.
- OpenCV and cvBlob (w/ live webcam feed): Determine if OpenCV and cvBlob are installed and functioning properly with the compiled C++ code. Detect and display all objects (blobs) in a specific red color range, contained in a static jpeg image.
- OpenCV and cvBlob (w/o live webcam feed): Same as Test #3, but only print some basic information about the static image and the number of blobs detected. Again, the computer doesn’t need to display the video feed to process the data.
- Blob Tracking (w/ live webcam feed): Detect, track, and display all objects (blobs) in a specific blue color range, along with the largest blob’s positional data. Captured with a webcam, using OpenCV and cvBlob.
- Blob Tracking (w/o live webcam feed): Same as Test #5, but only display the largest blob’s positional data. Again, the computer doesn’t need to display the webcam feed to process the data. The feed taxes the computer’s processor unnecessarily, which is being consumed with detecting and tracking the blobs. The blob’s positional data is sent to the robot and used by its targeting system to position its shooting platform.
The Program
There are two ways to run this program. First, from the command line you can call the application and pass in three parameters. The parameters include:
- Test method you want to run (1-6)
- Width of the webcam capture window in pixels
- Height of the webcam capture window in pixels.
An example would be ‘./TestFps 2 640 480’ or ‘./TestFps 5 320 240’.
The second method is to run the program without passing in any parameters. In that case, the program will prompt you to input the test number and other parameters on-screen.
Test 1: Laptop versus Raspberry Pi
Test 3: Laptop versus Raspberry Pi
Test 5: Laptop versus Raspberry Pi
The Results
Each test was first run on two Linux-based laptops, with Intel 32-bit and 64-bit architectures, and with two different USB webcams. The laptops were used to develop and test the code, as well as provide a baseline for application performance. Many factors can dramatically affect the application’s ability to do image processing. They include the computer’s processor(s), RAM, HDD, GPU, USB, Operating System, and the webcam’s video capture size, compression ratio, and frame-rate. There are significant differences in all these elements when comparing an average laptop to the Raspberry Pi.
Frame-rates on the Intel processor-based Ubuntu laptops easily performed at or beyond the maximum 30 fps rate of the webcams, at 640 x 480 pixels. On a positive note, the Raspberry Pi was able to compile and execute the tests of OpenCV and cvBlob (see bug noted at end of article). Unfortunately, at least in my tests, the Raspberry Pi could not achieve more than 1.5 – 2 fps at most, even in the most basic tests, and at a reduced capture size of 320 x 240 pixels. This can be seen in the first and second screen-grabs of Test #1, above. Although I’m sure there are ways to improve the code and optimize the image capture, the results were much too slow to provide accurate, real-time data to the robot’s targeting system.
Links of Interest
Static Test Images Free from: http://www.rgbstock.com/
Great Website for OpenCV Samples: http://opencv-code.com/
Another Good Website for OpenCV Samples: http://opencv-srf.blogspot.com/2010/09/filtering-images.html
cvBlob Code Sample: https://code.google.com/p/cvblob/source/browse/samples/red_object_tracking.cpp
Detecting Blobs with cvBlob: http://8a52labs.wordpress.com/2011/05/24/detecting-blobs-using-cvblobs-library/
Best Post/Script to Install OpenCV on Ubuntu and Raspberry Pi: http://jayrambhia.wordpress.com/2012/05/02/install-opencv-2-3-1-and-simplecv-in-ubuntu-12-04-precise-pangolin-arch-linux/
Measuring Frame-rate with OpenCV: http://8a52labs.wordpress.com/2011/05/19/frames-per-second-in-opencv/
OpenCV and Raspberry Pi: http://mitchtech.net/raspberry-pi-opencv/
Extending the Life of Older Windows Laptops with Linux
Posted by Gary A. Stafford in Software Development on January 29, 2013
Like thousands of technology-driven consumers, our family recently upgraded an older laptop. As a Christmas gift, my daughter received an Intel Core i3 Windows laptop. The new laptop replaced her well-worn Intel Core 2 laptop. That old Core 2 was my previous development laptop. It had been upgraded from Windows XP to Windows 7 Pro. It also had its RAM increased and a bigger hard drive installed. But after a few years of heavy use, it seemed painfully slow, bloated with endless software installs and Windows updates. It was certainly too under-powered to support an upgrade to Windows 8.
With four family members usually upgrading their laptops every 3-4 years, my daughter’s old laptop was destined to join a growing stack of ‘retired’ electronics in our basement — until today. In just a few hours, I transformed that laptop into a ‘mean, lean, open-source-breathing, Ubuntu-powered machine’. Okay, well maybe not that dramatic, but a significant increase in performance and lifespan nonetheless, at no cost and with minimal effort.
I am a huge fan of Oracle VM VirtualBox. I have three other machines running Linux VMs on Windows. However, a VM is no substitute for a dedicated system, especially on a laptop. There are increased system requirements to run the host’s OS, along with VirtualBox, and the VM’s OS. At the same time, there are often limitations on the VM’s OS. I had wanted a dedicated Ubuntu laptop for a while, and this was my chance.
Performance with Ubuntu
My thoughts after a month with the laptop? First, the cons. Some issues transcend the operating system. Although the battery life has improved slightly with Linux, the battery is approaching end-of-life and needs to be replaced. Also, the laptop continues to run hotter than it should. However, it doesn’t suffer the occasional shutdowns from overheating and constant loss of wireless that it did with Windows. Lastly, the screen is starting to lose its brightness, but is still usable.
The pros? Limited memory is not as much an issue with Ubuntu as it is with Windows. Ubuntu starts up in a fraction of the time it took Windows. All the laptop’s components worked out of the box on Ubuntu with no driver issues. All the applications I use on Windows are also available on Ubuntu, or there are good alternatives. Lastly, using free services such as Dropbox, Ubuntu One, and Google Drive, I can author and store all my documents in the Cloud, and transfer them seamlessly between Linux and Windows when necessary. What’s not to love?
Overall, I’m satisfied enough to use it as my personal laptop. I carry the laptop with me daily for Java, C++, and Python development, emailing, blogging, and web-surfing. Next time you’re tempted to retire an old laptop, consider extending its life with Linux, and saving yourself the cost of a new laptop.