Posts Tagged Vagrant
Automate the Provisioning and Configuration of HAProxy and an Apache Web Server Cluster Using Foreman
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps on February 21, 2015
Use Vagrant, Foreman, and Puppet to provision and configure HAProxy as a reverse proxy and load balancer for a cluster of Apache web servers.
Introduction
In this post, we will use several technologies, including Vagrant, Foreman, and Puppet, to provision and configure a basic load-balanced web server environment. In this environment, a single node running HAProxy will act as a reverse proxy and load balancer for two identical Apache web server nodes. All three nodes will be provisioned and bootstrapped using Vagrant, from a Linux CentOS 6.5 Vagrant Box. Afterwards, Foreman, with Puppet, will be used to install and configure the nodes with HAProxy and Apache, using a series of Puppet modules.
For this post, I will assume you already have running instances of Vagrant with the vagrant-hostmanager plugin, VirtualBox, and Foreman. If you are unfamiliar with Vagrant, the vagrant-hostmanager plugin, VirtualBox, Foreman, or Puppet, review my recent post, Installing Foreman and Puppet Agent on Multiple VMs Using Vagrant and VirtualBox. That post demonstrates how to install and configure Foreman, and how to provision and bootstrap virtual machines using Vagrant and VirtualBox. We will repeat many of those same steps in this post, with the addition of HAProxy, Apache, and some custom Puppet configuration modules.
All code for this post is available on GitHub. However, it has been updated as of 8/23/2015. Changes were required to fix compatibility issues with the latest versions of Puppet 4.x and Foreman. Additionally, the version of CentOS on all VMs was updated from 6.6 to 7.1, and the version of Foreman was updated from 1.7 to 1.9.
Steps
Here is a high-level overview of our steps in this post:
- Provision and configure the three CentOS-based virtual machines (‘nodes’) using Vagrant and VirtualBox
- Install the HAProxy and Apache Puppet modules, from Puppet Forge, onto the Foreman server
- Install the custom HAProxy and Apache Puppet configuration modules, from GitHub, onto the Foreman server
- Import the four new modules' classes into Foreman's Puppet class library
- Add the three new virtual machines (‘hosts’) to Foreman
- Configure the new hosts in Foreman, assigning the appropriate Puppet classes
- Apply the Foreman Puppet configurations to the new hosts
- Test that HAProxy is working as a reverse proxy and load balancer for the two Apache web server nodes
In this post, I will use the terms 'virtual machine', 'machine', 'node', 'agent node', and 'host' interchangeably, based on each software's own nomenclature.
Provisioning
First, using the process described in the previous post, provision and bootstrap the three new virtual machines. The new machines' Vagrant configuration is shown below. This should be added to the JSON configuration file. All code for the earlier post is available on GitHub.
{ "nodes": { "haproxy.example.com": { ":ip": "192.168.35.101", "ports": [], ":memory": 512, ":bootstrap": "bootstrap-node.sh" }, "node01.example.com": { ":ip": "192.168.35.121", "ports": [], ":memory": 512, ":bootstrap": "bootstrap-node.sh" }, "node02.example.com": { ":ip": "192.168.35.122", "ports": [], ":memory": 512, ":bootstrap": "bootstrap-node.sh" } } }
After provisioning and bootstrapping, observe the three machines running in Oracle’s VM VirtualBox Manager.
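If you prefer to confirm the machines from the command line rather than the VirtualBox GUI, the checks below should also work; this is a minimal sketch, run from the Vagrant project directory and assuming the default VirtualBox provider.

# confirm the state of the three machines from the project directory
vagrant status

# list the VMs VirtualBox believes are currently running
VBoxManage list runningvms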
Installing Puppet Forge Modules
The next task is to install the HAProxy and Apache Puppet modules on the Foreman server, so that Foreman has access to them. I chose the puppetlabs-haproxy and puppetlabs-apache modules. Both modules were authored by Puppet Labs and are available on Puppet Forge.
The exact commands to install the modules onto your Foreman server will depend on your Foreman environment configuration. In my case, I used the following two commands to install the two Puppet Forge modules into my ‘Production’ environment’s module directory.
sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-haproxy
sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-apache

# confirm module installation
puppet module list --modulepath /etc/puppet/environments/production/modules
Installing Configuration Modules
Next, install the HAProxy and Apache configuration Puppet modules on the Foreman server. Both modules are hosted in my GitHub repositories and can be downloaded directly from GitHub and installed on the Foreman server from the command line. Again, the exact commands will depend on your Foreman environment configuration. In my case, I used the following commands to install the two configuration modules into my 'Production' environment's module directory. Note that I am downloading version 0.1.0 of both modules, current at the time of writing this post. Double-check for the latest versions of both modules before running the commands, and modify them if necessary.
# apache config module
wget -N https://github.com/garystafford/garystafford-apache_example_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force

# haproxy config module
wget -N https://github.com/garystafford/garystafford-haproxy_node_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force

# confirm module installation
puppet module list --modulepath /etc/puppet/environments/production/modules
HAProxy Configuration
The HAProxy configuration module configures HAProxy's /etc/haproxy/haproxy.cfg file. The single class in the module's init.pp manifest is as follows:
class haproxy_node_config () inherits haproxy {
  haproxy::listen { 'puppet00':
    collect_exported => false,
    ipaddress        => '*',
    ports            => '80',
    mode             => 'http',
    options          => {
      'option'  => ['httplog'],
      'balance' => 'roundrobin',
    },
  }

  Haproxy::Balancermember <<| listening_service == 'puppet00' |>>

  haproxy::balancermember { 'haproxy':
    listening_service => 'puppet00',
    server_names      => ['node01.example.com', 'node02.example.com'],
    ipaddresses       => ['192.168.35.121', '192.168.35.122'],
    ports             => '80',
    options           => 'check',
  }
}
The resulting /etc/haproxy/haproxy.cfg file will have the following configuration added. It defines the two Apache web server nodes' hostnames, IP addresses, and HTTP port. The configuration also defines the load-balancing method, 'round-robin' in our example. In this example, we are using layer 7 load-balancing (application layer, HTTP), as opposed to layer 4 load-balancing (transport layer, TCP). Either method will work for this example. The Puppet Labs HAProxy module's documentation on Puppet Forge and HAProxy's own documentation are both excellent starting points for understanding how to configure HAProxy. We are barely scratching the surface of HAProxy's capabilities in this brief example.
listen puppet00
  bind *:80
  mode http
  balance roundrobin
  option httplog
  server node01.example.com 192.168.35.121:80 check
  server node02.example.com 192.168.35.122:80 check
Apache Configuration
The Apache configuration module creates a default web page in Apache's docroot directory, /var/www/html/index.html. The single class in the module's init.pp manifest writes that file, using facter facts for the node's fully qualified domain name and IP address.
The resulting /var/www/html/index.html file contains a simple default web page. Observe that the facter variables used in the module's manifest are replaced by the individual node's hostname and IP address during application of the configuration by Puppet (i.e., ${fqdn} becomes node01.example.com).
Both of these Puppet modules were created specifically to configure HAProxy and Apache for this post. Unlike published modules on Puppet Forge, these two modules are very simple, and don’t necessarily represent the best practices and patterns for authoring Puppet Forge modules.
Importing into Foreman
After installing the new modules onto the Foreman server, we need to import them into Foreman. This is accomplished from the ‘Puppet classes’ tab, using the ‘Import from theforeman.example.com’ button. Once imported, the module classes are available to assign to host machines.
Add Host to Foreman
Next, add the three new hosts to Foreman. If you have questions about how to add the nodes to Foreman, start Puppet's Certificate Signing Request (CSR) process on the hosts, sign the certificates, or perform other first-time tasks, refer to the previous post. That post explains these processes in detail.
Configure the Hosts
Next, configure the HAProxy and Apache nodes with the necessary Puppet classes. In addition to the base module classes and configuration classes, I recommend adding the git and ntp modules to each of the new nodes. These modules were explained in the previous post. Refer to the screen-grabs below for the correct module classes to add, specific to HAProxy and Apache.
Agent Configuration and Testing the System
Once configurations are retrieved and applied by Puppet Agent on each node, we can test our reverse proxy, load-balanced environment. To start, open a browser and load haproxy.example.com. You should see one of the two pages below. Refresh the page a few times. You should observe HAProxy redirecting you to one Apache web server node, and then the other, using HAProxy's round-robin algorithm. You can differentiate the Apache web servers by the hostname and IP address displayed on the web page.
After hitting HAProxy's URL several times successfully, view HAProxy's built-in Statistics Report page at http://haproxy.example.com/haproxy?stats. Note below, each of the two Apache nodes has been hit 44 times by HAProxy. This demonstrates the effectiveness of HAProxy's reverse proxy and load-balancing features.
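If you prefer to exercise the proxy from the command line, a simple loop with curl should also show the responses alternating between the two Apache nodes; this is a minimal sketch, and it assumes the default web page contains the node's hostname, as described above.

# hit HAProxy ten times; the reported hostname should alternate between node01 and node02
for i in {1..10}; do
  curl -s http://haproxy.example.com/ | grep -i 'node0'
done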
Accessing Apache Directly
If you are testing HAProxy from the same machine on which you created the virtual machines (the VirtualBox host), you will likely be able to access either of the Apache web servers directly (e.g., node02.example.com). The VirtualBox host's hosts file contains the IP addresses and hostnames of all three hosts. This DNS configuration was done automatically by the vagrant-hostmanager plugin. However, in an actual production environment, only the HAProxy server's hostname and IP address would be publicly accessible to a user. The two Apache nodes would sit behind a firewall, accessible only by the HAProxy server. HAProxy acts as a façade to the public side of the network.
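A quick way to confirm this direct access from the VirtualBox host is to request each node by name; a minimal sketch, assuming the hostnames added to the hosts file by vagrant-hostmanager.

# each request should return an HTTP 200 directly from the Apache node
curl -s -o /dev/null -w "%{http_code}\n" http://node01.example.com/
curl -s -o /dev/null -w "%{http_code}\n" http://node02.example.com/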
Testing Apache Host Failure
The main reason you would likely use a load-balancer is high-availability. With HAProxy acting as a load-balancer, we should be able to impair one of the two Apache nodes, without noticeable disruption. HAProxy will continue to serve content from the remaining Apache web server node.
Log into node01.example.com using the following command: vagrant ssh node01.example.com. To simulate an impairment on 'node01', run the following command to stop Apache: sudo service httpd stop. Now, refresh the haproxy.example.com URL in your web browser. You should notice HAProxy is now redirecting all traffic to node02.example.com.
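The whole failure test can also be scripted from the VirtualBox host; a minimal sketch, run from the Vagrant project directory and assuming the hostnames used throughout this post.

# stop Apache on node01 to simulate a failure
vagrant ssh node01.example.com -c 'sudo service httpd stop'

# all responses should now come from node02
for i in {1..5}; do curl -s http://haproxy.example.com/ | grep -i 'node0'; done

# restore Apache on node01 when finished
vagrant ssh node01.example.com -c 'sudo service httpd start'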
Troubleshooting
While troubleshooting HAProxy configuration issues for this demonstration, I discovered logging is not configured by default on CentOS. No worries, I recommend HAProxy: Give me some logs on CentOS 6.5!, by Stephane Combaudon, to get logging running. Once logging is active, you can more easily troubleshoot HAProxy and Apache configuration issues. Here are some example commands you might find useful:
# haproxy
sudo more -f /var/log/haproxy.log
sudo haproxy -f /etc/haproxy/haproxy.cfg -c # check/validate config file

# apache
sudo ls -1 /etc/httpd/logs/
sudo tail -50 /etc/httpd/logs/error_log
sudo less /etc/httpd/logs/access_log
Redundant Proxies
In this simple example, the system's weakest point is obviously the single HAProxy instance. It represents a single point of failure (SPOF) in our environment. In an actual production environment, you would likely have more than one instance of HAProxy. They might both be in a load-balanced pool, or one might be active with the other on standby as a failover, should the first instance become impaired. There are several techniques for building in proxy redundancy, often with the use of a Virtual IP and Keepalived. Below is a list of articles that might help you take this post's example to the next level.
- An Introduction to HAProxy and Load Balancing Concepts
- Install HAProxy and Keepalived (Virtual IP)
- Redundant Load Balancers – HAProxy and Keepalived
- Howto setup a haproxy as fault tolerant / high available load balancer for multiple caching web proxies on RHEL/Centos/SL
- Keepalived Module on Puppet Forge: arioch/keepalived, by Tom De Vylder
Installing Foreman and Puppet Agent on Multiple VMs Using Vagrant and VirtualBox
Posted by Gary A. Stafford in DevOps, Enterprise Software Development, Software Development on January 18, 2015
Automatically install and configure Foreman, the open source infrastructure lifecycle management tool, and multiple Puppet Agent VMs using Vagrant and VirtualBox.
Introduction
In the last post, Installing Puppet Master and Agents on Multiple VM Using Vagrant and VirtualBox, we installed Puppet Master/Agent on VirtualBox VMs using Vagrant. Puppet Master is an excellent tool, but lacks the ease-of-use of Puppet Enterprise or Foreman. In this post, we will build an almost identical environment, substituting Foreman for Puppet Master.
According to Foreman’s website, “Foreman is an open source project that helps system administrators manage servers throughout their lifecycle, from provisioning and configuration to orchestration and monitoring. Using Puppet or Chef and Foreman’s smart proxy architecture, you can easily automate repetitive tasks, quickly deploy applications, and proactively manage change, both on-premise with VMs and bare-metal or in the cloud.”
Combined with Puppet Labs’ Open Source Puppet, Foreman is an effective solution to manage infrastructure and system configuration. Again, according to Foreman’s website, the Foreman installer is a collection of Puppet modules that installs everything required for a full working Foreman setup. The installer uses native OS packaging and adds necessary configuration for the complete installation. By default, the Foreman installer will configure:
- Apache HTTP with SSL (using a Puppet-signed certificate)
- Foreman running under mod_passenger
- Smart Proxy configured for Puppet, TFTP and SSL
- Puppet master running under mod_passenger
- Puppet agent configured
- TFTP server (under xinetd on Red Hat platforms)
For the average Systems Engineer or Software Developer, installing and configuring Foreman, Puppet Master, Apache, Puppet Agent, and the other associated software packages listed above is daunting. If the installation doesn't work properly, you are stuck troubleshooting, or trying to remove and reinstall some or all of the components.
A better solution is to automate the installation of Foreman into a Docker container, or onto a VM using Vagrant. Automating the installation process guarantees accuracy and consistency. The Vagrant VirtualBox VM can be snapshotted, moved to another host, or simply destroyed and recreated, if needed.
All code for this post is available on GitHub. However, it has been updated as of 8/23/2015. Changes were required to fix compatibility issues with the latest versions of Puppet 4.x and Foreman. Additionally, the version of CentOS on all VMs was updated from 6.6 to 7.1, and the version of Foreman was updated from 1.7 to 1.9.
The Post’s Example
In this post, we will use Vagrant and VirtualBox to create three VMs. The VMs in this post will be built from a standard CentOS 6.5 x64 base Vagrant Box, located on Atlas. We will use a single JSON-format configuration file to automatically build all three VMs with Vagrant. As part of the provisioning process, using Vagrant's shell provisioner, we will execute a bootstrap shell script. The script will install Foreman and its associated software on the first VM, and Puppet Agent on the two remaining VMs (aka Puppet 'agent nodes' or Foreman 'hosts').
Foreman does have the ability to provision on bare-metal infrastructure and public or private clouds. However, this example simulates an environment in which you already have existing nodes you want to manage with Foreman.
The Foreman bootstrap script will also download several Puppet modules. To test Foreman once the provisioning is complete, import those modules' classes into Foreman and assign the classes to the hosts. The hosts will fetch and apply the configurations. You can then test for the installed instances of those modules' components on the Puppet Agent hosts.
Vagrant
To begin the process, we will use the JSON-format configuration file to create the three VMs, using Vagrant and VirtualBox.
{ "nodes": { "theforeman.example.com": { ":ip": "192.168.35.5", "ports": [], ":memory": 1024, ":bootstrap": "bootstrap-foreman.sh" }, "agent01.example.com": { ":ip": "192.168.35.10", "ports": [], ":memory": 1024, ":bootstrap": "bootstrap-node.sh" }, "agent02.example.com": { ":ip": "192.168.35.20", "ports": [], ":memory": 1024, ":bootstrap": "bootstrap-node.sh" } } }
The Vagrantfile uses the JSON-format configuration file to provision the three VMs, using a single 'vagrant up' command. That's it: less than 30 lines of actual code in the Vagrantfile to create as many VMs as you want. For this post's example, we will not need to add any VirtualBox port mappings. However, that can also be done from the JSON configuration file (see the README.md for more directions).
If you have not used the CentOS Vagrant Box before, it will take a few minutes the first time for Vagrant to download it to the local Vagrant Box repository.
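If you would rather pre-fetch the box before running 'vagrant up', the commands below should work; a minimal sketch, assuming a Vagrant version recent enough to resolve the 'chef/centos-6.5' box name (used in the Vagrantfile that follows) against Atlas.

# download the CentOS box ahead of time (one-time, cached locally afterwards)
vagrant box add chef/centos-6.5

# confirm which boxes are in the local repository
vagrant box list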
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Builds single Foreman server and
# multiple Puppet Agent Nodes using JSON config file
# Gary A. Stafford - 01/15/2015

# read vm and chef configurations from JSON files
nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.5"

  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node

    config.vm.define node_name do |config|
      # configures all forwarding ports in JSON array
      ports = node_values['ports']
      ports.each do |port|
        config.vm.network :forwarded_port,
          host:  port[':host'],
          guest: port[':guest'],
          id:    port[':id']
      end

      config.vm.hostname = node_name
      config.vm.network :private_network, ip: node_values[':ip']

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values[':memory']]
        vb.customize ["modifyvm", :id, "--name", node_name]
      end

      config.vm.provision :shell, :path => node_values[':bootstrap']
    end
  end
end
Once provisioned, the three VMs, also called ‘Machines’ by Vagrant, should appear in Oracle VM VirtualBox Manager.
The name of the VMs, referenced in Vagrant commands, is the parent node name in the JSON configuration file (node_name), for example, 'vagrant ssh theforeman.example.com'.
Bootstrapping Foreman
As part of the Vagrant provisioning process (the 'vagrant up' command), a bootstrap script is executed on the VMs. This script will do almost all of the installation and configuration work. Below is the script for bootstrapping the Foreman VM.
#!/bin/sh

# Run on VM to bootstrap Foreman server
# Gary A. Stafford - 01/15/2015

if ps aux | grep "/usr/share/foreman" | grep -v grep 2> /dev/null
then
    echo "Foreman appears to all already be installed. Exiting..."
else
    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.35.5 theforeman.example.com theforeman" | sudo tee --append /etc/hosts 2> /dev/null

    # Update system first
    sudo yum update -y

    # Install Foreman for CentOS 6
    sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm && \
    sudo yum -y install epel-release http://yum.theforeman.org/releases/1.7/el6/x86_64/foreman-release.rpm && \
    sudo yum -y install foreman-installer && \
    sudo foreman-installer

    # First run the Puppet agent on the Foreman host which will send the first Puppet report to Foreman,
    # automatically creating the host in Foreman's database
    sudo puppet agent --test --waitforcert=60

    # Install some optional puppet modules on Foreman server to get started...
    sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-ntp
    sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-git
    sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-docker
fi
Bootstrapping Puppet Agent Nodes
Below is the script for bootstrapping the Puppet Agent nodes. The agent node bootstrap script is executed as part of the Vagrant provisioning process.
#!/bin/sh

# Run on VM to bootstrap Puppet Agent nodes
# Gary A. Stafford - 01/15/2015

if ps aux | grep "puppet agent" | grep -v grep 2> /dev/null
then
    echo "Puppet Agent is already installed. Moving on..."
else
    # Update system first
    sudo yum update -y

    # Install Puppet for CentOS 6
    sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm && \
    sudo yum -y install puppet

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.35.5 theforeman.example.com theforeman" | sudo tee --append /etc/hosts 2> /dev/null

    # Add agent section to /etc/puppet/puppet.conf (sets run interval to 120 seconds)
    echo "" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null && \
    echo " server = theforeman.example.com" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null && \
    echo " runinterval = 120" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null

    sudo service puppet stop
    sudo service puppet start

    sudo puppet resource service puppet ensure=running enable=true
    sudo puppet agent --enable
fi
Now that Foreman is running, use the command 'vagrant ssh agent01.example.com' to ssh into the first Puppet Agent node. Then run the command below.
sudo puppet agent --test --waitforcert=60
The command above manually starts Puppet's Certificate Signing Request (CSR) process, generating the certificates and security credentials (private and public keys) issued by Puppet's built-in certificate authority (CA). Each Puppet Agent node must have its certificate signed by Foreman first. According to Puppet's website, "Before puppet agent nodes can retrieve their configuration catalogs, they need a signed certificate from the local Puppet certificate authority (CA). When using Puppet's built-in CA (that is, not using an external CA), agents will submit a certificate signing request (CSR) to the CA Puppet Master (Foreman) and will retrieve a signed certificate once one is available."
Open the Foreman browser-based interface, running at https://theforeman.example.com. Proceed to the ‘Infrastructure’ -> ‘Smart Proxies’ tab. Sign the certificate(s) from the agent nodes (shown below). The agent node will wait for the Foreman to sign the certificate, before continuing with the initial configuration.
Once the certificate signing process is complete, each host retrieves its client configuration from Foreman and applies it.
That's it: you should now have one host running Foreman and two Puppet Agent nodes.
Testing Foreman
To test Foreman, import the classes from the Puppet modules installed with the Foreman bootstrap script.
Next, apply ntp, git, and Docker classes to both agent nodes (aka, Foreman ‘hosts’), as well as the Foreman node, itself.
Every two minutes, the two agent node hosts should fetch their latest configuration from Foreman and apply it. In a few minutes, check the times reported in the 'Last report' column on the 'All Hosts' tab. If the times are two minutes or less, Foreman and Puppet Agent are working. Note we changed the runinterval to 120 seconds ('120s') in the bootstrap script to speed up the Puppet Agent updates for the sake of the demo. The normal default interval is 30 minutes. I recommend changing the agent nodes' runinterval back to 30 minutes ('30m') on the hosts once everything is working, to save unnecessary use of resources.
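One way to make that change from the command line, rather than editing the file by hand, is shown below; a minimal sketch, assuming the puppet.conf entry added by the bootstrap script above.

# on each agent node, change the run interval added by the bootstrap script back to 30 minutes
sudo sed -i 's/runinterval = 120/runinterval = 30m/' /etc/puppet/puppet.conf

# restart the agent so the new interval takes effect
sudo service puppet restart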
Finally, to verify that the configuration was successfully applied to the hosts, check if ntp, git, and Docker are now running on the hosts.
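A few quick checks on an agent node should confirm the modules did their work; a minimal sketch, and the exact service names (e.g., ntpd on CentOS) may differ slightly by module version.

# from the project directory, open a shell on an agent node
vagrant ssh agent01.example.com

# then, inside the node, confirm ntp, git, and Docker
sudo service ntpd status
git --version
sudo service docker status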
Helpful Links
All the source code for this project is on GitHub.
Foreman:
http://theforeman.org
Atlas – Discover Vagrant Boxes:
https://atlas.hashicorp.com/boxes/search
Learning Puppet – Basic Agent/Master Puppet
https://docs.puppetlabs.com/learning/agent_master_basic.html
Puppet Glossary (of terms):
https://docs.puppetlabs.com/references/glossary.html
Installing Puppet Master and Agents on Multiple VM Using Vagrant and VirtualBox
Posted by Gary A. Stafford in Bash Scripting, Build Automation, DevOps, Enterprise Software Development, Software Development on December 14, 2014
Automatically provision multiple VMs with Vagrant and VirtualBox. Automatically install, configure, and test Puppet Master and Puppet Agents on those VMs.
Introduction
Note this post and accompanying source code were updated on 12/16/2014 to v0.2.1. It contains several changes to improve and simplify the install process.
Puppet Labs' Open Source Puppet Agent/Master architecture is an effective solution to manage infrastructure and system configuration. However, for the average System Engineer or Software Developer, installing and configuring Puppet Master and Puppet Agent can be challenging. If the installation doesn't work properly, the engineer is stuck troubleshooting, or trying to remove and re-install Puppet.
A better solution is to automate the installation of Puppet Master and Puppet Agent on virtual machines (VMs). Automating the installation process guarantees accuracy and consistency. Installing Puppet on VMs means the VMs can be snapshotted, cloned, or simply destroyed and recreated, if needed.
In this post, we will use Vagrant and VirtualBox to create three VMs. The VMs will be built from an Ubuntu 14.04.1 LTS (Trusty Tahr) Vagrant Box, previously on Vagrant Cloud, now on Atlas. We will use a single JSON-format configuration file to build all three VMs automatically. As part of the Vagrant provisioning process, we will run a bootstrap shell script to install Puppet Master on the first VM (the Puppet Master server) and Puppet Agent on the two remaining VMs (agent nodes).
Lastly, to test our Puppet installations, we will use Puppet to install some basic Puppet modules, including ntp and git on the server, and ntp, git, Docker and Fig, on the agent nodes.
All the source code for this project is on GitHub.
Vagrant
To begin the process, we will use the JSON-format configuration file to create the three VMs, using Vagrant and VirtualBox.
{ "nodes": { "puppet.example.com": { ":ip": "192.168.32.5", "ports": [], ":memory": 1024, ":bootstrap": "bootstrap-master.sh" }, "node01.example.com": { ":ip": "192.168.32.10", "ports": [], ":memory": 1024, ":bootstrap": "bootstrap-node.sh" }, "node02.example.com": { ":ip": "192.168.32.20", "ports": [], ":memory": 1024, ":bootstrap": "bootstrap-node.sh" } } }
The Vagrantfile uses the JSON-format configuration file to provision the three VMs, using a single 'vagrant up' command. That's it: less than 30 lines of actual code in the Vagrantfile to create as many VMs as we need. For this post's example, we will not need to add any port mappings, which can be done from the JSON configuration file (see the README.md for more directions). The Vagrant Box we are using already has the correct ports opened.
If you have not previously used the Ubuntu Vagrant Box, it will take a few minutes the first time for Vagrant to download it to the local Vagrant Box repository.
# vi: set ft=ruby :

# Builds Puppet Master and multiple Puppet Agent Nodes using JSON config file
# Author: Gary A. Stafford

# read vm and chef configurations from JSON files
nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node

    config.vm.define node_name do |config|
      # configures all forwarding ports in JSON array
      ports = node_values['ports']
      ports.each do |port|
        config.vm.network :forwarded_port,
          host:  port[':host'],
          guest: port[':guest'],
          id:    port[':id']
      end

      config.vm.hostname = node_name
      config.vm.network :private_network, ip: node_values[':ip']

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values[':memory']]
        vb.customize ["modifyvm", :id, "--name", node_name]
      end

      config.vm.provision :shell, :path => node_values[':bootstrap']
    end
  end
end
Once provisioned, the three VMs, also referred to as ‘Machines’ by Vagrant, should appear, as shown below, in Oracle VM VirtualBox Manager.
The name of the VMs, referenced in Vagrant commands, is the parent node name in the JSON configuration file (node_name), for example, 'vagrant ssh puppet.example.com'.
Bootstrapping Puppet Master Server
As part of the Vagrant provisioning process, a bootstrap script is executed on each of the VMs (script shown below). This script will do 98% of the required work for us. There is one for the Puppet Master server VM, and one for each agent node.
#!/bin/sh

# Run on VM to bootstrap Puppet Master server

if ps aux | grep "puppet master" | grep -v grep 2> /dev/null
then
    echo "Puppet Master is already installed. Exiting..."
else
    # Install Puppet Master
    wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb && \
    sudo dpkg -i puppetlabs-release-trusty.deb && \
    sudo apt-get update -yq && sudo apt-get upgrade -yq && \
    sudo apt-get install -yq puppetmaster

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5 puppet.example.com puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10 node01.example.com node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20 node02.example.com node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add optional alternate DNS names to /etc/puppet/puppet.conf
    sudo sed -i 's/.*\[main\].*/&\ndns_alt_names = puppet,puppet.example.com/' /etc/puppet/puppet.conf

    # Install some initial puppet modules on Puppet Master server
    sudo puppet module install puppetlabs-ntp
    sudo puppet module install garethr-docker
    sudo puppet module install puppetlabs-git
    sudo puppet module install puppetlabs-vcsrepo
    sudo puppet module install garystafford-fig

    # symlink manifest from Vagrant synced folder location
    ln -s /vagrant/site.pp /etc/puppet/manifests/site.pp
fi
There are a few last commands we need to run ourselves, from within the VMs. Once the provisioning process is complete, 'vagrant ssh puppet.example.com' into the newly provisioned Puppet Master server. Below are the commands we need to run within the 'puppet.example.com' VM.
sudo service puppetmaster status # test that puppet master was installed
sudo service puppetmaster stop
sudo puppet master --verbose --no-daemonize
# Ctrl+C to kill puppet master
sudo service puppetmaster start
sudo puppet cert list --all # check for 'puppet' cert
According to Puppet’s website, ‘these steps will create the CA certificate and the puppet master certificate, with the appropriate DNS names included.‘
Bootstrapping Puppet Agent Nodes
Now that the Puppet Master server is running, open a second terminal tab ('Shift+Ctrl+T'). Use the command 'vagrant ssh node01.example.com' to ssh into the new Puppet Agent node. The agent node bootstrap script should have already executed as part of the Vagrant provisioning process.
#!/bin/sh

# Run on VM to bootstrap Puppet Agent nodes
# http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet/

if ps aux | grep "puppet agent" | grep -v grep 2> /dev/null
then
    echo "Puppet Agent is already installed. Moving on..."
else
    sudo apt-get install -yq puppet
fi

if cat /etc/crontab | grep puppet 2> /dev/null
then
    echo "Puppet Agent is already configured. Exiting..."
else
    sudo apt-get update -yq && sudo apt-get upgrade -yq

    sudo puppet resource cron puppet-agent ensure=present user=root minute=30 \
        command='/usr/bin/puppet agent --onetime --no-daemonize --splay'

    sudo puppet resource service puppet ensure=running enable=true

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5 puppet.example.com puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10 node01.example.com node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20 node02.example.com node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add agent section to /etc/puppet/puppet.conf
    echo "" && echo "[agent]\nserver=puppet" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null

    sudo puppet agent --enable
fi
Run the two commands below within both the 'node01.example.com' and 'node02.example.com' agent nodes.
sudo service puppet status # test that agent was installed
sudo puppet agent --test --waitforcert=60 # initiate certificate signing request (CSR)
The second command above manually starts Puppet's Certificate Signing Request (CSR) process, generating the certificates and security credentials (private and public keys) issued by Puppet's built-in certificate authority (CA). Each Puppet Agent node must have its certificate signed by the Puppet Master first. According to Puppet's website, "Before puppet agent nodes can retrieve their configuration catalogs, they need a signed certificate from the local Puppet certificate authority (CA). When using Puppet's built-in CA (that is, not using an external CA), agents will submit a certificate signing request (CSR) to the CA Puppet Master and will retrieve a signed certificate once one is available."
Back on the Puppet Master Server, run the following commands to sign the certificate(s) from the agent node(s). You may sign each node’s certificate individually, or wait and sign them all at once. Note the agent node(s) will wait for the Puppet Master to sign the certificate, before continuing with the Puppet Agent configuration run.
sudo puppet cert list # should see 'node01.example.com' cert waiting for signature
sudo puppet cert sign --all # sign the agent node certs
sudo puppet cert list --all # check for signed certs
Once the certificate signing process is complete, the Puppet Agent retrieves the client configuration from the Puppet Master and applies it to the local agent node. The Puppet Agent will execute all applicable steps in the site.pp manifest on the Puppet Master server, designated for that specific Puppet Agent node (i.e., 'node node02.example.com {...}'). Below is the main site.pp manifest on the Puppet Master server, applied by Puppet Agent on the agent nodes.
node default {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git
}

node 'node01.example.com', 'node02.example.com' {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git, docker, fig
}
That’s it! You should now have one server VM running Puppet Master, and two agent node VMs running Puppet Agent. Both agent nodes should have successfully been registered with Puppet Master, and configured themselves based on the Puppet Master’s main manifest. Agent node configuration includes installing ntp, git, Fig, and Docker.
Helpful Links
All the source code for this project is on GitHub.
Puppet Glossary (of terms):
https://docs.puppetlabs.com/references/glossary.html
Puppet Labs Open Source Automation Tools:
http://puppetlabs.com/misc/download-options
Puppet Master Overview:
http://ci.openstack.org/puppet.html
Install Puppet on Ubuntu:
https://docs.puppetlabs.com/guides/install_puppet/install_debian_ubuntu.html
Installing Puppet Master:
http://andyhan.linuxdict.com/index.php/sys-adm/item/273-puppet-371-on-centos-65-quick-start-i
Regenerating Node Certificates:
https://docs.puppetlabs.com/puppet/latest/reference/ssl_regenerate_certificates.html
Automating Development Environments with Vagrant and Puppet:
http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet
Create Multi-VM Environments Using Vagrant, Chef, and JSON
Posted by Gary A. Stafford in Build Automation, DevOps, Enterprise Software Development, Software Development on February 27, 2014
Create and manage ‘multi-machine’ environments with Vagrant, using JSON configuration files. Allow increased portability across hosts, environments, and organizations.
Introduction
As their website says, Vagrant has made it very easy to ‘create and configure lightweight, reproducible, and portable development environments.’ Based on Ruby, the elegantly simple open-source programming language, Vagrant requires a minimal learning curve to get up and running.
In this post, we will create what Vagrant refers to as a ‘multi-machine’ environment. We will provision three virtual machines (VMs). The VMs will mirror a typical three-tier architected environment, with separate web, application, and database servers.
We will move all the VM-specific information from the Vagrantfile to a separate JSON-format configuration file. There are a few advantages to moving the configuration information to a separate file. First, we can configure any number of VMs, while keeping the Vagrantfile exactly the same. Secondly, and more importantly, we can re-use the same Vagrantfile to build different VMs on another host machine.
Although certainly not required, I am also using Chef in this example. More specifically, I am using Hosted Chef to further configure the VMs. Like the VM-specific information above, I have also moved the Chef-specific information to a separate JSON configuration file. We can now use the same Vagrantfile within another Chef Environment, or even within another Chef Organization, using alternate configuration files. If you are not a Chef user, you can disregard that part of the configuration code. Alternately, you can swap the Chef configuration code for Puppet, if that is your configuration automation tool of choice.
The only items we will not remove from the Vagrantfile are the Vagrant Box and synced folder configurations. These items could also be moved to a separate configuration file, making the Vagrantfile even more generic and portable.
The Code
Below is the VM-specific JSON configuration file, containing all the individual configuration information necessary for Vagrant to build the three VMs: 'apps', 'dbs', and 'web'. Each child 'node' in the parent 'nodes' object contains key/value pairs for VM names, IP addresses, forwarding ports, host names, and memory settings. To add another VM, you would simply add another 'node' object.
{
  "nodes": {
    "apps": {
      ":node": "ApplicationServer-201",
      ":ip": "192.168.33.21",
      ":host": "apps.server-201",
      "ports": [
        {
          ":host": 2201,
          ":guest": 22,
          ":id": "ssh"
        },
        {
          ":host": 7709,
          ":guest": 7709,
          ":id": "wls-listen"
        }
      ],
      ":memory": 2048
    },
    "dbs": {
      ":node": "DatabaseServer-301",
      ":ip": "192.168.33.31",
      ":host": "dbs.server-301",
      "ports": [
        {
          ":host": 2202,
          ":guest": 22,
          ":id": "ssh"
        },
        {
          ":host": 1529,
          ":guest": 1529,
          ":id": "xe-db"
        },
        {
          ":host": 8380,
          ":guest": 8380,
          ":id": "xe-listen"
        }
      ],
      ":memory": 2048
    },
    "web": {
      ":node": "WebServer-401",
      ":ip": "192.168.33.41",
      ":host": "web.server-401",
      "ports": [
        {
          ":host": 2203,
          ":guest": 22,
          ":id": "ssh"
        },
        {
          ":host": 4756,
          ":guest": 4756,
          ":id": "apache"
        }
      ],
      ":memory": 1024
    }
  }
}
Next is the Chef-specific JSON configuration file, containing Chef configuration information common to all the VMs.
{
  "chef": {
    ":chef_server_url": "https://api.opscode.com/organizations/my-organization",
    ":client_key_path": "/etc/chef/my-client.pem",
    ":environment": "my-environment",
    ":provisioning_path": "/etc/chef",
    ":validation_client_name": "my-client",
    ":validation_key_path": "~/.chef/my-client.pem"
  }
}
Lastly, the Vagrantfile, which loads both configuration files. The Vagrantfile instructs Vagrant to loop through all nodes in the nodes.json file, provisioning VMs for each node. Vagrant then uses the chef.json file to further configure the VMs.
The environment and node configuration items in the chef.json reference an actual Chef Environment and Chef Nodes. They are both part of a Chef Organization, which is configured within a Hosted Chef account.
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Multi-VM Configuration: Builds Web, Application, and Database Servers using JSON config file
# Configures VMs based on Hosted Chef Server defined Environment and Node (vs. Roles)
# Author: Gary A. Stafford

# read vm and chef configurations from JSON files
nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']
chef_config = (JSON.parse(File.read("chef.json")))['chef']

VAGRANTFILE_API_VERSION = "2"

Vagrant.require_plugin "vagrant-omnibus"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "vagrant-oracle-vm-saucy64"
  config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-amd64-vagrant-disk1.box"

  config.omnibus.chef_version = :latest

  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node

    config.vm.define node_name do |config|
      # configures all forwarding ports in JSON array
      ports = node_values['ports']
      ports.each do |port|
        config.vm.network :forwarded_port,
          host:  port[':host'],
          guest: port[':guest'],
          id:    port[':id']
      end

      config.vm.hostname = node_values[':node']
      config.vm.network :private_network, ip: node_values[':ip']

      # syncs local repository of large third-party installer files (quicker than downloading each time)
      config.vm.synced_folder "#{ENV['HOME']}/Documents/git_repos/chef-artifacts", "/vagrant"

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values[':memory']]
        vb.customize ["modifyvm", :id, "--name", node_values[':node']]
      end

      # chef configuration section
      config.vm.provision :chef_client do |chef|
        chef.environment = chef_config[':environment']
        chef.provisioning_path = chef_config[':provisioning_path']
        chef.chef_server_url = chef_config[':chef_server_url']
        chef.validation_key_path = chef_config[':validation_key_path']
        chef.node_name = node_values[':node']
        chef.validation_client_name = chef_config[':validation_client_name']
        chef.client_key_path = chef_config[':client_key_path']
      end
    end
  end
end
Each VM has a varying number of ports it needs to configure and forward. To accomplish this, the Vagrantfile not only loops through each node, it also loops through each port configuration object it finds within the node object. Shown below is the Database Server VM within VirtualBox, containing three forwarding ports.
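If you want to confirm the forwarded ports without opening the VirtualBox GUI, something like the following should list them; a minimal sketch, assuming the 'DatabaseServer-301' VM name from the nodes.json file above.

# list the NAT port-forwarding rules for the database server VM
VBoxManage showvminfo "DatabaseServer-301" | grep -i "Rule"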
In addition to the gists above, this repository on GitHub contains a complete copy of all the code used in the post.
The Results
Running the ‘vagrant up’ command will provision all three individually configured VMs. Once created and running in VirtualBox, Chef further configures the VMs with the necessary settings and applications specific to each server’s purposes. You can just as easily create 10, 100, or 1,000 VMs using this same process.
Configure Git for Windows and Vagrant on a Corporate Network
Posted by Gary A. Stafford in Bash Scripting, Build Automation, DevOps, Enterprise Software Development, Software Development on December 31, 2013
Modified bashrc configuration for Git for Windows to work with both Git and Vagrant.

Introduction
In my last post, Easy Configuration of Git for Windows on a Corporate Network, I demonstrated how to configure Git for Windows to work when switching between working on-site, working off-site through a VPN, and working totally off the corporate network. Dealing with a proxy server was the main concern. The solution worked fine for Git. However, after further testing with Vagrant using the Git Bash interactive shell, I ran into a snag. Unlike Git, Vagrant did not seem to like the standard URI, which contained ‘domain\username’:
http(s)://domain\username:password@proxy_server:proxy_port
In a corporate environment with LDAP, qualifying the username with a domain is normal, like ‘domain\username’. But, when trying to install a Vagrant plug-in with a command such as ‘vagrant plugin install vagrant-omnibus’, I received an error similar to the following (proxy details obscured):
$ vagrant plugin install vagrant-omnibus
Installing the 'vagrant-omnibus' plugin. This can take a few minutes...
c:/HashiCorp/Vagrant/embedded/lib/ruby/2.0.0/uri/common.rb:176: in `split':
bad URI(is not URI?): http://domain\username:password@proxy:port
(URI::InvalidURIError)...
Solution
After some research, it seems Vagrant's 'common.rb' URI function does not like the 'domain\username' format of the original URI. To fix this problem, I modified the original 'proxy_on' function, removing the DOMAIN environment variable. I now suggest using the fully qualified domain name (FQDN) of the proxy server. So, instead of 'my_proxy', it would be 'my_proxy.domain.tld'. The acronym 'tld' stands for top-level domain. Although .com is the most common, there are over 300 top-level domains, so I don't want to assume yours is '.com'. The new proxy URI is as follows:
http(s)://username:password@proxy_server.domain.tld:proxy_port
Although all environments have different characteristics, I have found this change to work, with both Git and Vagrant, in my own environment. After making this change, I was able to install plug-ins and do other similar functions with Vagrant, using the Git Bash interactive shell.
$ vagrant plugin install vagrant-omnibus
Installing the 'vagrant-omnibus' plugin. This can take a few minutes...
Installed the plugin 'vagrant-omnibus (1.2.1)'!
Change to Environment Variables
One change you will notice compared to my last post, and unrelated to the Vagrant domain issue, is a change to the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variables. In the last post, I created and exported the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variables within the 'proxy_on' function. After further consideration, I permanently moved them to Environment Variables -> User variables. I felt this was a better solution, especially for my password. Instead of my user account's password residing in the .bashrc file in plain text, it's now in my user's environment variables. Although still not ideal, I felt my password was slightly more secure. Also, since my proxy server address rarely changes when I am at work or on the VPN, I felt moving these was easier and cleaner than placing them in the .bashrc file.
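On Windows, those user-level variables can be set once from a Git Bash or Command Prompt session using setx; a minimal sketch, and the values below are placeholders for your own proxy name, port, and password.

# set persistent user-level environment variables (new shells will pick them up)
setx PROXY_SERVER "my_proxy"
setx PROXY_PORT "8080"
setx PASSWORD "your_password"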
The New Code
Verbose version:
# configure proxy for git while on corporate network
function proxy_on(){
  # assumes $USERDOMAIN, $USERNAME, $USERDNSDOMAIN
  # are existing Windows system-level environment variables
  # assumes $PASSWORD, $PROXY_SERVER, $PROXY_PORT
  # are existing Windows current user-level environment variables (your user)

  # environment variables are UPPERCASE even in git bash
  export HTTP_PROXY="http://$USERNAME:$PASSWORD@$PROXY_SERVER.$USERDNSDOMAIN:$PROXY_PORT"
  export HTTPS_PROXY=$HTTP_PROXY
  export FTP_PROXY=$HTTP_PROXY
  export SOCKS_PROXY=$HTTP_PROXY
  export NO_PROXY="localhost,127.0.0.1,$USERDNSDOMAIN"

  # optional for debugging
  export GIT_CURL_VERBOSE=1

  # optional Self Signed SSL certs and
  # internal CA certificate in an corporate environment
  export GIT_SSL_NO_VERIFY=1

  env | grep -e _PROXY -e GIT_ | sort
  echo -e "\nProxy-related environment variables set."
}

# remove proxy settings when off corporate network
function proxy_off(){
  variables=( \
    "HTTP_PROXY" "HTTPS_PROXY" "FTP_PROXY" "SOCKS_PROXY" \
    "NO_PROXY" "GIT_CURL_VERBOSE" "GIT_SSL_NO_VERIFY" \
  )

  for i in "${variables[@]}"
  do
    unset $i
  done

  env | grep -e _PROXY -e GIT_ | sort
  echo -e "\nProxy-related environment variables removed."
}

# if you are always behind a proxy uncomment below
#proxy_on

# increase verbosity of Vagrant output
export VAGRANT_LOG=INFO
Compact version:
function proxy_on(){
  export HTTP_PROXY="http://$USERNAME:$PASSWORD@$PROXY_SERVER.$USERDNSDOMAIN:$PROXY_PORT"
  export HTTPS_PROXY="$HTTP_PROXY" FTP_PROXY="$HTTP_PROXY" ALL_PROXY="$HTTP_PROXY" \
    NO_PROXY="localhost,127.0.0.1,*.$USERDNSDOMAIN" \
    GIT_CURL_VERBOSE=1 GIT_SSL_NO_VERIFY=1
  echo -e "\nProxy-related environment variables set."
}

function proxy_off(){
  variables=( "HTTP_PROXY" "HTTPS_PROXY" "FTP_PROXY" "ALL_PROXY" \
    "NO_PROXY" "GIT_CURL_VERBOSE" "GIT_SSL_NO_VERIFY" )
  for i in "${variables[@]}"; do unset $i; done
  echo -e "\nProxy-related environment variables removed."
}

# if you are always behind a proxy uncomment below
#proxy_on

# increase verbosity of Vagrant output
export VAGRANT_LOG=INFO
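Usage is then just a matter of toggling the functions from the Git Bash prompt before and after working on the corporate network; a quick sketch of a typical session follows.

# while on the corporate network
proxy_on
vagrant plugin install vagrant-omnibus
git pull origin master

# once off the corporate network
proxy_off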
Dynamically Allocated Storage Issues with Ubuntu’s Cloud Images
Posted by Gary A. Stafford in Build Automation, Enterprise Software Development on December 22, 2013

Background
According to Canonical, 'Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud-platforms such as Amazon EC2, Openstack, Windows and LXC'. Ubuntu also offers disk images, or 'boxes', built specifically for Vagrant and VirtualBox. Boxes, according to Vagrant, 'are the skeleton from which Vagrant machines are constructed. They are portable files which can be used by others on any platform that runs Vagrant to bring up a working environment'. Ubuntu's images are very popular with Vagrant users for building their VMs.
Assuming you have VirtualBox and Vagrant installed on your Windows, Mac OS X, or Linux system, with a few simple commands, 'vagrant box add…', 'vagrant init…', and 'vagrant up', you can provision a VM from one of these boxes.
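As a concrete example, provisioning a VM from one of these boxes looks roughly like the following; a minimal sketch, where the box name is illustrative and the URL is the Saucy Salamander cloud image box referenced later in this post.

# add the cloud image box to the local box repository (box name is illustrative)
vagrant box add saucy64 http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-amd64-vagrant-disk1.box

# generate a Vagrantfile for the box, then bring the VM up
vagrant init saucy64
vagrant up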
Dynamically Allocated Storage
The Ubuntu Cloud Images (boxes) are Virtual Machine Disk (VMDK) format files. These VMDK files are configured for dynamically allocated storage, with a virtual size of 40 GB. That means the VMDK file should grow to an actual size of up to 40 GB as files are added. According to VirtualBox, the VM 'will initially be very small and not occupy any space for unused virtual disk sectors, but will grow every time a disk sector is written to for the first time, until the drive reaches the maximum capacity chosen when the drive was created'.
To illustrate dynamically allocated storage, below are three freshly provisioned VirtualBox virtual machines (VM), on three different hosts, all with different operating systems. One VM is hosted on Windows 7 Enterprise, another on Ubuntu 13.10 Desktop Edition, and the last on Mac OS X 10.6.8. The VMs were all created with Vagrant from the official Ubuntu Server 13.10 (Saucy Salamander) cloud images. The Windows and Ubuntu hosts used the 64-bit version. The Mac OS X host used the 32-bit version. According to VirtualBox Manager, on all three host platforms, the virtual size of the VMs is 40 GB and the actual size is about 1 GB.
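The same virtual-versus-actual size comparison can be made from the command line; a minimal sketch, and the path to the VMDK file will depend on your VirtualBox machine folder and VM name.

# report capacity (virtual size) and current size on disk for a VM's disk image
VBoxManage showhdinfo "~/VirtualBox VMs/my-vm/box-disk1.vmdk"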
So What’s the Problem?
After a significant amount of troubleshooting Chef recipe problems on two different Ubuntu-hosted VMs, the issue with the cloud images became painfully clear. Other than a single (seemingly charmed) Windows host, none of the VMs I tested on Windows, Ubuntu, and Mac OS X hosts would expand beyond 4 GB. Below are the file system disk space usage reports from four hosts' VMs. All four were created with the most current version of Vagrant (1.4.1), and managed with the most current version of VirtualBox (4.3.6.x).
Windows-hosted 64-bit Cloud Image VM #1:
vagrant@vagrant-ubuntu-saucy-64:/tmp$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 40G 1.1G 37G 3% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 12K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 233G 196G 38G 85% /vagrant
Windows-hosted 32-bit Cloud Image VM #2:
vagrant@vagrant-ubuntu-saucy-32:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1012M 2.8G 27% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 245M 8.0K 245M 1% /dev
tmpfs tmpfs 50M 336K 50M 1% /run
none tmpfs 5.0M 4.0K 5.0M 1% /run/lock
none tmpfs 248M 0 248M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 932G 209G 724G 23% /vagrant
Ubuntu-hosted 64-bit Cloud Image VM:
vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1.1G 2.7G 28% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 8.0K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 74G 65G 9.1G 88% /vagrant
Mac OS X-hosted 32-bit Cloud Image VM:
vagrant@vagrant-ubuntu-saucy-32:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1012M 2.8G 27% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 245M 12K 245M 1% /dev
tmpfs tmpfs 50M 336K 50M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 248M 0 248M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 149G 71G 79G 48% /vagrant
On the first Windows-hosted VM (the only host that actually worked), the virtual SCSI disk device (sda1), formatted ‘ext4‘, had a capacity of 40 GB. But, on the other three hosts, the same virtual device only had a capacity of 4 GB. I tested the various 32- and 64-bit Ubuntu Server 12.10 (Quantal Quetzal), 13.04 (Raring Ringtail), and 13.10 (Saucy Salamander) cloud images. They all exhibited the same issue. However, the Ubuntu 12.04.3 LTS (Precise Pangolin) worked fine on all three host OS systems.
To prove the issue was specifically with Ubuntu’s cloud images, I also tested boxes from Vagrant’s own repository, as well as other third-party providers. They all worked as expected, with no storage discrepancies. This was suggested in the only post I found on this issue, from StackExchange.
To confirm the Ubuntu-hosted VM will not expand beyond 4 GB, I also created a few multi-gigabyte files on each VM, totaling 4 GB. The VM's virtual drive would not expand beyond the 4 GB limit to accommodate the new files, as demonstrated below on an Ubuntu-hosted VM:
vagrant@vagrant-ubuntu-saucy-64:~$ dd if=/dev/zero of=/tmp/big_file2.bin bs=1M count=2000
dd: writing '/tmp/big_file2.bin': No space left on device
742+0 records in
741+0 records out
777560064 bytes (778 MB) copied, 1.81098 s, 429 MB/s

vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 3.7G 196K 100% /
The exact cause eludes me, but I tend to think the cloud images are the issue. I know they are capable of working, since the Ubuntu 12.04.3 cloud images expand to 40 GB, but the three most recent releases are limited to 4 GB. Whatever the cause, it’s a significant problem. Imagine you’ve provisioned a 100 or a 1,000 server nodes on your network from any of these cloud images, expecting them to grow to 40 GB, but really only having 10% of that potential. Worse, they have live production data on them, and suddenly run out of space.
Test Results
Below are the complete shell sessions from three hosts.
Windows-hosted 64-bit Cloud Image VM #1:
gstafford@windows-host: ~/Documents/GitHub
$ vagrant --version
Vagrant 1.4.1

gstafford@windows-host: /c/Program Files/Oracle/VirtualBox
$ VBoxManage.exe --version
4.3.6r91406

gstafford@windows-host: ~/Documents/GitHub
$ mkdir cloudimage-test-win

gstafford@windows-host: ~/Documents/GitHub
$ cd cloudimage-test-win

gstafford@windows-host: ~/Documents/GitHub/cloudimage-test-win
$ vagrant init
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

gstafford@windows-host: ~/Documents/GitHub/cloudimage-test-win
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'vagrant-vm-saucy-server'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Fixed port collision for 22 => 2222. Now on port 2200.
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2200 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
DL is deprecated, please use Fiddle
[default] Machine booted and ready!
[default] The guest additions on this VM do not match the installed version of
VirtualBox! In most cases this is fine, but in rare cases it can
cause things such as shared folders to not work properly. If you see
shared folder errors, please make sure the guest additions within the
virtual machine match the version of VirtualBox you have installed on
your host and reload your VM.
Guest Additions Version: 4.2.16
VirtualBox Version: 4.3
[default] Mounting shared folders...
[default] -- /vagrant

gstafford@windows-host: ~/Documents/GitHub/cloudimage-test-win
$ vagrant ssh
Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-14-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information disabled due to load higher than 1.0
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

vagrant@vagrant-ubuntu-saucy-64:/tmp$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 40G 1.1G 37G 3% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 12K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 233G 196G 38G 85% /vagrant

vagrant@vagrant-ubuntu-saucy-64:/tmp$ dd if=/dev/zero of=/tmp/big_file1.bin bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 5.11716 s, 410 MB/s

vagrant@vagrant-ubuntu-saucy-64:/tmp$ dd if=/dev/zero of=/tmp/big_file2.bin bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 7.78449 s, 269 MB/s

vagrant@vagrant-ubuntu-saucy-64:/tmp$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 40G 5.0G 33G 14% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 12K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 233G 196G 38G 85% /vagrant

vagrant@vagrant-ubuntu-saucy-64:/tmp$
Ubuntu-hosted 64-bit Cloud Image VM:
gstafford@ubuntu-host:~/GitHub/cloudimage-test-ubt$ vagrant --version
Vagrant 1.4.1
gstafford@ubuntu-host:/usr/bin$ vboxmanage --version
4.3.6r91406
gstafford@ubuntu-host:~/GitHub$ mkdir cloudimage-test-ubt
gstafford@ubuntu-host:~/GitHub$ cd cloudimage-test-ubt/
gstafford@ubuntu-host:~/GitHub/cloudimage-test-ubt$ vagrant init
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
gstafford@ubuntu-host:~/GitHub/cloudimage-test-ubt$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Box 'vagrant-vm-saucy-server' was not found. Fetching box from specified URL for
the provider 'virtualbox'. Note that if the URL does not have
a box for this provider, you should interrupt Vagrant now and add
the box yourself. Otherwise Vagrant will attempt to download the
full box prior to discovering this error.
Downloading box from URL: file:/home/gstafford/GitHub/cloudimage-test-ubt/saucy-server-cloudimg-amd64-vagrant-disk1.box
Extracting box...te: 23.8M/s, Estimated time remaining: 0:00:01))
Successfully added box 'vagrant-vm-saucy-server' with provider 'virtualbox'!
[default] Importing base box 'vagrant-vm-saucy-server'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Fixed port collision for 22 => 2222. Now on port 2200.
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2200 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
[default] Machine booted and ready!
[default] The guest additions on this VM do not match the installed version of
VirtualBox! In most cases this is fine, but in rare cases it can
cause things such as shared folders to not work properly. If you see
shared folder errors, please make sure the guest additions within the
virtual machine match the version of VirtualBox you have installed on
your host and reload your VM.
Guest Additions Version: 4.2.16
VirtualBox Version: 4.3
[default] Mounting shared folders...
[default] -- /vagrant
gstafford@ubuntu-host:~/GitHub/cloudimage-test-ubt$ vagrant ssh
Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-15-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information disabled due to load higher than 1.0
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
_____________________________________________________________________
WARNING! Your environment specifies an invalid locale.
This can affect your user experience significantly, including the
ability to manage packages. You may install the locales by running:
sudo apt-get install language-pack-en
or
sudo locale-gen en_US.UTF-8
To see all available language packs, run:
apt-cache search "^language-pack-[a-z][a-z]$"
To disable this message for all users, run:
sudo touch /var/lib/cloud/instance/locale-check.skip
_____________________________________________________________________
vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1.1G 2.7G 28% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 8.0K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 74G 65G 9.1G 88% /vagrant
vagrant@vagrant-ubuntu-saucy-64:~$ dd if=/dev/zero of=/tmp/big_file1.bin bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 4.72951 s, 443 MB/s
vagrant@vagrant-ubuntu-saucy-64:~$ dd if=/dev/zero of=/tmp/big_file2.bin bs=1M count=2000
dd: writing '/tmp/big_file2.bin': No space left on device
742+0 records in
741+0 records out
777560064 bytes (778 MB) copied, 1.81098 s, 429 MB/s
vagrant@vagrant-ubuntu-saucy-64:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 3.7G 196K 100% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 241M 8.0K 241M 1% /dev
tmpfs tmpfs 50M 336K 49M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 246M 0 246M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 74G 65G 9.1G 88% /vagrant
vagrant@vagrant-ubuntu-saucy-64:~$
Mac OS X-hosted 32-bit Cloud Image VM:
gstafford@mac-development:cloudimage-test-osx $ vagrant box add saucycloud32 http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-i386-vagrant-disk1.box
Downloading box from URL: http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-i386-vagrant-disk1.box
Extracting box...te: 1383k/s, Estimated time remaining: 0:00:01)
Successfully added box 'saucycloud32' with provider 'virtualbox'!
gstafford@mac-development:cloudimage-test-osx $ vagrant init saucycloud32
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
gstafford@mac-development:cloudimage-test-osx $ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'saucycloud32'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
[default] Machine booted and ready!
[default] The guest additions on this VM do not match the installed version of
VirtualBox! In most cases this is fine, but in rare cases it can
prevent things such as shared folders from working properly. If you see
shared folder errors, please make sure the guest additions within the
virtual machine match the version of VirtualBox you have installed on
your host and reload your VM.
Guest Additions Version: 4.2.16
VirtualBox Version: 4.3
[default] Mounting shared folders...
[default] -- /vagrant
gstafford@mac-development:cloudimage-test-osx $ vagrant ssh
Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-15-generic i686)
* Documentation: https://help.ubuntu.com/
System information disabled due to load higher than 1.0
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
vagrant@vagrant-ubuntu-saucy-32:~$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.0G 1012M 2.8G 27% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
udev devtmpfs 245M 12K 245M 1% /dev
tmpfs tmpfs 50M 336K 50M 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 248M 0 248M 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/vagrant vboxsf 149G 71G 79G 48% /vagrant
vagrant@vagrant-ubuntu-saucy-32:~$ exit
gstafford@mac-development:MacOS$ VBoxManage list vms --long hdds
UUID: ee72161f-25c5-4714-ab28-6ee9929500e8
Parent UUID: base
State: locked write
Type: normal (base)
Location: /Users/gstafford/VirtualBox VMs/cloudimage-test-osx_default_1389280075070_65829/box-disk1.vmdk
Storage format: VMDK
Format variant: dynamic default
Capacity: 40960 MBytes
Size on disk: 1031 MBytes
In use by VMs: cloudimage-test-osx_default_1389280075070_65829 (UUID: 51df4527-99af-48be-92cd-ad73110be88c)
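Given that VirtualBox reports the virtual disk’s capacity as 40 GB while the guest only sees 4 GB, one possible workaround, which I have not verified on these images, would be to grow the root partition and filesystem from inside the guest. A rough sketch, assuming the root filesystem is ext4 on /dev/sda1 and the cloud-utils package (which provides growpart) is installed; a reboot may be needed before the kernel recognizes the new partition size:
# grow partition 1 of /dev/sda to fill the remaining disk space (no-op if already full size)
sudo growpart /dev/sda 1
# grow the ext4 filesystem online to fill the enlarged partition
sudo resize2fs /dev/sda1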
Resources
Ubuntu Cloud Images for Vagrant
Fourth Extended Filesystem (ext4)
Similar Issue on StackOverflow
Shell Script to Automate Creation of Swap File on Linux
Posted by Gary A. Stafford in Bash Scripting, DevOps, Enterprise Software Development, Software Development on December 19, 2013
Introduction
Recently, while scripting the installation of Oracle’s WebLogic Server, I ran into an issue with a lack of swap space. I was automating the installation of WebLogic in Silent Mode on a Vagrant VM. The VM was built from an Ubuntu Cloud Image of Ubuntu Server. Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud platforms such as Amazon EC2, OpenStack, Windows Azure, LXC, and Vagrant. The Ubuntu image did not have the minimum 512 MB of swap space required by the WebLogic installer.
Swap
According to Gary Sims on Linux.com, “Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space is the amount of virtual memory available.”
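Before adding swap, it is worth checking what, if any, swap space the image already has configured. Either of the following commands will show that quickly:
# list active swap devices/files and their sizes
swapon -s
# or show memory and swap totals in megabytes
free -m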
Scripts
To create the required swap space, I could create either a swap partition or a swap file. I chose to create a swap file, using a shell script. Actually, there are two scripts. The first script creates a 512 MB swap file as a pre-step in the automated installation of WebLogic. Once the WebLogic installation is complete, the second, optional script may be run to remove the swap file. ArchWiki (wiki.archlinux.org) has an excellent post on swap space, which I referenced to build my first script.
Use a ‘sudo ./create_swap.sh’ command to create the swap file and display the results in the terminal.
#!/bin/sh
# size of swapfile in megabytes
swapsize=512
# does the swap file already exist?
grep -q "swapfile" /etc/fstab
# if not then create it
if [ $? -ne 0 ]; then
  echo 'swapfile not found. Adding swapfile.'
  fallocate -l ${swapsize}M /swapfile
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
  echo '/swapfile none swap defaults 0 0' >> /etc/fstab
else
  echo 'swapfile found. No changes made.'
fi
# output results to terminal
cat /proc/swaps
cat /proc/meminfo | grep Swap
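After the script completes, it is easy to confirm independently that the swap file was created at the requested size and is active, for example:
# confirm the swap file exists at the expected size and is in use
ls -lh /swapfile
grep swapfile /proc/swaps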
If the swap file is no longer required, the second script will remove it. Use a ‘sudo ./remove_swap.sh’ command to remove the swap file and display the results in the terminal. LinuxQuestions.org has a good forum post on removing swap files, which I referenced to build my second script.
#!/bin/sh
# does the swap file exist?
grep -q "swapfile" /etc/fstab
# if it does then remove it
if [ $? -eq 0 ]; then
  echo 'swapfile found. Removing swapfile.'
  sed -i '/swapfile/d' /etc/fstab
  echo "3" > /proc/sys/vm/drop_caches
  swapoff -a
  rm -f /swapfile
else
  echo 'No swapfile found. No changes made.'
fi
# output results to terminal
cat /proc/swaps
cat /proc/meminfo | grep Swap
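In the automated WebLogic installation, the two scripts simply bracket the installer run. A hypothetical wrapper is sketched below; install_weblogic_silent.sh is a placeholder name for the actual silent-mode installation step, not a script from this post:
#!/bin/sh
# create the swap file required by the installer
sudo ./create_swap.sh
# run the silent-mode WebLogic installation (placeholder script name)
sudo ./install_weblogic_silent.sh
# optionally reclaim the space once the installation is complete
sudo ./remove_swap.sh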