Posts Tagged Puppet
Automate the Provisioning and Configuration of HAProxy and an Apache Web Server Cluster Using Foreman
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Continuous Delivery, DevOps on February 21, 2015
Use Vagrant, Foreman, and Puppet to provision and configure HAProxy as a reverse proxy, load-balancer for a cluster of Apache web servers.
Introduction
In this post, we will use several technologies, including Vagrant, Foreman, and Puppet, to provision and configure a basic load-balanced web server environment. In this environment, a single node with HAProxy will act as a reverse proxy and load-balancer for two identical Apache web server nodes. All three nodes will be provisioned and bootstrapped using Vagrant, from a Linux CentOS 6.5 Vagrant Box. Afterwards, Foreman, with Puppet, will be used to install and configure the nodes with HAProxy and Apache, using a series of Puppet modules.
For this post, I will assume you already have running instances of Vagrant with the vagrant-hostmanager plugin, VirtualBox, and Foreman. If you are unfamiliar with Vagrant, the vagrant-hostmanager plugin, VirtualBox, Foreman, or Puppet, review my recent post, Installing Foreman and Puppet Agent on Multiple VMs Using Vagrant and VirtualBox. That post demonstrates how to install and configure Foreman. It also demonstrates how to provision and bootstrap virtual machines using Vagrant and VirtualBox. Basically, we will be repeating many of these same steps in this post, with the addition of HAProxy, Apache, and some custom configuration Puppet modules.
All code for this post is available on GitHub. However, it has been updated as of 8/23/2015. Changes were required to fix compatibility issues with the latest versions of Puppet 4.x and Foreman. Additionally, the version of CentOS on all VMs was updated from 6.6 to 7.1 and the version of Foreman was updated from 1.7 to 1.9.
Steps
Here is a high-level overview of our steps in this post:
- Provision and configure the three CentOS-based virtual machines (‘nodes’) using Vagrant and VirtualBox
- Install the HAProxy and Apache Puppet modules, from Puppet Forge, onto the Foreman server
- Install the custom HAProxy and Apache Puppet configuration modules, from GitHub, onto the Foreman server
- Import the four new modules’ classes into Foreman’s Puppet class library
- Add the three new virtual machines (‘hosts’) to Foreman
- Configure the new hosts in Foreman, assigning the appropriate Puppet classes
- Apply the Foreman Puppet configurations to the new hosts
- Test that HAProxy is working as a reverse proxy and load-balancer for the two Apache web server nodes
In this post, I will use the terms ‘virtual machine’, ‘machine’, ‘node’, ‘agent node’, and ‘host’ interchangeably, based on each software’s own nomenclature.
Provisioning
First, using the process described in the previous post, provision and bootstrap the three new virtual machines. The new machines’ Vagrant configuration is shown below. This should be added to the JSON configuration file. All code for the earlier post is available on GitHub.
{
  "nodes": {
    "haproxy.example.com": {
      ":ip": "192.168.35.101",
      "ports": [],
      ":memory": 512,
      ":bootstrap": "bootstrap-node.sh"
    },
    "node01.example.com": {
      ":ip": "192.168.35.121",
      "ports": [],
      ":memory": 512,
      ":bootstrap": "bootstrap-node.sh"
    },
    "node02.example.com": {
      ":ip": "192.168.35.122",
      "ports": [],
      ":memory": 512,
      ":bootstrap": "bootstrap-node.sh"
    }
  }
}
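Before running ‘vagrant up’, it can be worth sanity-checking the configuration file. A quick sketch (this assumes the JSON above is saved as nodes.json in the project root, per the previous post’s layout):

```shell
# Validate nodes.json before 'vagrant up'. The file name 'nodes.json' is an
# assumption based on the previous post's project layout.
python3 -m json.tool nodes.json > /dev/null \
  && echo "nodes.json: valid JSON" \
  || echo "nodes.json: missing or malformed"

# List the hostnames the Vagrantfile will provision
grep -o '"[a-z0-9.]*\.example\.com"' nodes.json | tr -d '"' | sort -u
```

A malformed JSON file is one of the most common reasons a multi-machine ‘vagrant up’ fails immediately, so this check is cheap insurance.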
After provisioning and bootstrapping, observe the three machines running in Oracle’s VM VirtualBox Manager.
Installing Puppet Forge Modules
The next task is to install the HAProxy and Apache Puppet modules on the Foreman server. This gives Foreman access to them. I chose the puppetlabs-haproxy HAProxy module and the puppetlabs-apache Apache module. Both modules are authored by Puppet Labs and are available on Puppet Forge.
The exact commands to install the modules onto your Foreman server will depend on your Foreman environment configuration. In my case, I used the following two commands to install the two Puppet Forge modules into my ‘Production’ environment’s module directory.
sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-haproxy
sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-apache

# confirm module installation
puppet module list --modulepath /etc/puppet/environments/production/modules
Installing Configuration Modules
Next, install the HAProxy and Apache configuration Puppet modules on the Foreman server. Both modules are hosted in my GitHub repositories, and can be downloaded and installed on the Foreman server from the command line. Again, the exact commands to install the modules onto your Foreman server will depend on your Foreman environment configuration. In my case, I used the following commands to install the two GitHub-hosted modules into my ‘Production’ environment’s module directory. Note that I am downloading version 0.1.0 of both modules, the latest at the time of writing this post. Make sure to double-check for newer versions of both modules before running the commands, and modify them if necessary.
# apache config module
wget -N https://github.com/garystafford/garystafford-apache_example_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force

# haproxy config module
wget -N https://github.com/garystafford/garystafford-haproxy_node_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force

# confirm module installation
puppet module list --modulepath /etc/puppet/environments/production/modules
HAProxy Configuration
The HAProxy configuration module configures HAProxy’s /etc/haproxy/haproxy.cfg file. The single class in the module’s init.pp manifest is as follows:
class haproxy_node_config () inherits haproxy {
  haproxy::listen { 'puppet00':
    collect_exported => false,
    ipaddress        => '*',
    ports            => '80',
    mode             => 'http',
    options          => {
      'option'  => ['httplog'],
      'balance' => 'roundrobin',
    },
  }

  Haproxy::Balancermember <<| listening_service == 'puppet00' |>>

  haproxy::balancermember { 'haproxy':
    listening_service => 'puppet00',
    server_names      => ['node01.example.com', 'node02.example.com'],
    ipaddresses       => ['192.168.35.121', '192.168.35.122'],
    ports             => '80',
    options           => 'check',
  }
}
The resulting /etc/haproxy/haproxy.cfg file will have the following configuration added. It defines the two Apache web server nodes’ hostnames, IP addresses, and HTTP port. The configuration also defines the load-balancing method, ‘round-robin’ in our example. In this example, we are using layer 7 load-balancing (application layer – HTTP), as opposed to layer 4 load-balancing (transport layer – TCP). Either method will work for this example. The Puppet Labs HAProxy module’s documentation on Puppet Forge and HAProxy’s own documentation are both excellent starting points for understanding how to configure HAProxy. We are barely scratching the surface of HAProxy’s capabilities in this brief example.
listen puppet00
  bind *:80
  mode http
  balance roundrobin
  option httplog
  server node01.example.com 192.168.35.121:80 check
  server node02.example.com 192.168.35.122:80 check
Apache Configuration
The Apache configuration module creates a default web page in Apache’s docroot directory, /var/www/html/index.html. The single class in the module’s init.pp manifest is as follows:
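The module’s actual manifest lives in the garystafford-apache_example_config repository on GitHub; as a rough approximation only (this sketch is mine, and the class name, page contents, and the eth1 fact are guesses, not the module’s exact contents), such a class might look like this, using facter facts to stamp each node’s identity into the page:

```puppet
# Sketch only -- see the garystafford-apache_example_config repository for
# the real manifest. Facter facts fill in each node's hostname and IP.
class apache_example_config () {
  file { '/var/www/html/index.html':
    ensure  => file,
    content => "<h1>Served by ${fqdn} (${ipaddress_eth1})</h1>\n",
    require => Class['apache'],
  }
}
```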
The resulting /var/www/html/index.html file will look like the following. Observe that the facter variables shown in the module manifest above have been replaced by the individual node’s hostname and IP address during application of the configuration by Puppet (i.e., ${fqdn} became node01.example.com).
Both of these Puppet modules were created specifically to configure HAProxy and Apache for this post. Unlike published modules on Puppet Forge, these two modules are very simple, and don’t necessarily represent the best practices and patterns for authoring Puppet Forge modules.
Importing into Foreman
After installing the new modules onto the Foreman server, we need to import them into Foreman. This is accomplished from the ‘Puppet classes’ tab, using the ‘Import from theforeman.example.com’ button. Once imported, the module classes are available to assign to host machines.
Add Host to Foreman
Next, add the three new hosts to Foreman. If you have questions on how to add the nodes to Foreman, start Puppet’s Certificate Signing Request (CSR) process on the hosts, sign the certificates, or complete other first-time tasks, refer to the previous post, which explains this process in detail.
Configure the Hosts
Next, configure the HAProxy and Apache nodes with the necessary Puppet classes. In addition to the base module classes and configuration classes, I recommend adding the git and ntp modules to each of the new nodes. These modules were explained in the previous post. Refer to the screen-grabs below for the correct module classes to add, specific to HAProxy and Apache.
Agent Configuration and Testing the System
Once configurations are retrieved and applied by Puppet Agent on each node, we can test our reverse proxy, load-balanced environment. To start, open a browser and load haproxy.example.com. You should see one of the two pages below. Refresh the page a few times. You should observe HAProxy directing you to one Apache web server node, and then the other, using HAProxy’s round-robin algorithm. You can differentiate the Apache web servers by the hostname and IP address displayed on the web page.
After hitting HAProxy’s URL several times successfully, view HAProxy’s built-in Statistics Report page at http://haproxy.example.com/haproxy?stats. Note below, each of the two Apache nodes has been hit 44 times from HAProxy. This demonstrates the effectiveness of the reverse proxy and load-balancing features of HAProxy.
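You can also exercise the round-robin behavior from the command line with a quick loop. A sketch (this assumes each node’s index.html contains that node’s fqdn, per the Apache configuration module above):

```shell
# Hit HAProxy ten times and tally which Apache node served each response.
# Assumes each node's default page contains its own fqdn, as configured above.
for i in $(seq 1 10); do
  curl -s http://haproxy.example.com/ | grep -o 'node0[12]\.example\.com'
done | sort | uniq -c
```

With round-robin balancing and both nodes healthy, the tally should split roughly evenly between node01 and node02.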
Accessing Apache Directly
If you are testing HAProxy from the same machine on which you created the virtual machines (the VirtualBox host), you will likely be able to directly access either of the Apache web servers (e.g., node02.example.com). The VirtualBox host’s hosts file contains the IP addresses and hostnames of all three hosts. This DNS configuration was done automatically by the vagrant-hostmanager plugin. However, in an actual production environment, only the HAProxy server’s hostname and IP address would be publicly accessible to a user. The two Apache nodes would sit behind a firewall, accessible only by the HAProxy server. HAProxy acts as a façade to the public side of the network.
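You can confirm the entries vagrant-hostmanager wrote with a quick grep. A sketch (the HOSTS_FILE variable is my own, for illustration; the real file is /etc/hosts):

```shell
# Show the example.com entries vagrant-hostmanager wrote to the hosts file.
# The HOSTS_FILE override is for illustration only; the real file is /etc/hosts.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
grep 'example\.com' "$HOSTS_FILE" || echo "no example.com entries found in $HOSTS_FILE"
```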
Testing Apache Host Failure
The main reason you would likely use a load-balancer is high-availability. With HAProxy acting as a load-balancer, we should be able to impair one of the two Apache nodes, without noticeable disruption. HAProxy will continue to serve content from the remaining Apache web server node.
Log into node01.example.com using the following command: vagrant ssh node01.example.com. To simulate an impairment on ‘node01’, run the following command to stop Apache: sudo service httpd stop. Now, refresh the haproxy.example.com URL in your web browser. You should notice HAProxy is now redirecting all traffic to node02.example.com.
Troubleshooting
While troubleshooting HAProxy configuration issues for this demonstration, I discovered logging is not configured by default on CentOS. No worries: I recommend ‘HAProxy: Give me some logs on CentOS 6.5!’, by Stephane Combaudon, to get logging running. Once logging is active, you can more easily troubleshoot HAProxy and Apache configuration issues. Here are some example commands you might find useful:
# haproxy
sudo more -f /var/log/haproxy.log
sudo haproxy -f /etc/haproxy/haproxy.cfg -c # check/validate config file

# apache
sudo ls -1 /etc/httpd/logs/
sudo tail -50 /etc/httpd/logs/error_log
sudo less /etc/httpd/logs/access_log
Redundant Proxies
In this simple example, the system’s weakest point is obviously the single HAProxy instance. It represents a single point of failure (SPOF) in our environment. In an actual production environment, you would likely have more than one instance of HAProxy. They might both be in a load-balanced pool, or one active and one on standby as a failover, should the active instance become impaired. There are several techniques for building in proxy redundancy, often with the use of a Virtual IP and Keepalived. Below is a list of articles that might help you take this post’s example to the next level.
- An Introduction to HAProxy and Load Balancing Concepts
- Install HAProxy and Keepalived (Virtual IP)
- Redundant Load Balancers – HAProxy and Keepalived
- Howto setup a haproxy as fault tolerant / high available load balancer for multiple caching web proxies on RHEL/Centos/SL
- Keepalived Module on Puppet Forge: arioch/keepalived, by Tom De Vylder
My Favorite Puppet, Docker, Git, and npm Code Snippets
Posted by Gary A. Stafford in Bash Scripting, Build Automation, DevOps, Software Development on February 15, 2015
The following is a collection of my favorite Puppet, Docker, Git, and npm commands and code snippets.
Puppet
###############################################################################
# Helpful Puppet commands and code snippets
###############################################################################

sudo puppet module list # different paths w/ or w/o sudo
puppet config print
puppet config print | grep <search_term>
sudo cat /etc/puppet/puppet.conf | grep <search_term>
sudo vi /etc/puppet/puppet.conf

sudo puppet module install <author>/<module_name>
sudo puppet module install <author>-<module_name>-<version>.tar.gz # Geppetto Export Module to File System
sudo puppet module install -i /etc/puppet/environments/production/modules <author>/<module_name>
sudo puppet module uninstall <module_name>
sudo rm -rf /etc/puppet/environments/production/modules/<module_name> # use remove vs. uninstall when in env. path
sudo puppet apply <manifest>.pp

# Install Modules from GitHub onto Foreman node:

# Apache Config Example
wget -N https://github.com/garystafford/garystafford-apache_example_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force

# HAProxy Config Example
wget -N https://github.com/garystafford/garystafford-haproxy_node_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force
Docker
###############################################################################
# Helpful Docker commands and code snippets
###############################################################################

### CONTAINERS ###
docker stop $(docker ps -a -q) # stop ALL containers
docker rm -f $(docker ps -a -q) # remove ALL containers
docker rm -f $(sudo docker ps --before="container_id_here" -q) # can also filter

# exec into container
docker exec -it $(docker container ls | grep '<search_term>' | awk '{print $1}') sh

# exec into container on Windows with Git Bash
winpty docker exec -it $(docker container ls | grep '<search_term>' | awk '{print $1}') sh

# helps with error: 'unexpected end of JSON input'
docker rm -f $(docker ps -a -q) # remove all in one command with --force

docker exec -i -t "container_name_here" /bin/bash # go to container command line
# to exit above use 'ctrl p', 'ctrl q' (don't exit or it will be in exited state)

docker rm $(docker ps -q -f status=exited) # remove all exited containers

### IMAGES ###
# list images and containers
docker images | grep "search_term_here"

# remove image(s) (must remove associated containers first)
docker rmi -f image_id_here # remove image(s)
docker rmi -f $(docker images -q) # remove ALL images!!!
docker rmi -f $(docker images | grep "^<none>" | awk '{print $3}') # remove all <none> images
docker rmi -f $(docker images | grep 'search_term_here' | awk '{print $1}') # i.e. 2 days ago
docker rmi -f $(docker images | grep 'search_1\|search_2' | awk '{print $1}')

### DELETE BOTH IMAGES AND CONTAINERS ###
docker images && docker ps -a

# stop and remove containers and associated images with common grep search term
docker ps -a --no-trunc | grep "search_term_here" | awk '{print $1}' | xargs --no-run-if-empty docker stop && \
docker ps -a --no-trunc | grep "search_term_here" | awk '{print $1}' | xargs --no-run-if-empty docker rm && \
docker images --no-trunc | grep "search_term_here" | awk '{print $3}' | xargs --no-run-if-empty docker rmi

# stop only exited containers and delete only non-tagged (dangling) images
docker ps --filter 'status=exited' -a -q | xargs docker stop
docker images --filter "dangling=true" -q | xargs docker rmi

### DELETE NETWORKS AND VOLUMES ###
# clean up orphaned volumes
docker volume rm $(docker volume ls -qf dangling=true)

# clean up orphaned networks
docker network rm $(docker network ls -q)

### NEW IMAGES/CONTAINERS ###
# create new docker container, i.e. ubuntu
docker pull ubuntu:latest # one-time pull of the image
docker run -i -t ubuntu /bin/bash # drops you into new container as root

### OTHER ###
# install docker first using directions for installing latest version
# https://docs.docker.com/installation/ubuntulinux/#ubuntu-trusty-1404-lts-64-bit
# other great tips: http://www.centurylinklabs.com/15-quick-docker-tips/
# fix fig / docker config: https://gist.github.com/RuslanHamidullin/94d95328a7360d843e52
Git
###############################################################################
# Helpful Git/GitHub commands and code snippets
###############################################################################

#### Remove/Squash All History on Master - CAUTION! ####
# https://stackoverflow.com/a/26000395/580268
git checkout --orphan latest_branch \
  && git add -A \
  && git commit -am "Remove/squash all project history" \
  && git branch -D master \
  && git branch -m master \
  && git push -f origin master

#### List all committed files being tracked by git
git ls-tree --full-tree -r HEAD

### Display remote origin ###
git remote --verbose

### Clone single branch from repo
git clone -b <branch> --single-branch <remote_repo>

### Get all branches from a repo
git branch -r | grep -v '\->' | while read remote; do git branch --track "${remote#origin/}" "$remote"; done
git fetch --all
git pull --all

### Delete a single remote branch
git push origin --delete <branch>

### Change commit message ###
git commit --amend -m "Revised message here..."
git push -f # if already pushed

### Undo an Add File(s)
git reset <filename>

### Removing files from GitHub ###
# that should have been in .gitignore
git ls-tree -r master --name-only # list tracked files
git rm --cached <file> # or git rm -r --cached <folder>, i.e. git rm -r --cached .idea/
git add -A && git commit -m "Removing cached files from GitHub" && git push

# works even better!
git rm --cached -r . && git add .

### Tagging repos ###
# http://git-scm.com/book/en/v2/Git-Basics-Tagging
git tag -a v0.1.0 -m "version 0.1.0"
git push --tags
git tag # list all tags

# Finding and tagging old commit points
git log --pretty=oneline # find hash of commit
git tag -a v0.1.0 -m 'version 0.1.0' <partial_commit_hash_here>
git push --tags # origin master
git tag # list all tags
git checkout tags/v0.1.0 # check out that tagged point in commit history

# Changing a tag to a new commit
git push origin :refs/tags/<tagname>
git tag -fa <tagname>
git push origin master --tags

# Remove a tag both locally and remote
git tag -d <tagname> # local
git push origin :refs/tags/<tagname> # remote

### Committing changes to GitHub ###
git add -A # add all
git commit -m "my changes..."
git push

# Combined
git add . && git commit -am "Initial commit of project" && git push

### Adding an existing project to GitHub ###
# Create repo in GitHub first
git add -A # add all
git commit -m "Initial commit of project"
git remote add origin <https://github.com/username/repo.git>
git remote -v # verify new remote
git pull origin master # pull down license and .gitignore
# might get message to merge manually: 'Error reading...Press Enter to continue starting nano.'
git push --set-upstream origin master
git status # check if add/commit/push required?

### Branching GitHub repos ###
# https://github.com/Kunena/Kunena-Forum/wiki/Create-a-new-branch-with-git-and-manage-branches
# Run from within local repo
NEW_BRANCH=<new_branch_here>
git checkout -b ${NEW_BRANCH}
git push origin ${NEW_BRANCH}
git branch # list branches
git add -A # add all
git commit -m "my changes..."
git push --set-upstream origin ${NEW_BRANCH}
git checkout master # switch back to master

# Merge in the <new_branch_here> branch to master
# https://www.atlassian.com/git/tutorials/using-branches/git-merge
git checkout master
git merge ${NEW_BRANCH}
git branch -d ${NEW_BRANCH} # deletes branch!!

# show pretty history
git log --graph --oneline --all --decorate --topo-order

# Rollback commit
git log # find commit hash
git revert <commit_hash>
npm
###############################################################################
# Helpful npm commands and code snippets
###############################################################################

# list top-level packages w/o dependencies
npm list --depth=0
npm list --depth=0 -g

# list top-level packages that are outdated, w/o dependencies
npm outdated | sort
npm outdated -g | sort

# install packages
npm install -g <pkg>@<x.x.x>
npm install -g <pkg>@latest

# update packages
npm update --save
npm update --save-dev
npm update -g <pkg1> <pkg2> <pkg3>

# update to the latest dependency versions, ignoring package.json pins,
# and write the new versions back to package.json
npm install -g npm-check-updates
npm-check-updates
npm-check-updates -u
npm update # verify new versions work with project

# prune and rebuild
npm prune
npm rebuild
npm rebuild -g

# find package dependencies
npm ls <pkg>

# alternate tool to check and update dependencies
npm install -g david
david -g
david update -g

# bower equivalents
bower list | sort
bower update
bower prune # uninstalls local extraneous packages

# lists available grunt tasks
grunt --help
Installing Foreman and Puppet Agent on Multiple VMs Using Vagrant and VirtualBox
Posted by Gary A. Stafford in DevOps, Enterprise Software Development, Software Development on January 18, 2015
Automatically install and configure Foreman, the open source infrastructure lifecycle management tool, and multiple Puppet Agent VMs using Vagrant and VirtualBox.
Introduction
In the last post, Installing Puppet Master and Agents on Multiple VM Using Vagrant and VirtualBox, we installed Puppet Master/Agent on VirtualBox VMs using Vagrant. Puppet Master is an excellent tool, but lacks the ease-of-use of Puppet Enterprise or Foreman. In this post, we will build an almost identical environment, substituting Foreman for Puppet Master.
According to Foreman’s website, “Foreman is an open source project that helps system administrators manage servers throughout their lifecycle, from provisioning and configuration to orchestration and monitoring. Using Puppet or Chef and Foreman’s smart proxy architecture, you can easily automate repetitive tasks, quickly deploy applications, and proactively manage change, both on-premise with VMs and bare-metal or in the cloud.”
Combined with Puppet Labs’ Open Source Puppet, Foreman is an effective solution to manage infrastructure and system configuration. Again, according to Foreman’s website, the Foreman installer is a collection of Puppet modules that installs everything required for a full working Foreman setup. The installer uses native OS packaging and adds necessary configuration for the complete installation. By default, the Foreman installer will configure:
- Apache HTTP with SSL (using a Puppet-signed certificate)
- Foreman running under mod_passenger
- Smart Proxy configured for Puppet, TFTP and SSL
- Puppet master running under mod_passenger
- Puppet agent configured
- TFTP server (under xinetd on Red Hat platforms)
For the average systems engineer or software developer, installing and configuring Foreman, Puppet Master, Apache, Puppet Agent, and the other associated software packages listed above is daunting. If the installation doesn’t work properly, you must troubleshoot it, or try to remove and reinstall some or all of the components.
A better solution is to automate the installation of Foreman into a Docker container, or on to a VM using Vagrant. Automating the installation process guarantees accuracy and consistency. The Vagrant VirtualBox VM can be snapshotted, moved to another host, or simply destroyed and recreated, if needed.
All code for this post is available on GitHub. However, it has been updated as of 8/23/2015. Changes were required to fix compatibility issues with the latest versions of Puppet 4.x and Foreman. Additionally, the version of CentOS on all VMs was updated from 6.6 to 7.1 and the version of Foreman was updated from 1.7 to 1.9.
The Post’s Example
In this post, we will use Vagrant and VirtualBox to create three VMs. The VMs in this post will be built from a standard CentOS 6.5 x64 base Vagrant Box, located on Atlas. We will use a single JSON-format configuration file to automatically build all three VMs with Vagrant. As part of the provisioning process, using Vagrant’s shell provisioner, we will execute a bootstrap shell script. The script will install Foreman and its associated software on the first VM, and Puppet Agent on the two remaining VMs (aka Puppet ‘agent nodes’ or Foreman ‘hosts’).
Foreman does have the ability to provision on bare-metal infrastructure and public or private clouds. However, this example would simulate an environment where you have existing nodes you want to manage with Foreman.
The Foreman bootstrap script will also download several Puppet modules. To test Foreman once provisioning is complete, import those modules’ classes into Foreman and assign the classes to the hosts. The hosts will fetch and apply the configurations. You can then test for the installed instances of those modules’ components on the puppet agent hosts.
Vagrant
To begin the process, we will use the JSON-format configuration file to create the three VMs, using Vagrant and VirtualBox.
{
  "nodes": {
    "theforeman.example.com": {
      ":ip": "192.168.35.5",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-foreman.sh"
    },
    "agent01.example.com": {
      ":ip": "192.168.35.10",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    },
    "agent02.example.com": {
      ":ip": "192.168.35.20",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    }
  }
}
The Vagrantfile uses the JSON-format configuration file to provision the three VMs, using a single ‘vagrant up’ command. That’s it: less than 30 lines of actual code in the Vagrantfile to create as many VMs as you want. For this post’s example, we will not need to add any VirtualBox port mappings. However, that can also be done from the JSON configuration file (see the README.md for more directions).

If you have not used the CentOS Vagrant Box before, it will take a few minutes the first time for Vagrant to download it to the local Vagrant Box repository.
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Builds single Foreman server and
# multiple Puppet Agent Nodes using JSON config file
# Gary A. Stafford - 01/15/2015

# read vm and chef configurations from JSON files
nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.5"

  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node

    config.vm.define node_name do |config|
      # configures all forwarding ports in JSON array
      ports = node_values['ports']
      ports.each do |port|
        config.vm.network :forwarded_port,
          host:  port[':host'],
          guest: port[':guest'],
          id:    port[':id']
      end

      config.vm.hostname = node_name
      config.vm.network :private_network, ip: node_values[':ip']

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values[':memory']]
        vb.customize ["modifyvm", :id, "--name", node_name]
      end

      config.vm.provision :shell, :path => node_values[':bootstrap']
    end
  end
end
Once provisioned, the three VMs, also called ‘Machines’ by Vagrant, should appear in Oracle VM VirtualBox Manager.
The name of the VMs, referenced in Vagrant commands, is the parent node name in the JSON configuration file (node_name), such as ‘vagrant ssh theforeman.example.com’.
Bootstrapping Foreman
As part of the Vagrant provisioning process (the ‘vagrant up’ command), a bootstrap script is executed on the VMs (shown below). This script will do almost all of the installation and configuration work. Below is the script for bootstrapping the Foreman VM.
#!/bin/sh

# Run on VM to bootstrap Foreman server
# Gary A. Stafford - 01/15/2015

if ps aux | grep "/usr/share/foreman" | grep -v grep 2> /dev/null
then
    echo "Foreman appears to already be installed. Exiting..."
else
    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.35.5 theforeman.example.com theforeman" | sudo tee --append /etc/hosts 2> /dev/null

    # Update system first
    sudo yum update -y

    # Install Foreman for CentOS 6
    sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm && \
    sudo yum -y install epel-release http://yum.theforeman.org/releases/1.7/el6/x86_64/foreman-release.rpm && \
    sudo yum -y install foreman-installer && \
    sudo foreman-installer

    # First run the Puppet agent on the Foreman host which will send the first Puppet report to Foreman,
    # automatically creating the host in Foreman's database
    sudo puppet agent --test --waitforcert=60

    # Install some optional puppet modules on Foreman server to get started...
    sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-ntp
    sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-git
    sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-docker
fi
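One detail worth calling out: the ps/grep guard at the top of the script makes re-provisioning idempotent, so running ‘vagrant provision’ a second time won’t attempt to re-install Foreman. Extracted as a standalone sketch (the is_running helper name is mine, not in the original script):

```shell
# Idempotency guard used by the bootstrap scripts, as a sketch.
# The is_running helper name is my own; the original inlines the pipeline.
# Returns success if a process matching the pattern is already running.
is_running() {
  ps aux | grep "$1" | grep -v grep > /dev/null
}

if is_running "/usr/share/foreman"; then
  echo "Foreman appears to already be installed. Exiting..."
else
  echo "Proceeding with installation..."
fi
```

The trailing grep -v grep excludes the grep process itself from the match, a common shell idiom.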
Bootstrapping Puppet Agent Nodes
Below is the script for bootstrapping the puppet agent nodes. The agent node bootstrap script is also executed as part of the Vagrant provisioning process.
#!/bin/sh

# Run on VM to bootstrap Puppet Agent nodes
# Gary A. Stafford - 01/15/2015

if ps aux | grep "puppet agent" | grep -v grep 2> /dev/null
then
    echo "Puppet Agent is already installed. Moving on..."
else
    # Update system first
    sudo yum update -y

    # Install Puppet for CentOS 6
    sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm && \
    sudo yum -y install puppet

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.35.5 theforeman.example.com theforeman" | sudo tee --append /etc/hosts 2> /dev/null

    # Add agent section to /etc/puppet/puppet.conf (sets run interval to 120 seconds)
    echo "" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null && \
    echo "    server = theforeman.example.com" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null && \
    echo "    runinterval = 120" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null

    sudo service puppet stop
    sudo service puppet start

    sudo puppet resource service puppet ensure=running enable=true
    sudo puppet agent --enable
fi
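After the bootstrap runs, it is worth confirming the agent settings actually landed in puppet.conf. A small check, as a sketch (the PUPPET_CONF override is mine, for illustration; the real path is /etc/puppet/puppet.conf):

```shell
# Confirm the bootstrap appended the agent settings to puppet.conf.
# The PUPPET_CONF override is for illustration only.
PUPPET_CONF="${PUPPET_CONF:-/etc/puppet/puppet.conf}"
if grep -q 'server = theforeman.example.com' "$PUPPET_CONF" \
    && grep -q 'runinterval = 120' "$PUPPET_CONF"; then
  echo "agent settings present"
else
  echo "agent settings missing"
fi
```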
Now that the Foreman is running, use the command ‘vagrant ssh agent01.example.com‘ to ssh into the first Puppet Agent node, then run the command below.
sudo puppet agent --test --waitforcert=60
The command above manually starts Puppet’s certificate signing request (CSR) process, generating the certificates and security credentials (private and public keys) issued by Puppet’s built-in certificate authority (CA). Each Puppet Agent node must first have its certificate signed by the Foreman. According to Puppet’s website, “Before puppet agent nodes can retrieve their configuration catalogs, they need a signed certificate from the local Puppet certificate authority (CA). When using Puppet’s built-in CA (that is, not using an external CA), agents will submit a certificate signing request (CSR) to the CA Puppet Master (Foreman) and will retrieve a signed certificate once one is available.”
Open the Foreman browser-based interface, running at https://theforeman.example.com. Proceed to the ‘Infrastructure’ -> ‘Smart Proxies’ tab. Sign the certificate(s) from the agent nodes (shown below). The agent node will wait for the Foreman to sign the certificate, before continuing with the initial configuration.
Once the certificate signing process is complete, each host retrieves its client configuration from the Foreman and applies it.
That’s it! You should now have one host running Foreman and two Puppet Agent nodes.
Testing Foreman
To test Foreman, import the classes from the Puppet modules installed with the Foreman bootstrap script.
Next, apply ntp, git, and Docker classes to both agent nodes (aka, Foreman ‘hosts’), as well as the Foreman node, itself.
Every two minutes, the two agent node hosts should fetch their latest configuration from Foreman and apply it. After a few minutes, check the times reported in the ‘Last report’ column on the ‘All Hosts’ tab. If the times are two minutes or less, Foreman and Puppet Agent are working. Note that we changed the runinterval to 120 seconds in the bootstrap script to speed up the Puppet Agent updates for the sake of the demo; the normal default interval is 30 minutes. I recommend changing the agent nodes’ runinterval back to 30 minutes (’30m’) once everything is working, to avoid unnecessary use of resources.
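Reverting the runinterval is a one-line edit to each agent’s puppet.conf. The sketch below demonstrates the edit against a local copy of the file; on the agent nodes the target would be /etc/puppet/puppet.conf (run with sudo, then restart the agent):

```shell
# Sketch: revert the agent's runinterval from 120 seconds back to 30 minutes.
# Demonstrated against a local copy; on a node, edit /etc/puppet/puppet.conf.
cat > puppet.conf.demo <<'EOF'
[main]
logdir = /var/log/puppet
    server = theforeman.example.com
    runinterval = 120
EOF

# Replace the demo interval with the 30-minute default
sed -i 's/runinterval = 120/runinterval = 30m/' puppet.conf.demo
grep runinterval puppet.conf.demo
```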
Finally, to verify that the configuration was successfully applied to the hosts, check if ntp, git, and Docker are now running on the hosts.
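A quick way to spot-check the hosts is a small verification loop. This is a sketch; the binary names below (ntpd, git, docker) are assumptions based on the classes applied in this post, so adjust them to match your configuration:

```shell
# Sketch: post-run verification loop. The tool names are assumptions;
# adjust to match the Puppet classes you actually applied.
check_installed() {
    if command -v "$1" > /dev/null 2>&1; then
        echo "$1: installed"
    else
        echo "$1: MISSING"
    fi
}

for tool in ntpd git docker; do
    check_installed "$tool"
done
```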
Helpful Links
All the source code for this project is on GitHub.
Foreman:
http://theforeman.org
Atlas – Discover Vagrant Boxes:
https://atlas.hashicorp.com/boxes/search
Learning Puppet – Basic Agent/Master Puppet
https://docs.puppetlabs.com/learning/agent_master_basic.html
Puppet Glossary (of terms):
https://docs.puppetlabs.com/references/glossary.html
Installing Puppet Master and Agents on Multiple VMs Using Vagrant and VirtualBox
Posted by Gary A. Stafford in Bash Scripting, Build Automation, DevOps, Enterprise Software Development, Software Development on December 14, 2014
Automatically provision multiple VMs with Vagrant and VirtualBox. Automatically install, configure, and test Puppet Master and Puppet Agents on those VMs.
Introduction
Note this post and the accompanying source code were updated on 12/16/2014 to v0.2.1. The update contains several changes that improve and simplify the install process.
Puppet Labs’ Open Source Puppet Agent/Master architecture is an effective solution to manage infrastructure and system configuration. However, for the average Systems Engineer or Software Developer, installing and configuring Puppet Master and Puppet Agent can be challenging. If the installation doesn’t work properly, the engineer is stuck troubleshooting, or trying to remove and re-install Puppet.
A better solution is to automate the installation of Puppet Master and Puppet Agent on virtual machines (VMs). Automating the installation process guarantees accuracy and consistency. Installing Puppet on VMs also means the VMs can be snapshotted, cloned, or simply destroyed and recreated, as needed.
In this post, we will use Vagrant and VirtualBox to create three VMs. The VMs will be built from an Ubuntu 14.04.1 LTS (Trusty Tahr) Vagrant Box, previously on Vagrant Cloud, now on Atlas. We will use a single JSON-format configuration file to build all three VMs, automatically. As part of the Vagrant provisioning process, we will run a bootstrap shell script to install Puppet Master on the first VM (Puppet Master server) and Puppet Agent on the two remaining VMs (agent nodes).
Lastly, to test our Puppet installations, we will use Puppet to install some basic Puppet modules, including ntp and git on the server, and ntp, git, Docker and Fig, on the agent nodes.
All the source code for this project is on GitHub.
Vagrant
To begin the process, we will use the JSON-format configuration file to create the three VMs, using Vagrant and VirtualBox.
{
  "nodes": {
    "puppet.example.com": {
      ":ip": "192.168.32.5",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-master.sh"
    },
    "node01.example.com": {
      ":ip": "192.168.32.10",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    },
    "node02.example.com": {
      ":ip": "192.168.32.20",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    }
  }
}
The Vagrantfile uses the JSON-format configuration file to provision the three VMs with a single ‘vagrant up‘ command. That’s it: less than 30 lines of actual code in the Vagrantfile to create as many VMs as we need. For this post’s example, we will not need to add any port mappings, which can be done from the JSON configuration file (see the README.md for further directions). The Vagrant Box we are using already has the correct ports opened.
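Based on how the Vagrantfile reads each node’s ‘ports’ array (it looks up the ‘:host’, ‘:guest’, and ‘:id’ keys of every entry), a port mapping would be a sketch like the following; the specific values (8080, 80, “http”) are illustrative assumptions, not part of this post’s setup:

```
"node01.example.com": {
  ":ip": "192.168.32.10",
  "ports": [
    { ":host": 8080, ":guest": 80, ":id": "http" }
  ],
  ":memory": 1024,
  ":bootstrap": "bootstrap-node.sh"
}
```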
If you have not previously used the Ubuntu Vagrant Box, it will take a few minutes the first time for Vagrant to download it to the local Vagrant Box repository.
# vi: set ft=ruby :

# Builds Puppet Master and multiple Puppet Agent Nodes using JSON config file
# Author: Gary A. Stafford

require 'json'

# read vm configurations from JSON file
nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node

    config.vm.define node_name do |config|
      # configures all forwarding ports in JSON array
      ports = node_values['ports']
      ports.each do |port|
        config.vm.network :forwarded_port,
          host:  port[':host'],
          guest: port[':guest'],
          id:    port[':id']
      end

      config.vm.hostname = node_name
      config.vm.network :private_network, ip: node_values[':ip']

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values[':memory']]
        vb.customize ["modifyvm", :id, "--name", node_name]
      end

      config.vm.provision :shell, :path => node_values[':bootstrap']
    end
  end
end
Once provisioned, the three VMs, also referred to as ‘Machines’ by Vagrant, should appear, as shown below, in Oracle VM VirtualBox Manager.
The name of the VMs, referenced in Vagrant commands, is the parent node name in the JSON configuration file (node_name), for example, ‘vagrant ssh puppet.example.com‘.
Bootstrapping Puppet Master Server
As part of the Vagrant provisioning process, a bootstrap script is executed on each of the VMs (script shown below). This script will do 98% of the required work for us. There is one for the Puppet Master server VM, and one for each agent node.
#!/bin/sh

# Run on VM to bootstrap Puppet Master server

if ps aux | grep "puppet master" | grep -v grep 2> /dev/null
then
    echo "Puppet Master is already installed. Exiting..."
else
    # Install Puppet Master
    wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb && \
    sudo dpkg -i puppetlabs-release-trusty.deb && \
    sudo apt-get update -yq && sudo apt-get upgrade -yq && \
    sudo apt-get install -yq puppetmaster

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5  puppet.example.com  puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10 node01.example.com  node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20 node02.example.com  node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add optional alternate DNS names to /etc/puppet/puppet.conf
    sudo sed -i 's/.*\[main\].*/&\ndns_alt_names = puppet,puppet.example.com/' /etc/puppet/puppet.conf

    # Install some initial puppet modules on Puppet Master server
    sudo puppet module install puppetlabs-ntp
    sudo puppet module install garethr-docker
    sudo puppet module install puppetlabs-git
    sudo puppet module install puppetlabs-vcsrepo
    sudo puppet module install garystafford-fig

    # symlink manifest from Vagrant synced folder location
    ln -s /vagrant/site.pp /etc/puppet/manifests/site.pp
fi
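The sed one-liner in the script injects the dns_alt_names setting immediately after the [main] section header. A small sketch of that same edit, run against a local mock copy of /etc/puppet/puppet.conf so you can see the effect without touching a real node:

```shell
# Sketch: what the bootstrap script's dns_alt_names sed line does,
# demonstrated against a local mock copy of /etc/puppet/puppet.conf.
cat > puppet.conf.mock <<'EOF'
[main]
logdir=/var/log/puppet
EOF

# Append dns_alt_names on a new line directly after the [main] header
sed -i 's/.*\[main\].*/&\ndns_alt_names = puppet,puppet.example.com/' puppet.conf.mock
cat puppet.conf.mock
```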
There are a few last commands we need to run ourselves, from within the VMs. Once the provisioning process is complete, ‘vagrant ssh puppet.example.com‘ into the newly provisioned Puppet Master server. Below are the commands we need to run within the ‘puppet.example.com‘ VM.
sudo service puppetmaster status # test that puppet master was installed
sudo service puppetmaster stop
sudo puppet master --verbose --no-daemonize
# Ctrl+C to kill puppet master
sudo service puppetmaster start
sudo puppet cert list --all # check for 'puppet' cert
According to Puppet’s website, ‘these steps will create the CA certificate and the puppet master certificate, with the appropriate DNS names included.‘
Bootstrapping Puppet Agent Nodes
Now that the Puppet Master server is running, open a second terminal tab (‘Shift+Ctrl+T‘). Use the command ‘vagrant ssh node01.example.com‘ to ssh into the new Puppet Agent node. The agent node bootstrap script should have already executed as part of the Vagrant provisioning process.
#!/bin/sh

# Run on VM to bootstrap Puppet Agent nodes
# http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet/

if ps aux | grep "puppet agent" | grep -v grep 2> /dev/null
then
    echo "Puppet Agent is already installed. Moving on..."
else
    sudo apt-get install -yq puppet
fi

if cat /etc/crontab | grep puppet 2> /dev/null
then
    echo "Puppet Agent is already configured. Exiting..."
else
    sudo apt-get update -yq && sudo apt-get upgrade -yq

    sudo puppet resource cron puppet-agent ensure=present user=root minute=30 \
        command='/usr/bin/puppet agent --onetime --no-daemonize --splay'

    sudo puppet resource service puppet ensure=running enable=true

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5  puppet.example.com  puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10 node01.example.com  node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20 node02.example.com  node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add agent section to /etc/puppet/puppet.conf
    echo "" && echo "[agent]\nserver=puppet" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null

    sudo puppet agent --enable
fi
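One caveat in the script above: the line that appends the [agent] section relies on the system shell’s echo interpreting the \n escape, which Ubuntu’s dash does but other shells may not. A more portable sketch of the same append uses printf, demonstrated here against a local copy of puppet.conf rather than the real file:

```shell
# Sketch: appending the [agent] section with printf, which handles \n
# consistently across shells (the script's echo relies on dash behavior).
# Demonstrated against a local copy of puppet.conf.
cat > puppet.conf.local <<'EOF'
[main]
logdir=/var/log/puppet
EOF

printf '\n[agent]\nserver=puppet\n' >> puppet.conf.local
cat puppet.conf.local
```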
Run the two commands below within both the ‘node01.example.com‘ and ‘node02.example.com‘ agent nodes.
sudo service puppet status # test that agent was installed
sudo puppet agent --test --waitforcert=60 # initiate certificate signing request (CSR)
The second command above manually starts Puppet’s certificate signing request (CSR) process, generating the certificates and security credentials (private and public keys) issued by Puppet’s built-in certificate authority (CA). Each Puppet Agent node must first have its certificate signed by the Puppet Master. According to Puppet’s website, “Before puppet agent nodes can retrieve their configuration catalogs, they need a signed certificate from the local Puppet certificate authority (CA). When using Puppet’s built-in CA (that is, not using an external CA), agents will submit a certificate signing request (CSR) to the CA Puppet Master and will retrieve a signed certificate once one is available.”
Back on the Puppet Master Server, run the following commands to sign the certificate(s) from the agent node(s). You may sign each node’s certificate individually, or wait and sign them all at once. Note the agent node(s) will wait for the Puppet Master to sign the certificate, before continuing with the Puppet Agent configuration run.
sudo puppet cert list # should see 'node01.example.com' cert waiting for signature
sudo puppet cert sign --all # sign the agent node certs
sudo puppet cert list --all # check for signed certs
Once the certificate signing process is complete, the Puppet Agent retrieves the client configuration from the Puppet Master and applies it to the local agent node. The Puppet Agent will execute all applicable steps in the site.pp manifest on the Puppet Master server, designated for that specific Puppet Agent node (i.e., ‘node node02.example.com {...}‘).
Below is the main site.pp manifest on the Puppet Master server, applied by Puppet Agent on the agent nodes.
node default {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git
}

node 'node01.example.com', 'node02.example.com' {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git, docker, fig
}
That’s it! You should now have one server VM running Puppet Master, and two agent node VMs running Puppet Agent. Both agent nodes should have successfully been registered with Puppet Master, and configured themselves based on the Puppet Master’s main manifest. Agent node configuration includes installing ntp, git, Fig, and Docker.
Helpful Links
All the source code for this project is on GitHub.
Puppet Glossary (of terms):
https://docs.puppetlabs.com/references/glossary.html
Puppet Labs Open Source Automation Tools:
http://puppetlabs.com/misc/download-options
Puppet Master Overview:
http://ci.openstack.org/puppet.html
Install Puppet on Ubuntu:
https://docs.puppetlabs.com/guides/install_puppet/install_debian_ubuntu.html
Installing Puppet Master:
http://andyhan.linuxdict.com/index.php/sys-adm/item/273-puppet-371-on-centos-65-quick-start-i
Regenerating Node Certificates:
https://docs.puppetlabs.com/puppet/latest/reference/ssl_regenerate_certificates.html
Automating Development Environments with Vagrant and Puppet:
http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet