
Installing Puppet Master and Agents on Multiple VM Using Vagrant and VirtualBox

 Automatically provision multiple VMs with Vagrant and VirtualBox. Automatically install, configure, and test Puppet Master and Puppet Agents on those VMs.


Introduction

Note: this post and the accompanying source code were updated on 12/16/2014 to v0.2.1, which contains several changes to improve and simplify the install process.

Puppet Labs’ Open Source Puppet Agent/Master architecture is an effective solution for managing infrastructure and system configuration. However, for the average System Engineer or Software Developer, installing and configuring Puppet Master and Puppet Agent can be challenging. If the installation doesn’t work properly, the engineer is stuck troubleshooting it, or trying to remove and re-install Puppet.

A better solution is to automate the installation of Puppet Master and Puppet Agent on Virtual Machines (VMs). Automating the installation process guarantees accuracy and consistency. Installing Puppet on VMs means the VMs can be snapshotted, cloned, or simply destroyed and recreated, as needed.
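For example, with the Vagrant tooling described below, tearing down and recreating the entire environment is just two commands:

vagrant destroy -f # destroy the VMs without prompting for confirmation
vagrant up         # recreate and re-provision them from scratch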

In this post, we will use Vagrant and VirtualBox to create three VMs. The VMs will be built from an Ubuntu 14.04.1 LTS (Trusty Tahr) Vagrant Box, previously on Vagrant Cloud, now on Atlas. We will use a single JSON-format configuration file to build all three VMs, automatically. As part of the Vagrant provisioning process, we will run a bootstrap shell script to install Puppet Master on the first VM (Puppet Master server) and Puppet Agent on the two remaining VMs (agent nodes).

Lastly, to test our Puppet installations, we will use Puppet to install some basic Puppet modules, including ntp and git on the server, and ntp, git, Docker, and Fig on the agent nodes.

All the source code for this project is on GitHub.

Vagrant

To begin the process, we will use the JSON-format configuration file to create the three VMs, using Vagrant and VirtualBox.

{
  "nodes": {
    "puppet.example.com": {
      ":ip": "192.168.32.5",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-master.sh"
    },
    "node01.example.com": {
      ":ip": "192.168.32.10",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    },
    "node02.example.com": {
      ":ip": "192.168.32.20",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    }
  }
}

The Vagrantfile uses the JSON-format configuration file to provision the three VMs, using a single ‘vagrant up‘ command. That’s it: less than 30 lines of actual code in the Vagrantfile creates as many VMs as we need. For this post’s example, we will not need to add any port mappings, which can be done from the JSON configuration file (see the README.md for more directions). The Vagrant Box we are using already has the correct ports opened.
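For reference, if a port mapping were needed, an entry in the JSON configuration file would use the same key names the Vagrantfile expects; the values below are hypothetical:

"ports": [
  {
    ":host": 8080,
    ":guest": 80,
    ":id": "http"
  }
],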

If you have not previously used the Ubuntu Vagrant Box, it will take a few minutes the first time for Vagrant to download it to the local Vagrant Box repository.
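A minimal sketch of the workflow, run from the root of the project, assuming Vagrant and VirtualBox are already installed:

vagrant up     # create and provision all three VMs defined in nodes.json
vagrant status # confirm all three machines are running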

# vi: set ft=ruby :

# Builds Puppet Master and multiple Puppet Agent Nodes using JSON config file
# Author: Gary A. Stafford

# read VM configurations from the JSON file
require 'json' # makes the snippet self-contained; Vagrant normally loads this already
nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node

    config.vm.define node_name do |config|
      # configures all forwarding ports in JSON array
      ports = node_values['ports']
      ports.each do |port|
        config.vm.network :forwarded_port,
          host:  port[':host'],
          guest: port[':guest'],
          id:    port[':id']
      end

      config.vm.hostname = node_name
      config.vm.network :private_network, ip: node_values[':ip']

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values[':memory']]
        vb.customize ["modifyvm", :id, "--name", node_name]
      end

      config.vm.provision :shell, :path => node_values[':bootstrap']
    end
  end
end

Once provisioned, the three VMs, also referred to as ‘Machines’ by Vagrant, should appear, as shown below, in Oracle VM VirtualBox Manager.

Vagrant Machines in VM VirtualBox Manager

The names of the VMs, referenced in Vagrant commands, are the parent node names in the JSON configuration file (node_name), such as ‘vagrant ssh puppet.example.com‘.

Vagrant Machine Names
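Any standard Vagrant command can then target a machine by that name, for example:

vagrant ssh puppet.example.com    # shell into the Puppet Master server
vagrant halt node01.example.com   # gracefully stop an agent node
vagrant reload node02.example.com # restart an agent node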

Bootstrapping Puppet Master Server

As part of the Vagrant provisioning process, a bootstrap script is executed on each of the VMs (scripts shown below). These scripts will do 98% of the required work for us. There is one script for the Puppet Master server VM, and another shared by the two agent nodes.

#!/bin/sh

# Run on VM to bootstrap Puppet Master server

if ps aux | grep "puppet master" | grep -v grep 2> /dev/null
then
    echo "Puppet Master is already installed. Exiting..."
else
    # Install Puppet Master
    wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb && \
    sudo dpkg -i puppetlabs-release-trusty.deb && \
    sudo apt-get update -yq && sudo apt-get upgrade -yq && \
    sudo apt-get install -yq puppetmaster

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5    puppet.example.com  puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10   node01.example.com  node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20   node02.example.com  node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add optional alternate DNS names to /etc/puppet/puppet.conf
    sudo sed -i 's/.*\[main\].*/&\ndns_alt_names = puppet,puppet.example.com/' /etc/puppet/puppet.conf

    # Install some initial puppet modules on Puppet Master server
    sudo puppet module install puppetlabs-ntp
    sudo puppet module install garethr-docker
    sudo puppet module install puppetlabs-git
    sudo puppet module install puppetlabs-vcsrepo
    sudo puppet module install garystafford-fig

    # symlink manifest from Vagrant synced folder location
    ln -s /vagrant/site.pp /etc/puppet/manifests/site.pp
fi

There are a few last commands we need to run ourselves, from within the VMs. Once the provisioning process is complete, run ‘vagrant ssh puppet.example.com‘ to connect to the newly provisioned Puppet Master server. Below are the commands we need to run within the ‘puppet.example.com‘ VM.

sudo service puppetmaster status # test that puppet master was installed
sudo service puppetmaster stop
sudo puppet master --verbose --no-daemonize
# Ctrl+C to kill puppet master
sudo service puppetmaster start
sudo puppet cert list --all # check for 'puppet' cert

According to Puppet’s website, ‘these steps will create the CA certificate and the puppet master certificate, with the appropriate DNS names included.’

Bootstrapping Puppet Agent Nodes

Now that the Puppet Master server is running, open a second terminal tab (‘Shift+Ctrl+T‘). Use the command, ‘vagrant ssh node01.example.com‘, to ssh into the new Puppet Agent node. The agent node bootstrap script should have already executed as part of the Vagrant provisioning process.

#!/bin/sh

# Run on VM to bootstrap Puppet Agent nodes
# http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet/

if ps aux | grep "puppet agent" | grep -v grep 2> /dev/null
then
    echo "Puppet Agent is already installed. Moving on..."
else
    sudo apt-get install -yq puppet
fi

if grep puppet /etc/crontab 2> /dev/null
then
    echo "Puppet Agent is already configured. Exiting..."
else
    sudo apt-get update -yq && sudo apt-get upgrade -yq

    sudo puppet resource cron puppet-agent ensure=present user=root minute=30 \
        command='/usr/bin/puppet agent --onetime --no-daemonize --splay'

    sudo puppet resource service puppet ensure=running enable=true

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5    puppet.example.com  puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10   node01.example.com  node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20   node02.example.com  node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add agent section to /etc/puppet/puppet.conf
    printf "\n[agent]\nserver=puppet\n" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null

    sudo puppet agent --enable
fi

Run the two commands below within both the ‘node01.example.com‘ and ‘node02.example.com‘ agent nodes.

sudo service puppet status # test that agent was installed
sudo puppet agent --test --waitforcert=60 # initiate certificate signing request (CSR)

The second command above manually starts Puppet’s Certificate Signing Request (CSR) process; the certificates and security credentials (private and public keys) are generated by Puppet’s built-in certificate authority (CA). Each Puppet Agent node must have its certificate signed by the Puppet Master, first. According to Puppet’s website, “Before puppet agent nodes can retrieve their configuration catalogs, they need a signed certificate from the local Puppet certificate authority (CA). When using Puppet’s built-in CA (that is, not using an external CA), agents will submit a certificate signing request (CSR) to the CA Puppet Master and will retrieve a signed certificate once one is available.”

Agent Node Starting Puppet’s Certificate Signing Request (CSR) Process

Back on the Puppet Master Server, run the following commands to sign the certificate(s) from the agent node(s). You may sign each node’s certificate individually, or wait and sign them all at once. Note the agent node(s) will wait for the Puppet Master to sign the certificate, before continuing with the Puppet Agent configuration run.

sudo puppet cert list # should see 'node01.example.com' cert waiting for signature
sudo puppet cert sign --all # sign the agent node certs
sudo puppet cert list --all # check for signed certs

Puppet Master Completing Puppet’s Certificate Signing Request (CSR) Process

Once the certificate signing process is complete, the Puppet Agent retrieves the client configuration from the Puppet Master and applies it to the local agent node. The Puppet Agent will execute all applicable steps in the site.pp manifest on the Puppet Master server, designated for that specific Puppet Agent node (i.e. ‘node node02.example.com {...}‘).

Configuration Run Completed on Puppet Agent Node

Below is the main site.pp manifest on the Puppet Master server, applied by Puppet Agent on the agent nodes.

node default {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git
}

node 'node01.example.com', 'node02.example.com' {
  # Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git, docker, fig
}

That’s it! You should now have one server VM running Puppet Master, and two agent node VMs running Puppet Agent. Both agent nodes should have successfully registered with the Puppet Master, and configured themselves based on the Puppet Master’s main manifest. Agent node configuration includes installing ntp, git, Fig, and Docker.
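A few quick spot-checks, run from within either agent node, will confirm the configuration was applied; a sketch, the exact versions reported will vary:

git --version     # git, installed via the puppetlabs-git module
ntpq -p           # ntp, installed and peering via the puppetlabs-ntp module
sudo docker info  # docker, installed via the garethr-docker module
fig --version     # fig, installed via the garystafford-fig module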

Helpful Links

All the source code for this project is on GitHub.

Puppet Glossary (of terms):
https://docs.puppetlabs.com/references/glossary.html

Puppet Labs Open Source Automation Tools:
http://puppetlabs.com/misc/download-options

Puppet Master Overview:
http://ci.openstack.org/puppet.html

Install Puppet on Ubuntu:
https://docs.puppetlabs.com/guides/install_puppet/install_debian_ubuntu.html

Installing Puppet Master:
http://andyhan.linuxdict.com/index.php/sys-adm/item/273-puppet-371-on-centos-65-quick-start-i

Regenerating Node Certificates:
https://docs.puppetlabs.com/puppet/latest/reference/ssl_regenerate_certificates.html

Automating Development Environments with Vagrant and Puppet:
http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet



Preventing Race Conditions Between Containers in ‘Dockerized’ MEAN Applications

Eliminate potential race conditions between the MongoDB data Docker container and the Node.js web-application container in a ‘Dockerized’ MEAN application.

MEAN.JS Dockerized

Introduction

The MEAN stack has gained enormous popularity as a reliable and scalable full-stack JavaScript solution. MEAN web applications have four main components: MongoDB, Express, AngularJS, and Node.js. MEAN web applications often include other components, such as Mongoose, Passport, Twitter Bootstrap, Yeoman, Grunt or Gulp, and Bower. The two most popular ready-made MEAN application templates are MEAN.io from Linnovate, and MEAN.JS. Both offer a ready-made application framework for building MEAN applications.

Docker has also gained enormous popularity. According to Docker, it is an open platform that enables developers and sysadmins to quickly assemble applications from components. ‘Dockerized’ apps are completely portable and can run anywhere.

Docker is an ideal solution for MEAN applications. Being a full-stack JavaScript solution, MEAN applications are based on a multi-tier architecture. The MEAN application’s data tier contains the MongoDB NoSQL database. The application tier (logic tier) contains Node.js and Express. The application tier can also contain other components, such as Mongoose, a Node.js Object Document Mapper (ODM) for MongoDB, and Passport, an authentication middleware for Node.js. Lastly, the presentation tier (front end) has client-side tools, such as AngularJS and Twitter Bootstrap.

Using Docker, we can ‘Dockerize’ or containerize each tier of a MEAN application, mirroring the physical architecture we would deploy a MEAN application to, in a Production environment. Just as we would always run a separate database server or servers for MongoDB, we can isolate MongoDB into a Docker container. Likewise, we can isolate the Node.js web server, along with the rest of the components (Mongoose, Express, Passport) on the application and presentation tiers, into a Docker container. We can easily add more containers, for more functionality, such as load-balancing and reverse-proxies (nginx), and caching (Redis and Memcached).

The MEAN.JS project has been very progressive in implementing Docker to offer a more realistic environment for development and testing. To automate the creation of multiple, linked Docker containers, the MEAN.JS project has also adopted Fig.

Using Docker and Fig, a developer can pull down ready-made base containers from Docker Hub, configure the containers as part of a multi-tier application environment, deploy the MEAN application components to the containers, and start the application, all with a short list of commands.
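That workflow boils down to a few commands; a sketch, using the Node.js base image referenced in the project's Dockerfile, shown later in this post:

docker pull dockerfile/nodejs # pull the ready-made Node.js base image from Docker Hub
fig build                     # build the application containers
fig up                        # create, link, and start the containers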

Note, I said development and test, not production. To extend Docker and Fig to production, you can use tools such as Flocker. Flocker, by ClusterHQ, can scale the single-host Fig environment to multiple containers on multiple machines (hosts).

MEAN Dockerized

Race Conditions

Docker containers have a very fast start-up time, compared to other technologies, such as VMs (virtual machines). However, based on their contents, containers take varying amounts of time to fully start up. In most multi-tier applications, there is a required start-up sequence for components (tiers, servers, applications). For example, in a database-driven application, like a MEAN application, you should make sure the MongoDB database server is up and running before starting the application. Although this is obvious, it becomes harder to guarantee the order in which components will start up when you leverage an asynchronous, automated, continuous delivery solution like Docker with Fig.

When component dependencies are not met because another container has not fully started, we can refer to this as a race condition. I have found that with most multi-container MEAN applications, the slower-starting MongoDB data container prevents the quicker-starting Node.js web-application container from properly starting the MEAN application. In other words, the application crashes.

Fixing Race Conditions with MEAN.JS Applications

In order to eliminate race conditions, we need to script our start-up sequence to guarantee the order in which components will start, ensuring the overall application starts correctly. Specifically in this post, we will eliminate the potential race condition between the MongoDB data container (db_1) and the Node.js web-application container (web_1). At the same time, we will fix a small error in the existing MEAN.JS project that prevents proper start-up of the ‘Dockerized’ MEAN.JS application.

Race Condition with Docker

Download and Build MEAN.JS App

Clone the meanjs/mean repository, and install npm and bower packages.

git clone https://github.com/meanjs/mean.git
cd mean
npm install
bower install

Modify MEAN.JS App

  1. Add the wait_mongo_start.sh start-up script to the root of the mean project.
  2. Modify the Dockerfile, replacing CMD ["grunt"] with CMD /bin/sh /home/mean/wait_mongo_start.sh
  3. Optional, add the fig_start.sh clean-up/start-up script to the root of the mean project.

Fix Existing Issue with MEAN.JS App When Using Docker and Fig

The existing MEAN.JS application references localhost in the development configuration (config/env/development.js). The development configuration is the one used by the MEAN.JS application at start-up. The MongoDB data container (db_1) is not running on localhost; it is running on an IP address assigned by Docker. To discover the IP address, we must reference an environment variable (DB_1_PORT_27017_TCP_ADDR), created by Docker within the Node.js web-application container (web_1). The fig.yml sketch following the list below shows where this variable comes from.

  1. Modify the config/env/development.js file, add var DB_HOST = process.env.DB_1_PORT_27017_TCP_ADDR || 'localhost';
  2. Modify the config/env/development.js file, change db: 'mongodb://localhost/mean-dev', to db: 'mongodb://' + DB_HOST + '/mean-dev',
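For context, the DB_1_* environment variables are produced by the Fig link between the two containers. A minimal fig.yml along these lines would create them; this is a sketch, the actual file in the MEAN.JS repository may differ, and the MongoDB image name is an assumption:

web:
  build: .          # build the web_1 container from the project's Dockerfile
  links:
    - db            # the link creates the DB_1_* environment variables inside web_1
  ports:
    - "3000:3000"   # server
    - "35729:35729" # livereload
db:
  image: dockerfile/mongodb # assumed MongoDB base image
  ports:
    - "27017:27017"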

Start the Application

Start the application using Fig commands or using the clean-up/start-up script (sh fig_start.sh).

  1. Run fig build && fig up
  2. Alternately, run sh fig_start.sh

The Details…

The CMD instruction is the last step in the Dockerfile. It sets the wait_mongo_start.sh script to execute in the Node.js web-application container (web_1) when the container starts. This script prevents the grunt command from running until nc (netcat) succeeds in connecting to the IP address and port of mongod, the primary daemon process for the MongoDB system, on the MongoDB data container (db_1). The script uses a 3-second polling interval, which can be modified if necessary.

#!/bin/sh

polling_interval=3

# optional, view db_1 container-related env vars
#env | grep DB_1 | sort

echo "wait for mongo to start first..."

# wait until mongo is running in db_1 container
until nc -z $DB_1_PORT_27017_TCP_ADDR $DB_1_PORT_27017_TCP_PORT
do
 echo "waiting for $polling_interval seconds..."
 sleep $polling_interval
done

# start node app
grunt

The environment variables referenced in the script are created in the Node.js web-application container (web_1), automatically, by Docker. They are shown in the screen grab, below. You can discover these variables by uncommenting the env | grep DB_1 | sort line, above.

Docker Environment Variables Relating to DB_1
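Docker's link-based environment variables follow a predictable naming pattern. The values below are illustrative only; the container IP address will vary from run to run:

DB_1_PORT=tcp://172.17.0.5:27017
DB_1_PORT_27017_TCP=tcp://172.17.0.5:27017
DB_1_PORT_27017_TCP_ADDR=172.17.0.5
DB_1_PORT_27017_TCP_PORT=27017
DB_1_PORT_27017_TCP_PROTO=tcp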

The Dockerfile modification is highlighted below.

FROM dockerfile/nodejs

MAINTAINER Matthias Luebken, matthias@catalyst-zero.com

WORKDIR /home/mean

# Install Mean.JS Prerequisites
RUN npm install -g grunt-cli
RUN npm install -g bower

# Install Mean.JS packages
ADD package.json /home/mean/package.json
RUN npm install

# Manually trigger bower. Why doesn't this work via npm install?
ADD .bowerrc /home/mean/.bowerrc
ADD bower.json /home/mean/bower.json
RUN bower install --config.interactive=false --allow-root

# Make everything available for start
ADD . /home/mean

# Currently only works for development
ENV NODE_ENV development

# Port 3000 for server
# Port 35729 for livereload
EXPOSE 3000 35729

CMD /bin/sh /home/mean/wait_mongo_start.sh

The config/env/development.js modifications are highlighted below (abridged code).

'use strict';

// used when building application using fig and Docker
var DB_HOST = process.env.DB_1_PORT_27017_TCP_ADDR || 'localhost';

module.exports = {
	db: 'mongodb://' + DB_HOST + '/mean-dev',
	log: {
		// Can specify one of 'combined', 'common', 'dev', 'short', 'tiny'
		format: 'dev',
		// Stream defaults to process.stdout
		// Uncomment to enable logging to a log on the file system
		options: {
			//stream: 'access.log'
		}
	},
        ...

The fig_start.sh file is optional and not part of the solution for the race condition. Instead of repeating multiple commands, I prefer running a single script, which executes the commands consistently. Note, commands in this script remove ALL ‘Exited’ containers and untagged (<none>) images.

#!/bin/sh

# remove all exited containers
echo "Removing all 'Exited' containers..."
docker rm -f $(docker ps -aq --filter 'status=exited') > /dev/null 2>&1

# remove all untagged images
echo "Removing all untagged images..."
docker rmi $(docker images | grep "^<none>" | awk '{print $3}') > /dev/null 2>&1

# build and start containers with fig
fig build && fig up

MEAN Application Start-Up Screen Grabs

Below are screen grabs showing the MEAN.JS application starting up, both before and after the changes were implemented.

Start Script Cleaning Up Docker Images and Containers and Running Fig

MongoDB Cannot Connect on localhost

MEAN Application Waiting for MongoDB to Start, Currently at 70%…

Connected to MongoDB on Correct IP Address and Grunt Running

MEAN.JS Docker Containers Created

MEAN Application Successfully Running in Docker Containers

New Article Created with MEAN Application

