Posts Tagged bash
Preventing Race Conditions Between Containers in ‘Dockerized’ MEAN Applications
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Client-Side Development, DevOps, Enterprise Software Development, Software Development on November 30, 2014
Introduction
The MEAN stack has gained enormous popularity as a reliable and scalable full-stack JavaScript solution. MEAN web applications have four main components: MongoDB, Express, AngularJS, and Node.js. MEAN web applications often include other components, such as Mongoose, Passport, Twitter Bootstrap, Yeoman, Grunt or Gulp, and Bower. The two most popular ready-made MEAN application templates are MEAN.io from Linnovate, and MEAN.JS. Both offer a ready-made application framework for building MEAN applications.
Docker has also gained enormous popularity. According to Docker, Docker is an open platform that enables developers and sysadmins to quickly assemble apps from components. ‘Dockerized’ apps are completely portable and can run anywhere.
Docker is an ideal solution for MEAN applications. As a full-stack JavaScript solution, a MEAN application is based on a multi-tier architecture. The MEAN application’s data tier contains the MongoDB NoSQL database. The application tier (logic tier) contains Node.js and Express. The application tier can also contain other components, such as Mongoose, a Node.js Object Document Mapper (ODM) for MongoDB, and Passport, an authentication middleware for Node.js. Lastly, the presentation tier (front end) contains client-side tools, such as AngularJS and Twitter Bootstrap.
Using Docker, we can ‘Dockerize’, or containerize, each tier of a MEAN application, mirroring the physical architecture we would use to deploy a MEAN application in a production environment. Just as we would always run a separate database server or servers for MongoDB, we can isolate MongoDB in a Docker container. Likewise, we can isolate the Node.js web server, along with the rest of the components on the application and presentation tiers (Mongoose, Express, Passport), in a Docker container. We can easily add more containers for more functionality, such as load-balancing and reverse proxies (nginx) or caching (Redis and Memcached).
The MEAN.JS project has been very progressive in implementing Docker to offer a more realistic environment for development and testing. The project has also implemented Fig, a tool that provides quick, automated creation of multiple, linked Docker containers.
Using Docker and Fig, a developer can pull down ready-made base containers from Docker Hub, configure the containers as part of a multi-tier application environment, deploy the MEAN application components to the containers, and start the application, all with a short list of commands.
Note, I said development and test, not production. To extend Docker and Fig to production, you can use tools such as Flocker. Flocker, by ClusterHQ, can scale the single-host Fig environment to multiple containers on multiple machines (hosts).
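For reference, Fig reads its container definitions from a fig.yml file in the project root. A minimal sketch for a two-container MEAN application might look like the following; the service names ‘db’ and ‘web’ produce the db_1 and web_1 containers referenced throughout this post, though the actual MEAN.JS fig.yml may differ:

# minimal fig.yml sketch -- assumed service names and images, adjust to the actual MEAN.JS project files
db:
  image: mongo
  ports:
    - "27017:27017"
web:
  build: .
  links:
    - db
  ports:
    - "3000:3000"
    - "35729:35729"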
Race Conditions
Docker containers have a very fast start-up time compared to other technologies, such as virtual machines (VMs). However, depending on their contents, containers take varying amounts of time to start up fully. In most multi-tier applications, there is a required start-up sequence for components (tiers, servers, applications). For example, in a database-driven application like a MEAN application, you should make sure the MongoDB database server is up and running before starting the application. Although this is obvious, it becomes harder to guarantee the order in which components will start when you leverage an asynchronous, automated, continuous delivery solution like Docker with Fig.
When a component’s dependencies are not met because another container has not fully started, we can refer to this as a race condition. I have found that, with most multi-container MEAN applications, the slower-starting MongoDB data container prevents the quicker-starting Node.js web-application container from properly starting the MEAN application. In other words, the application crashes.
Fixing Race Conditions with MEAN.JS Applications
In order to eliminate race conditions, we need to script our start-up sequence to guarantee the order in which components start, ensuring the overall application starts correctly. Specifically, in this post, we will eliminate the potential race condition between the MongoDB data container (db_1) and the Node.js web-application container (web_1). At the same time, we will fix a small error in the existing MEAN.JS project that prevents proper start-up of the ‘Dockerized’ MEAN.JS application.
Download and Build MEAN.JS App
Clone the meanjs/mean repository, and install npm and bower packages.
git clone https://github.com/meanjs/mean.git
cd mean
npm install
bower install
Modify MEAN.JS App
- Add the wait_mongo_start.sh start-up script to the root of the mean project.
- Modify the Dockerfile, replacing CMD ["grunt"] with CMD /bin/sh /home/mean/wait_mongo_start.sh.
- Optional, add the fig_start.sh clean-up/start-up script to the root of the mean project.
Fix Existing Issue with MEAN.JS App When Using Docker and Fig
The existing MEAN.JS application references localhost in the development configuration (config/env/development.js). The development configuration is the one used by the MEAN.JS application at start-up. The MongoDB data container (db_1) is not running on localhost; it is running on an IP address assigned by Docker. To discover the IP address, we must reference an environment variable (DB_1_PORT_27017_TCP_ADDR), created by Docker, within the Node.js web-application container (web_1).
- Modify the config/env/development.js file, add var DB_HOST = process.env.DB_1_PORT_27017_TCP_ADDR || 'localhost';
- Modify the config/env/development.js file, change db: 'mongodb://localhost/mean-dev', to db: 'mongodb://' + DB_HOST + '/mean-dev',
Start the Application
Start the application using Fig commands or using the clean-up/start-up script (sh fig_start.sh).
- Run fig build && fig up
- Alternately, run sh fig_start.sh
The Details…
The CMD instruction is the last step in the Dockerfile. It sets the wait_mongo_start.sh script to execute in the Node.js web-application container (web_1) when the container starts. This script prevents the grunt command from running until nc (netcat) succeeds at connecting to the IP address and port of mongod, the primary daemon process for the MongoDB system, on the MongoDB data container (db_1). The script uses a 3-second polling interval, which can be modified if necessary.
#!/bin/sh

polling_interval=3

# optional, view db_1 container-related env vars
#env | grep DB_1 | sort

echo "wait for mongo to start first..."

# wait until mongo is running in db_1 container
until nc -z $DB_1_PORT_27017_TCP_ADDR $DB_1_PORT_27017_TCP_PORT
do
  echo "waiting for $polling_interval seconds..."
  sleep $polling_interval
done

# start node app
grunt
The environment variables referenced in the script are created automatically by Docker in the Node.js web-application container (web_1). They are shown in the screen grab below. You can discover these variables by uncommenting the env | grep DB_1 | sort line, above.
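The variable names follow Docker’s container-linking convention. A listing similar to the following is what you should expect to see (illustrative only; the IP address and exact values will differ on your Docker host):

# illustrative output of 'env | grep DB_1 | sort' inside web_1 -- placeholder IP address
DB_1_PORT=tcp://172.17.0.15:27017
DB_1_PORT_27017_TCP=tcp://172.17.0.15:27017
DB_1_PORT_27017_TCP_ADDR=172.17.0.15
DB_1_PORT_27017_TCP_PORT=27017
DB_1_PORT_27017_TCP_PROTO=tcp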
The Dockerfile modification is highlighted below.
FROM dockerfile/nodejs

MAINTAINER Matthias Luebken, matthias@catalyst-zero.com

WORKDIR /home/mean

# Install Mean.JS Prerequisites
RUN npm install -g grunt-cli
RUN npm install -g bower

# Install Mean.JS packages
ADD package.json /home/mean/package.json
RUN npm install

# Manually trigger bower. Why doesn't this work via npm install?
ADD .bowerrc /home/mean/.bowerrc
ADD bower.json /home/mean/bower.json
RUN bower install --config.interactive=false --allow-root

# Make everything available for start
ADD . /home/mean

# Currently only works for development
ENV NODE_ENV development

# Port 3000 for server
# Port 35729 for livereload
EXPOSE 3000 35729

CMD /bin/sh /home/mean/wait_mongo_start.sh
The config/env/development.js modifications are highlighted below (abridged code).
'use strict';

// used when building application using fig and Docker
var DB_HOST = process.env.DB_1_PORT_27017_TCP_ADDR || 'localhost';

module.exports = {
  db: 'mongodb://' + DB_HOST + '/mean-dev',
  log: {
    // Can specify one of 'combined', 'common', 'dev', 'short', 'tiny'
    format: 'dev',
    // Stream defaults to process.stdout
    // Uncomment to enable logging to a log on the file system
    options: {
      //stream: 'access.log'
    }
  },
  ...
The fig_start.sh file is optional and not part of the solution for the race condition. Instead of repeating multiple commands, I prefer running a single script that executes the commands consistently. Note, the commands in this script remove ALL ‘Exited’ containers and untagged (<none>) images.
#!/bin/sh

# remove all exited containers
echo "Removing all 'Exited' containers..."
docker rm -f $(docker ps --filter 'status=Exited' -a) > /dev/null 2>&1

# remove all untagged (<none>) images
echo "Removing all untagged images..."
docker rmi $(docker images | grep "^<none>" | awk '{print $3}') > /dev/null 2>&1

# build and start containers with fig
fig build && fig up
MEAN Application Start-Up Screen Grabs
Below are screen grabs showing the MEAN.JS application starting up, both before and after the changes were implemented.
Configure Chef Client on Windows for a Proxy Server
Posted by Gary A. Stafford in Bash Scripting, Build Automation, DevOps, Enterprise Software Development, Software Development on January 1, 2014
Configure Chef Client on Windows to work with a proxy server, by modifying Chef Knife’s configuration file.
Introduction
In my last two posts, Configure Git for Windows and Vagrant on a Corporate Network and Easy Configuration of Git for Windows on a Corporate Network, I demonstrated how to configure Git for Windows and Vagrant to work properly on a corporate network with a proxy server. Modifying the .bashrc file and adding a few proxy-related environment variables worked fine for Git and Vagrant.
However, even though Chef Client also uses the Git Bash interactive shell to execute commands on Windows using Knife, Chef depends on Knife’s configuration file (knife.rb) for proxy settings. In the following example, Git and Vagrant connect to the proxy server and authenticate using the proxy-related environment variables created by the ‘proxy_on’ function (described in my last post). However, Chef’s Knife command line tool fails to return the status of the online Hosted Chef server account, because the default knife.rb file contains no proxy server settings.
For Chef to work correctly behind a proxy server, you must modify the knife.rb file, adding the necessary proxy-related settings. The good news is that we can leverage the same proxy-related environment variables we already created for Git and Vagrant.
Configuring Chef Client
First, make sure you have your knife.rb file in the .chef folder within your home directory (C:\Users\username\.chef\knife.rb). This allows Chef to use the knife.rb file’s settings for all Chef repos on your local machine.
Next, make sure you have the following environment variables set up on your computer: USERNAME, USERDNSDOMAIN, PASSWORD, PROXY_SERVER, and PROXY_PORT. USERNAME and USERDNSDOMAIN should already be present in the system-wide environment variables on Windows. If you haven’t already created the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variables, based on my last post, I suggest adding them to the current user environment (Environment Variables -> User variables, shown below), as opposed to the system-wide environment (Environment Variables -> System variables). You can add the User variables manually, using Windows+Pause Keys -> Advanced system settings -> Environment Variables… -> New…
Alternately, you can use the ‘SETX’ command; see the commands below. When using ‘SETX’, do not use the ‘/m’ parameter, especially when setting the PASSWORD variable. According to SETX help (‘SETX /?’), the ‘/m’ parameter specifies that the variable be set in the system-wide (HKEY_LOCAL_MACHINE) environment. The default is to set the variable under the HKEY_CURRENT_USER environment (no ‘/m’). If you set your PASSWORD in the system-wide environment, all user accounts on your machine could get your PASSWORD.
To see your changes with SETX, close and re-open your current command prompt window. Then, use an ‘env | grep -e PASSWORD -e PROXY’ command to view the three new environment variables.
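If you take the SETX route, the commands look roughly like the following (a sketch with placeholder values; substitute your own password, proxy server, and port):

REM sketch only -- placeholder values; run without /m so the variables stay under HKEY_CURRENT_USER
SETX PASSWORD "your_windows_password"
SETX PROXY_SERVER "proxy01"
SETX PROXY_PORT "8080"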
[gist https://gist.github.com/garystafford/8233123 /]

Lastly, modify your existing knife.rb file, adding the required proxy-related settings, shown below. Notice, we use the ‘HTTP_PROXY’ and ‘HTTPS_PROXY’ environment variables set by ‘proxy_on’; no need to redefine them. Since my particular network environment requires proxy authentication, I have also included the ‘http_proxy_user’, ‘http_proxy_pass’, ‘https_proxy_user’, and ‘https_proxy_pass’ settings.
[gist https://gist.github.com/garystafford/8222755 /]

If your environment requires authentication and you fail to set these variables, you will see an error similar to the one shown below. Note the first line of the error. In this example, Chef cannot authenticate against the https proxy server. There is an ‘https_proxy’ setting, but no ‘https_proxy_user’ and ‘https_proxy_pass’ settings in the Knife configuration file.
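For reference, the proxy-related additions to knife.rb take roughly the following shape. This is only a sketch based on the setting names discussed above, reading the environment variables exported by ‘proxy_on’; your credential sources and exact values may differ from the embedded gist:

# sketch of knife.rb proxy settings -- HTTP_PROXY/HTTPS_PROXY come from the 'proxy_on' function
http_proxy        ENV['HTTP_PROXY']
https_proxy       ENV['HTTPS_PROXY']
# required when the proxy demands authentication
http_proxy_user   ENV['USERNAME']
http_proxy_pass   ENV['PASSWORD']
https_proxy_user  ENV['USERNAME']
https_proxy_pass  ENV['PASSWORD']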
Using the Code
With the proxy settings added to the knife.rb file, Knife is able to connect to the proxy server, authenticate, and complete its status check successfully. Now, Git, Vagrant, and Chef all have Internet connectivity through the proxy server, as shown below.
Why Include Authentication Settings?
Even with the domain, username, and password all included in the HTTP_PROXY and HTTPS_PROXY URIs, Chef still insists on using the ‘http_proxy_user’ and ‘http_proxy_pass’ or ‘https_proxy_user’ and ‘https_proxy_pass’ credential settings for proxy authentication. In my tests, if these settings are missing from Knife’s configuration file, Chef fails to authenticate with the proxy server.
Configure Git for Windows and Vagrant on a Corporate Network
Posted by Gary A. Stafford in Bash Scripting, Build Automation, DevOps, Enterprise Software Development, Software Development on December 31, 2013
Modified bashrc configuration for Git for Windows to work with both Git and Vagrant.

Introduction
In my last post, Easy Configuration of Git for Windows on a Corporate Network, I demonstrated how to configure Git for Windows to work when switching between working on-site, working off-site through a VPN, and working totally off the corporate network. Dealing with a proxy server was the main concern. The solution worked fine for Git. However, after further testing with Vagrant using the Git Bash interactive shell, I ran into a snag. Unlike Git, Vagrant did not seem to like the standard URI, which contained ‘domain\username’:
http(s)://domain\username:password@proxy_server:proxy_port
In a corporate environment with LDAP, qualifying the username with a domain is normal, like ‘domain\username’. But, when trying to install a Vagrant plug-in with a command such as ‘vagrant plugin install vagrant-omnibus’, I received an error similar to the following (proxy details obscured):
$ vagrant plugin install vagrant-omnibus
Installing the 'vagrant-omnibus' plugin. This can take a few minutes...
c:/HashiCorp/Vagrant/embedded/lib/ruby/2.0.0/uri/common.rb:176: in `split':
bad URI(is not URI?): http://domain\username:password@proxy:port
(URI::InvalidURIError)...
Solution
After some research, it seems Vagrant’s ‘common.rb’ URI function does not like the ‘domain\username’ format of the original URI. To fix this problem, I modified the original ‘proxy_on’ function, removing the DOMAIN environment variable. I now suggest using the fully qualified domain name (FQDN) of the proxy server. So, instead of ‘my_proxy’, it would be ‘my_proxy.domain.tld’. The acronym ‘tld’ stands for top-level domain. Although .com is the most common, there are over 300 top-level domains, so I don’t want to assume yours is ‘.com’. The new proxy URI is as follows:
http(s)://username:password@proxy_server.domain.tld:proxy_port
Although all environments have different characteristics, I have found this change to work, with both Git and Vagrant, in my own environment. After making this change, I was able to install plug-ins and do other similar functions with Vagrant, using the Git Bash interactive shell.
$ vagrant plugin install vagrant-omnibus
Installing the 'vagrant-omnibus' plugin. This can take a few minutes...
Installed the plugin 'vagrant-omnibus (1.2.1)'!
Change to Environment Variables
One change you will notice compared to my last post, and unrelated to the Vagrant domain issue, is a change to the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variables. In the last post, I created and exported the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variables within the ‘proxy_on’ function. After further consideration, I permanently moved them to Environment Variables -> User variables. I felt this was a better solution, especially for my password. Instead of my user account’s password residing in the .bashrc file in plain text, it’s now in my user’s environment variables. Although still not ideal, I felt my password was slightly more secure. Also, since my proxy server address rarely changes when I am at work or on the VPN, I felt moving these was easier and cleaner than placing them in the .bashrc file.
The New Code
Verbose version:
# configure proxy for git while on corporate network
function proxy_on(){
  # assumes $USERDOMAIN, $USERNAME, $USERDNSDOMAIN
  # are existing Windows system-level environment variables

  # assumes $PASSWORD, $PROXY_SERVER, $PROXY_PORT
  # are existing Windows current user-level environment variables (your user)

  # environment variables are UPPERCASE even in git bash
  export HTTP_PROXY="http://$USERNAME:$PASSWORD@$PROXY_SERVER.$USERDNSDOMAIN:$PROXY_PORT"
  export HTTPS_PROXY=$HTTP_PROXY
  export FTP_PROXY=$HTTP_PROXY
  export SOCKS_PROXY=$HTTP_PROXY
  export NO_PROXY="localhost,127.0.0.1,$USERDNSDOMAIN"

  # optional for debugging
  export GIT_CURL_VERBOSE=1

  # optional for self-signed SSL certs and
  # internal CA certificates in a corporate environment
  export GIT_SSL_NO_VERIFY=1

  env | grep -e _PROXY -e GIT_ | sort
  echo -e "\nProxy-related environment variables set."
}

# remove proxy settings when off corporate network
function proxy_off(){
  variables=( \
    "HTTP_PROXY" "HTTPS_PROXY" "FTP_PROXY" "SOCKS_PROXY" \
    "NO_PROXY" "GIT_CURL_VERBOSE" "GIT_SSL_NO_VERIFY" \
  )

  for i in "${variables[@]}"
  do
    unset $i
  done

  env | grep -e _PROXY -e GIT_ | sort
  echo -e "\nProxy-related environment variables removed."
}

# if you are always behind a proxy uncomment below
#proxy_on

# increase verbosity of Vagrant output
export VAGRANT_LOG=INFO
Compact version:
function proxy_on(){
  export HTTP_PROXY="http://$USERNAME:$PASSWORD@$PROXY_SERVER.$USERDNSDOMAIN:$PROXY_PORT"
  export HTTPS_PROXY="$HTTP_PROXY" FTP_PROXY="$HTTP_PROXY" ALL_PROXY="$HTTP_PROXY" \
    NO_PROXY="localhost,127.0.0.1,*.$USERDNSDOMAIN" \
    GIT_CURL_VERBOSE=1 GIT_SSL_NO_VERIFY=1
  echo -e "\nProxy-related environment variables set."
}

function proxy_off(){
  variables=( "HTTP_PROXY" "HTTPS_PROXY" "FTP_PROXY" "ALL_PROXY" \
    "NO_PROXY" "GIT_CURL_VERBOSE" "GIT_SSL_NO_VERIFY" )
  for i in "${variables[@]}"; do unset $i; done
  echo -e "\nProxy-related environment variables removed."
}

# if you are always behind a proxy uncomment below
#proxy_on

# increase verbosity of Vagrant output
export VAGRANT_LOG=INFO
Easy Configuration of Git for Windows on a Corporate Network
Posted by Gary A. Stafford in Bash Scripting, DevOps, Enterprise Software Development, Software Development on December 25, 2013
Configure Git for Windows to work when switching between working on-site, working off-site through a VPN, and working totally off the corporate network.

Introduction
Configuring Git to work on your corporate network can be challenging. A typical large corporate network may require Git to work behind proxy servers and firewalls, use LDAP authentication on a corporate domain, handle password expiration, deal with self-signed and internal CA certificates, and so forth. Telecommuters have the added burden of constantly switching device configurations between working on-site, working off-site through a VPN, and working totally off the corporate network at home or the local coffee shop.
There are dozens of posts on the Internet from users trying to configure Git for Windows to work on their corporate network. Many posts are oriented toward Git on Unix-based systems. Many responses only offer partial solutions without any explanation. Some responses incorrectly mix configurations for Unix-based systems with those for Windows.
Most solutions involve one of two approaches to handling proxy servers, authentication, and so forth: modify Git’s .gitconfig file, or set equivalent environment variables that Git will look for automatically. In my particular development situation, I spend equal amounts of time on and off a corporate network, on a Windows-based laptop. If I were always on-site, I would modify the .gitconfig file. However, since I am constantly moving on and off the network with a laptop, I chose a solution that creates and destroys the environment variables as I move on and off the corporate network.
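For comparison, the always-on-site approach would hard-code the proxy in the .gitconfig file, roughly like this (a sketch with placeholder values, not the approach used in the rest of this post):

# .gitconfig sketch -- placeholder proxy address, credentials, and port
[http]
    proxy = http://username:password@proxy_server:8080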
Git for Windows
Whether you download Git from the Git website or the msysGit website, you will get the msysGit version of Git for Windows. As explained on the msysGit Wiki, msysGit is the build environment for Git for Windows. MSYS (thus the name, msysGit), is a Bourne Shell command line interpreter system, used by MinGW and originally forked from Cygwin. MinGW is a minimalist development environment for native Microsoft Windows applications.
Why do you care? By installing Git for Windows, you actually get a fairly functional Unix system running on Windows. Many of the commands you use on Unix-based systems also work on Windows, within msysGit’s Git Bash.
Setting Up Code
There are two identical versions of this post’s code, a well-commented version and a compact version. Add either version’s contents to the .bashrc file in your home directory. If you’ve worked with Linux, you are probably familiar with the .bashrc file and its functionality. On Unix-based systems, your home directory is ‘~/’ (/home/username), while on Windows, the equivalent directory path is ‘C:\Users\username\’.
On Windows, the .bashrc file is not created by default by Git for Windows. If you do not have a .bashrc file already, the easiest way to implement the post’s code is to download either Gist, shown below, from GitHub, rename it to .bashrc, and place it in your home directory.
After adding the code, change the PASSWORD, PROXY_SERVER, and PROXY_PORT environment variable values to match your network. Security note: this solution requires you to store your Windows user account password in plain text on your local system. This presents a certain level of security risk, as would storing it in your .gitconfig file.
The script assumes the same proxy server address for all protocols – HTTP, HTTPS, FTP, and SOCKS. If any of the proxy servers or ports are different, simply change the script’s variables. You may also choose to add other variables and protocols, or remove them, based on your network requirements. Remember, environment variables on Windows are UPPERCASE. Even when using the interactive Git Bash shell, environment variables need to be UPPERCASED.
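For example, if FTP traffic went through a different proxy on your network, you could override just that variable inside ‘proxy_on’ (a sketch; the host and port are placeholders):

# sketch: separate FTP proxy, exported after the other variables in proxy_on
export FTP_PROXY="http://$USERNAME:$PASSWORD@ftp-proxy.example.com:2121"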
Lastly, as with most shells, you must exit any current interactive Git Bash shells and re-open a new interactive shell for the new functions in the .bashrc file to be available.
Verbose version:
[gist https://gist.github.com/garystafford/8128922 /]
Compact version:
[gist https://gist.github.com/garystafford/8135027 /]
Using the Code
When on-site and connected to your corporate network, or off-site and connected through a VPN, execute the ‘proxy_on’ function. When off your corporate network, execute the ‘proxy_off’ function.
Below are a few examples of using Git to clone the popular angular.js repo from github.com (git clone https://github.com/angular/angular.js). The first example shows what happens on the corporate network when Git for Windows is not configured to work with the proxy server.
The next example demonstrates successfully cloning the angular.js repo from github.com, while on the corporate network. The environment variables are set with the ‘proxy_on’ function. I have obscured the variable’s values and most of the verbose output from Git to hide confidential network-related details.
What’s My Proxy Server Address?
To set up the ‘proxy_on’ function, you need to know your proxy server’s address. One way to find this is Control Panel -> Internet Options -> Connections -> LAN Settings. If your network requires a proxy server, it should be configured here.
However, on many corporate networks, Windows devices are configured to use a proxy auto-config (PAC) file. According to Wikipedia, a PAC file defines how web browsers and other user agents can automatically choose a network’s appropriate proxy server. The downside of a PAC file is that you cannot easily figure out what proxy server you are connected to.
To discover your proxy server when a PAC file is used, open a Windows command prompt and execute the following command. Use the command’s output to populate the script’s PROXY_SERVER and PROXY_PORT variables.
reg query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "proxyserver"
Resources
Arch Linux Wiki – Proxy Settings