Posts Tagged Continuous Delivery

Preparing for Your Organization’s DevOps Journey

Introduction

Recently, I was asked two questions regarding DevOps. The first, ‘How do you get started implementing DevOps in an organization?’, is a question I am asked, and answer, fairly frequently. The second was a bit more challenging to answer: ‘How do you prepare your organization to implement DevOps?’

Getting Started

The first question, ‘How do you get started implementing DevOps in an organization?’, is a popular question many companies ask. The answer varies depending on who you ask, but the process is fairly well practiced and documented by a number of well-known and respected industry pundits. A successful DevOps implementation is a combination of strategic planning and effective execution.

A successful DevOps implementation is a combination of strategic planning and effective execution.

Most commonly, an organization starts with some form of a DevOps maturity assessment. The concept of a DevOps maturity model was introduced by Jez Humble and David Farley, in their ground-breaking book, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series), circa 2011.

Humble and Farley presented their ‘Maturity Model for Configuration and Release Management’ (page 419). This model, which encompassed much more than just CM and RM, was created as a means of evaluating and improving an organization’s DevOps practices.

Although there are several variations, most maturity models provide some means of ranking the relative maturity of an organization’s DevOps practices. Less sophisticated models focus primarily on tooling and processes. More holistic models, such as Accenture’s DevOps Maturity Assessment, focus on tooling, processes, people, and culture.

Following the analysis, most industry experts recommend a strategic plan, followed by an implementation plan. The plans set milestones for reaching higher levels of maturity, according to the model. Experts will also identify key performance indicators, such as release frequency, defect rates, production downtime, and mean time to recovery from failures, which are often used to measure DevOps success.

Preparing for the Journey

As I said, the second question, ‘How do you prepare your organization to implement DevOps?’, is a bit more challenging to answer. And, as any good consultant would respond, it depends.

The exact answer depends on many factors. How engaged is management in wanting to transform their organization? How mature are the organization’s current IT practices? Are the other parts of the organization, such as sales, marketing, training, product documentation, and customer support, aligned with IT? Is IT aligned with them?

Even the basics matter, such as the organization’s size, both physical and financial, as well as its age. What industry are they in? Are they in a highly regulated industry? Are they a global organization with distributed IT resources? Have they tried DevOps before and failed? Why did they fail?

As overwhelming as those questions might seem, I managed to break down my answer to the question, “How do you prepare your organization to implement DevOps?”, into five key areas. In my experience, each of these is critical for any DevOps transformation to succeed. Before the journey starts, these are five areas an organization needs to consider:

  1. Have an Agile Mindset
  2. Break Down Silos
  3. Know Your Business
  4. Take the Long View
  5. Be Introspective

Have an Agile Mindset

It is commonly accepted that DevOps was born from the need of Agile software development to increase the frequency of releases. More releases required faster feedback loops, better quality control methods, and the increased use of automation, amongst other necessities. DevOps practices evolved to meet those challenges.

If an organization is considering DevOps, it should have already successfully embraced Agile, or be well along in its Agile transformation. An outgrowth of Agile software development, DevOps follows many Agile practices. Agile practices such as cross-team collaboration, continuous and rapid feedback loops, continuous improvement, test-driven development, continuous integration, scheduling work in sprints, and breaking down business requirements into epics, stories, and tasks are usually all part of a successful DevOps implementation.

If your organization cannot adopt Agile, it will likely fail to successfully embrace DevOps. Imagine a typical scenario in which DevOps enables an organization to release more frequently: monthly instead of quarterly, weekly instead of monthly. However, if the rest of the organization (sales, marketing, training, product documentation, and customer support) is still working in a non-Agile manner, it will not be able to match the improved cycle time DevOps provides.

Break Down Silos

Closely associated with an Agile mindset is breaking down departmental silos. If your organization has already made an Agile transformation, then one should assume those ‘silos’, the physical or more often process-induced ‘walls’ between departments, have been torn down. Having embraced Agile, we assume that Development and Testing are working side-by-side as part of an Agile software development team.

Implementing DevOps requires closing the often wide gap between Development and Operations. If your organization cannot tear down the typically shorter wall between Development and Testing, then tearing down the larger walls between Development and Operations will be impossible.

Know Your Business

Before starting its DevOps journey, an organization needs to know itself. Most organizations establish business metrics, such as sales quotas, profit targets, employee retention objectives, and client acquisition goals. However, many organizations have not formalized their IT-related Key Performance Indicators (KPIs) or Service Level Agreements (SLAs).

DevOps is all about measurement: application response time; incident volume, severity, and impact; defect density; Mean Time To Recovery (MTTR); downtime; uptime; and so forth. Establishing meaningful and measurable metrics is one of the best ways to evaluate the continuous improvements achieved by a maturing DevOps practice.

To successfully implement DevOps, an organization should first identify its business critical performance metrics and service level expectations. Additionally, an organization must accurately and honestly measure itself against those metrics, before beginning the DevOps journey.

Take the Long View

Rome was not built in a day, organizations don’t transform overnight, and DevOps is a journey, not a time-boxed task in a team’s backlog. Before an organization sets out on its journey, it must be willing to take the long view on DevOps. There is a reason DevOps maturity models exist. Like most engineering practices, cultural and organizational transformations, and skill-building exercises, DevOps takes time to become successfully entrenched in a company.

Rome was not built in a day, organizations don’t transform overnight, and DevOps is a journey, not a time-boxed task in a team’s backlog.

Organizations need to value quick, small wins, followed by more small wins. They should not expect a big bang with DevOps. Achieving high levels of DevOps performance is similar to the Agile practice of delivering small pieces of valuable functionality, in an incremental fashion.

Getting the ‘Hello World’ application successfully through a simple continuous integration pipeline might seem small, but think of all the barriers that were overcome to achieve that task: source control, a continuous integration server, unit testing, an artifact repository, and so on. Your next win: deploy that ‘Hello World’ application to your Test environment, automatically, through a continuous deployment pipeline.

This practice reminds me of an adage. Would you prefer a dollar, every day for the next week, or seven dollars at the end of the week? Most people prefer the immediacy of a dollar each day (small wins), as well as the satisfaction of seeing the value build consistently, day after day. Exercise the same philosophy with DevOps.

Be Introspective

As stated earlier, generally, the first step in creating a strategic plan for implementing DevOps is analyzing your organization’s current level of IT maturity. Individual departments must be willing to be open, honest, and objective when assessing their current state.

The inability of an organization to be transparent about its practices, challenges, and performance is a sign of an unhealthy corporate culture. Not only is an accurate perspective critical for a maturity analysis and strategic planning, but the existence of an unhealthy culture can also be fatal to most DevOps transformations. DevOps only thrives in an open, collaborative, and supportive culture.

Conclusion

As Alexander Graham Bell once famously said, ‘before anything else, preparation is the key to success.’ Although not a guarantee, properly preparing for a DevOps transformation by addressing these five key areas should greatly improve an organization’s chances of success.

All opinions in this post are my own and not necessarily the views of my current employer or their clients.

Infrastructure as Code Maturity Model

Systematically Evolving an Organization’s Infrastructure

Infrastructure and software development teams are increasingly building and managing infrastructure using automated tools that have been described as “infrastructure as code.” – Kief Morris (Infrastructure as Code)

The process of managing and provisioning computing infrastructure and their configuration through machine-processable, declarative, definition files, rather than physical hardware configuration or the use of interactive configuration tools. – Wikipedia (abridged)

Convergence of CD, Cloud, and IaC

In 2011, co-authors Jez Humble, formerly of ThoughtWorks, and David Farley published their ground-breaking book, Continuous Delivery. Humble and Farley’s book set out, in their words, to automate the ‘painful, risky, and time-consuming process’ of the software ‘build, deployment, and testing process’.

Over the next five years, Humble and Farley’s Continuous Delivery made a significant contribution to the modern phenomenon of DevOps. According to Wikipedia, DevOps is the ‘culture, movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes’.

In parallel with the growth of DevOps, Cloud Computing continued to grow at an explosive rate. Amazon pioneered modern cloud computing in 2006 with the launch of its Elastic Compute Cloud. Two years later, in 2008, Microsoft launched its cloud platform, Azure. In 2010, Rackspace launched OpenStack.

Today, there is a flock of ‘cloud’ providers. Their services fall into three primary service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Since we will be discussing infrastructure, we will focus on IaaS and PaaS. Leaders in this space include Google Cloud Platform, RedHat, Oracle Cloud, Pivotal Cloud Foundry, CenturyLink Cloud, Apprenda, IBM SmartCloud Enterprise, and Heroku, to mention just a few.

Finally, fast forward to June 2016: O’Reilly releases Infrastructure as Code: Managing Servers in the Cloud, by Kief Morris of ThoughtWorks. This crucial work bridges many of the concepts first introduced in Humble and Farley’s Continuous Delivery with the evolving processes and practices to support cloud computing.

This post examines how to apply the principles found in the Continuous Delivery Maturity Model, an analysis tool detailed in Humble and Farley’s Continuous Delivery, and discussed herein, to the best practices found in Morris’ Infrastructure as Code.

Infrastructure as Code

Before we continue, we need a shared understanding of infrastructure as code. Below are four examples of infrastructure as code, as Wikipedia defined them, ‘machine-processable, declarative, definition files.’ The code was written using four popular tools, including HashiCorp Packer, Docker, AWS CloudFormation, and HashiCorp Terraform. Executing the code provisions virtualized cloud infrastructure.

HashiCorp Packer

Packer definition of an AWS EBS-backed AMI, based on Ubuntu.

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-fce3c696",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }]
}
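
Although not part of the original example, a template like the one above would typically be validated and executed with Packer’s standard commands. A minimal sketch, assuming the template is saved as packer-example.json (the file name and credential values are illustrative):

# validate the template, then build the AMI
packer validate packer-example.json
packer build \
  -var 'aws_access_key=YOUR_ACCESS_KEY' \
  -var 'aws_secret_key=YOUR_SECRET_KEY' \
  packer-example.json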

Docker

Dockerfile, used to create a Docker image, and subsequently a Docker container, running MongoDB.

FROM ubuntu:16.04
MAINTAINER Docker
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | \
  tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y mongodb-org
RUN mkdir -p /data/db
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]
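
For context, a minimal sketch of how this Dockerfile might be exercised, assuming it is saved in the current directory (the image and container names are illustrative):

# build the image, then start a detached MongoDB container from it
docker build -t mongodb-example .
docker run -d --name mongodb -p 27017:27017 mongodb-example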

AWS CloudFormation

AWS CloudFormation declaration for three services enabled on a running instance.

services:
  sysvinit:
    nginx:
      enabled: "true"
      ensureRunning: "true"
      files:
        - "/etc/nginx/nginx.conf"
      sources:
        - "/var/www/html"
    php-fastcgi:
      enabled: "true"
      ensureRunning: "true"
      packages:
        yum:
          - "php"
          - "spawn-fcgi"
    sendmail:
      enabled: "false"
      ensureRunning: "false"
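
A fragment like this would typically live under a resource’s AWS::CloudFormation::Init metadata and be applied on the instance by the cfn-init helper script. A hedged sketch, with stack and resource names that are purely illustrative:

# apply the instance's AWS::CloudFormation::Init metadata
/opt/aws/bin/cfn-init --verbose \
  --stack MyWebStack \
  --resource WebServerInstance \
  --region us-east-1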

HashiCorp Terraform

Terraform definition of an AWS m1.small EC2 instance, running NGINX on Ubuntu.

resource "aws_instance" "web" {
  connection { user = "ubuntu" }
  instance_type          = "m1.small"
  ami                    = "${lookup(var.aws_amis, var.aws_region)}"
  key_name               = "${aws_key_pair.auth.id}"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]
  subnet_id              = "${aws_subnet.default.id}"

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
      "sudo apt-get -y install nginx",
      "sudo service nginx start",
    ]
  }
}
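
Assuming the referenced variables, key pair, security group, and subnet are defined elsewhere in the configuration, a typical workflow to preview and apply this definition would look something like the following:

# preview the planned changes, then provision the instance
terraform plan
terraform apply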

Cloud-based Infrastructure as a Service

The previous examples provide but the narrowest of views into the potential breadth of infrastructure as code. Leading cloud providers, such as Amazon and Microsoft, offer hundreds of unique offerings, most of which may be defined and manipulated through code — infrastructure as code.

What Infrastructure Can Be Defined as Code?

The question many ask is, what types of infrastructure can be defined as code? Although vendors and cloud providers have their unique names and descriptions, most infrastructure is divided into a few broad categories:

  • Compute
  • Databases, Caching, and Messaging
  • Storage, Backup, and Content Delivery
  • Networking
  • Security and Identity
  • Monitoring, Logging, and Analytics
  • Management Tooling

Continuous Delivery Maturity Model

We also need a common understanding of the Continuous Delivery Maturity Model. According to Humble and Farley, the Continuous Delivery Maturity Model ‘helps to identify where an organization stands in terms of the maturity of its processes and practices and defines a progression that an organization can work through to improve’.

The Continuous Delivery Maturity Model is a 5×6 matrix, consisting of six areas of practice and five levels of maturity. Each of the matrix’s 30 elements defines a required discipline an organization needs to follow, to be considered at that level of maturity within that practice.

Areas of Practice

The CD Maturity Model examines six broad areas of practice found in most enterprise software organizations:

  • Build Management and Continuous Integration
  • Environments and Deployment
  • Release Management and Compliance
  • Testing
  • Data Management
  • Configuration Management

Levels of Maturity

The CD Maturity Model defines five levels of increasing maturity, scored from -1 to 3, from Regressive to Optimizing:

  • Level 3: Optimizing – Focus on process improvement
  • Level 2: Quantitatively Managed – Process measured and controlled
  • Level 1: Consistent – Automated processes applied across whole application lifecycle
  • Level 0: Repeatable – Process documented and partly automated
  • Level -1: Regressive – Processes unrepeatable, poorly controlled, and reactive

Maturity Model Analysis

The CD Maturity Model is an analysis tool. In my experience, organizations use the maturity model in one of two ways. First, an organization completes an impartial evaluation of their existing levels of maturity across all areas of practice. Then, the organization focuses on improving the overall organization’s maturity, attempting to achieve a consistent level of maturity across all areas of practice. Alternately, the organization concentrates on a subset of the practices, which have the greatest business value, or given their relative immaturity, are a detriment to the other practices.

* CD Maturity Model Analysis Tool available on GitHub.

Infrastructure as Code Maturity Levels

Although infrastructure as code is not explicitly called out as a practice in the CD Maturity Model, many of its best practices can be found in the maturity model. For example, the model prescribes automated environment provisioning, orchestrated deployments, and the use of metrics for continuous improvement.

Instead of trying to retrofit infrastructure as code into the existing CD Maturity Model, I believe it is more effective to independently apply the model’s five levels of maturity to infrastructure as code. To that end, I have selected many of the best practices from the book, Infrastructure as Code, as well as from my experiences. Those selected practices have been distributed across the model’s five levels of maturity.

The result is the first pass at an evolving Infrastructure as Code Maturity Model. This model may be applied alongside the broader CD Maturity Model, or independently, to evaluate and further develop an organization’s infrastructure practices.

IaC Level -1: Regressive

Processes unrepeatable, poorly controlled, and reactive

  • Limited infrastructure is provisioned and managed as code
  • Infrastructure provisioning still requires many manual processes
  • Infrastructure code is not written using industry-standard tooling and patterns
  • Infrastructure code not built, unit-tested, provisioned and managed, as part of a pipeline
  • Infrastructure code, processes, and procedures are inconsistently documented, and not available to all required parties

IaC Level 0: Repeatable

Processes documented and partly automated

  • All infrastructure code and configuration are stored in a centralized version control system
  • Testing, provisioning, and management of infrastructure are done as part of automated pipeline
  • Infrastructure is deployable as individual components
  • Leverages programmatic interfaces into physical devices
  • Automated security inspection of components and dependencies
  • Self-service CLI or API, where internal customers provision their resources
  • All code, processes, and procedures documented and available
  • Immutable infrastructure and processes

IaC Level 1: Consistent

Automated processes applied across whole application lifecycle

  • Fully automated provisioning and management of infrastructure
  • Minimal use of unsupported, ‘home-grown’ infrastructure tooling
  • Unit-tests meet code-coverage requirements
  • Code is continuously tested upon every check-in to version control system
  • Continuously available infrastructure using zero-downtime provisioning
  • Uses configuration registries
  • Templatized configuration files (no awk/sed magic)
  • Secrets are securely managed
  • Auto-scaling based on user-defined load characteristics

IaC Level 2: Quantitatively Managed

Processes measured and controlled

  • Uses infrastructure definition files
  • Capable of automated rollbacks
  • Infrastructure and supporting systems are highly available and fault tolerant
  • Externalized configuration, no black box API to modify configuration
  • Fully monitored infrastructure with configurable alerting
  • Aggregated, auditable infrastructure logging
  • All code, processes, and procedures are well documented in a Knowledge Management System
  • Infrastructure code uses declarative versus imperative programming model, maybe…

IaC Level 3: Optimizing

Focus on process improvement

  • Self-healing, self-configurable, self-optimizing, infrastructure
  • Performance tested and monitored against business KPIs
  • Maximal infrastructure utilization and workload density
  • Adheres to Cloud Native and 12-Factor patterns
  • Cloud-agnostic code that minimizes cloud vendor lock-in

All opinions in this post are my own and not necessarily the views of my current employer or their clients.

Automate the Provisioning and Configuration of HAProxy and an Apache Web Server Cluster Using Foreman

Use Vagrant, Foreman, and Puppet to provision and configure HAProxy as a reverse proxy, load-balancer for a cluster of Apache web servers.

Introduction

In this post, we will use several technologies, including Vagrant, Foreman, and Puppet, to provision and configure a basic load-balanced web server environment. In this environment, a single node with HAProxy will act as a reverse proxy and load-balancer for two identical Apache web server nodes. All three nodes will be provisioned and bootstrapped using Vagrant, from a Linux CentOS 6.5 Vagrant Box. Afterwards, Foreman, with Puppet, will be used to install and configure the nodes with HAProxy and Apache, using a series of Puppet modules.

For this post, I will assume you already have running instances of Vagrant with the vagrant-hostmanager plugin, VirtualBox, and Foreman. If you are unfamiliar with Vagrant, the vagrant-hostmanager plugin, VirtualBox, Foreman, or Puppet, review my recent post, Installing Foreman and Puppet Agent on Multiple VMs Using Vagrant and VirtualBox. That post demonstrates how to install and configure Foreman, and also demonstrates how to provision and bootstrap virtual machines using Vagrant and VirtualBox. Basically, we will be repeating many of these same steps in this post, with the addition of HAProxy, Apache, and some custom configuration Puppet modules.

All code for this post is available on GitHub. However, it has been updated as of 8/23/2015. Changes were required to fix compatibility issues with the latest versions of Puppet 4.x and Foreman. Additionally, the version of CentOS on all VMs was updated from 6.6 to 7.1, and the version of Foreman was updated from 1.7 to 1.9.

Steps

Here is a high-level overview of our steps in this post:

  1. Provision and configure the three CentOS-based virtual machines (‘nodes’) using Vagrant and VirtualBox
  2. Install the HAProxy and Apache Puppet modules, from Puppet Forge, onto the Foreman server
  3. Install the custom HAProxy and Apache Puppet configuration modules, from GitHub, onto the Foreman server
  4. Import the four new modules’ classes into Foreman’s Puppet class library
  5. Add the three new virtual machines (‘hosts’) to Foreman
  6. Configure the new hosts in Foreman, assigning the appropriate Puppet classes
  7. Apply the Foreman Puppet configurations to the new hosts
  8. Test that HAProxy is working as a reverse proxy and load-balancer for the two Apache web server nodes

In this post, I will use the terms ‘virtual machine’, ‘machine’, ‘node’, ‘agent node’, and ‘host’ interchangeably, based on each software product’s own nomenclature.

Provisioning

First, using the process described in the previous post, provision and bootstrap the three new virtual machines. The new machines’ Vagrant configuration is shown below. This should be added to the JSON configuration file. All code for the earlier post is available on GitHub.

{
  "nodes": {
    "haproxy.example.com": {
      ":ip": "192.168.35.101",
      "ports": [],
      ":memory": 512,
      ":bootstrap": "bootstrap-node.sh"
    },
    "node01.example.com": {
      ":ip": "192.168.35.121",
      "ports": [],
      ":memory": 512,
      ":bootstrap": "bootstrap-node.sh"
    },
    "node02.example.com": {
      ":ip": "192.168.35.122",
      "ports": [],
      ":memory": 512,
      ":bootstrap": "bootstrap-node.sh"
    }
  }
}
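
With the JSON configuration in place, the nodes are brought up using the standard Vagrant workflow. This assumes the Vagrantfile from the earlier post, which reads the JSON configuration file:

# provision and bootstrap all three nodes, then confirm they are running
vagrant up
vagrant status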

After provisioning and bootstrapping, observe the three machines running in Oracle’s VM VirtualBox Manager.

Oracle VM VirtualBox Manager View of New Nodes

Installing Puppet Forge Modules

The next task is to install the HAProxy and Apache Puppet modules on the Foreman server, so that Foreman has access to them. I chose the puppetlabs-haproxy HAProxy module and the puppetlabs-apache Apache module. Both modules were authored by Puppet Labs and are available on Puppet Forge.

The exact commands to install the modules onto your Foreman server will depend on your Foreman environment configuration. In my case, I used the following two commands to install the two Puppet Forge modules into my ‘Production’ environment’s module directory.

sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-haproxy
sudo puppet module install -i /etc/puppet/environments/production/modules puppetlabs-apache

# confirm module installation
puppet module list --modulepath /etc/puppet/environments/production/modules

Installing Configuration Modules

Next, install the HAProxy and Apache configuration Puppet modules on the Foreman server. Both modules are hosted in my GitHub repository. Both modules can be downloaded directly from GitHub and installed on the Foreman server from the command line. Again, the exact commands to install the modules onto your Foreman server will depend on your Foreman environment configuration. In my case, I used the following two commands to install the two configuration modules into my ‘Production’ environment’s module directory. Also, note that I am downloading version 0.1.0 of both modules, the latest at the time of writing this post. Make sure to double-check for the latest versions of both modules before running the commands, and modify the commands if necessary.

# apache config module
wget -N https://github.com/garystafford/garystafford-apache_example_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force

# haproxy config module
wget -N https://github.com/garystafford/garystafford-haproxy_node_config/archive/v0.1.0.tar.gz && \
sudo puppet module install -i /etc/puppet/environments/production/modules ~/v0.1.0.tar.gz --force

# confirm module installation
puppet module list --modulepath /etc/puppet/environments/production/modules

GitHub Repository for Apache Config Example

HAProxy Configuration
The HAProxy configuration module configures HAProxy’s /etc/haproxy/haproxy.cfg file. The single class in the module’s init.pp manifest is as follows:

class haproxy_node_config () inherits haproxy {
  haproxy::listen { 'puppet00':
    collect_exported => false,
    ipaddress        => '*',
    ports            => '80',
    mode             => 'http',
    options          => {
      'option'  => ['httplog'],
      'balance' => 'roundrobin',
    },
  }

  Haproxy::Balancermember <<| listening_service == 'puppet00' |>>

  haproxy::balancermember { 'haproxy':
    listening_service => 'puppet00',
    server_names      => ['node01.example.com', 'node02.example.com'],
    ipaddresses       => ['192.168.35.121', '192.168.35.122'],
    ports             => '80',
    options           => 'check',
  }
}

The resulting /etc/haproxy/haproxy.cfg file will have the following configuration added. It defines the two Apache web server nodes’ hostnames, IP addresses, and HTTP port. The configuration also defines the load-balancing method, ‘round-robin’ in our example. In this example, we are using layer 7 load-balancing (application layer – HTTP), as opposed to layer 4 load-balancing (transport layer – TCP). Either method will work for this example. The Puppet Labs HAProxy module’s documentation on Puppet Forge and HAProxy’s own documentation are both excellent starting points for understanding how to configure HAProxy. We are barely scraping the surface of HAProxy’s capabilities in this brief example.

listen puppet00
  bind *:80
  mode  http
  balance  roundrobin
  option  httplog
  server node01.example.com 192.168.35.121:80 check
  server node02.example.com 192.168.35.122:80 check

Apache Configuration
The Apache configuration module creates a default web page in Apache’s docroot directory, /var/www/html/index.html. The single class in the module’s init.pp manifest is shown below.

ApacheConfigClass

The resulting /var/www/html/index.html file will look like the following. Observe that the facter variables shown in the module manifest above have been replaced by the individual node’s hostname and IP address during application of the configuration by Puppet (i.e., ${fqdn} became node01.example.com).

ApacheConfigClass

Both of these Puppet modules were created specifically to configure HAProxy and Apache for this post. Unlike published modules on Puppet Forge, these two modules are very simple, and don’t necessarily represent the best practices and patterns for authoring Puppet Forge modules.

Importing into Foreman

After installing the new modules onto the Foreman server, we need to import them into Foreman. This is accomplished from the ‘Puppet classes’ tab, using the ‘Import from theforeman.example.com’ button. Once imported, the module classes are available to assign to host machines.

Importing Puppet Classes into Foreman

Add Hosts to Foreman

Next, add the three new hosts to Foreman. If you have questions on how to add the nodes to Foreman, start Puppet’s Certificate Signing Request (CSR) process on the hosts, sign the certificates, or complete other first-time tasks, refer to the previous post. That post explains this process in detail.

Foreman Hosts Tab Showing New Nodes

Configure the Hosts

Next, configure the HAProxy and Apache nodes with the necessary Puppet classes. In addition to the base module classes and configuration classes, I recommend adding the git and ntp modules to each of the new nodes. These modules were explained in the previous post. Refer to the screen-grabs below for the correct module classes to add, specific to HAProxy and Apache.

HAProxy Node Puppet Classes Tab

Apache Nodes Puppet Classes Tab

Agent Configuration and Testing the System

Once configurations are retrieved and applied by the Puppet Agent on each node, we can test our reverse proxy, load-balanced environment. To start, open a browser and load haproxy.example.com. You should see one of the two pages below. Refresh the page a few times. You should observe HAProxy redirecting you to one Apache web server node, and then the other, using HAProxy’s round-robin algorithm. You can differentiate the Apache web servers by the hostname and IP address displayed on the web page.
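
If you prefer the command line, a quick way to observe the round-robin behavior is to request the page repeatedly and watch the hostnames alternate. This sketch assumes the demo page contains each node’s fully qualified hostname, as described above:

# request the load-balanced site several times; responses should alternate between nodes
for i in {1..6}; do
  curl -s http://haproxy.example.com/ | grep -o 'node0[12]\.example\.com' | head -1
done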

Load Balancer Directing Traffic to Node01

Load Balancer Directing Traffic to Node02

After hitting HAProxy’s URL several times successfully, view HAProxy’s built-in Statistics Report page at http://haproxy.example.com/haproxy?stats. Note below, each of the two Apache nodes has been hit 44 times by HAProxy. This demonstrates the effectiveness of the reverse proxy and load-balancing features of HAProxy.

Statistics Report for HAProxy

Accessing Apache Directly
If you are testing HAProxy from the same machine on which you created the virtual machines (the VirtualBox host), you will likely be able to directly access either of the Apache web servers (e.g., node02.example.com). The VirtualBox host’s hosts file contains the IP addresses and hostnames of all three hosts. This DNS configuration was done automatically by the vagrant-hostmanager plugin. However, in an actual Production environment, only the HAProxy server’s hostname and IP address would be publicly accessible to a user. The two Apache nodes would sit behind a firewall, accessible only by the HAProxy server. HAProxy acts as a façade to the public side of the network.

Testing Apache Host Failure
The main reason you would likely use a load-balancer is high-availability. With HAProxy acting as a load-balancer, we should be able to impair one of the two Apache nodes, without noticeable disruption. HAProxy will continue to serve content from the remaining Apache web server node.

Log into node01.example.com using the following command: vagrant ssh node01.example.com. To simulate an impairment on ‘node01’, run the following command to stop Apache: sudo service httpd stop. Now, refresh the haproxy.example.com URL in your web browser. You should notice HAProxy is now redirecting all traffic to node02.example.com.
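
The same failure test can be scripted. A minimal sketch, run from the Vagrant project directory:

# stop Apache on node01, then confirm HAProxy serves all traffic from node02
vagrant ssh node01.example.com -c 'sudo service httpd stop'
curl -s http://haproxy.example.com/ | grep -o 'node02\.example\.com'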

Troubleshooting

While troubleshooting HAProxy configuration issues for this demonstration, I discovered logging is not configured by default on CentOS. No worries, I recommend HAProxy: Give me some logs on CentOS 6.5!, by Stephane Combaudon, to get logging running. Once logging is active, you can more easily troubleshoot HAProxy and Apache configuration issues. Here are some example commands you might find useful:

# haproxy
sudo more -f /var/log/haproxy.log
sudo haproxy -f /etc/haproxy/haproxy.cfg -c # check/validate config file

# apache
sudo ls -1 /etc/httpd/logs/
sudo tail -50 /etc/httpd/logs/error_log
sudo less /etc/httpd/logs/access_log

Redundant Proxies

In this simple example, the system’s weakest point is obviously the single HAProxy instance. It represents a single point of failure (SPOF) in our environment. In an actual production environment, you would likely have more than one instance of HAProxy. They might both be in a load-balanced pool, or one active and one on standby as a failover, should an instance become impaired. There are several techniques for building in proxy redundancy, often with the use of a Virtual IP and Keepalived. Below is a list of articles that might help you take this post’s example to the next level.

Cloud-based Continuous Integration and Deployment for .NET Development

Create a cloud-based, continuous integration and deployment toolchain for distributed .NET development teams, using GitHub, AppVeyor, and Microsoft Azure.

Introduction

Whether you are part of a large enterprise development environment, or a member of a small start-up, you are likely working with remote team members. You may be remote, yourself. Developers, testers, web designers, and other team members, commonly work remotely on software projects. Distributed teams, comprised of full-time staff, contractors, and third-party vendors, often work in different buildings, different cities, and even different countries.

If software is no longer strictly developed in-house, why should our software development and integration tools be located in-house? We live in a quickly evolving world of SaaS, PaaS, and IaaS. Popular SaaS development tools include Visual Studio Online, GitHub, BitBucket, Travis-CI, AppVeyor, CloudBees, JIRA, AWS, Microsoft Azure, Nodejitsu, and Heroku, to name just a few. With all these ‘cord-cutting’ tools, there is no longer a need for distributed development teams to be tethered to on-premises tooling via VPN tunnels and Remote Desktop Connections.

There are many combinations of hosted software development and integration tools available, depending on your technology stack, team size, and budget. In this post, we will explore one such toolchain for .NET development. Using Git, GitHub, AppVeyor, and Microsoft Azure, we will continuously build, test, and deploy a multi-tier .NET solution, without ever leaving Visual Studio. This particular toolchain has strong integration between tools, and will scale to fit most development teams.

Git and GitHub
Git and GitHub are widely used in development today. Visual Studio 2013 has fully-integrated Git support and Visual Studio 2012 has supported Git via a plug-in since early last year. Git is fully compatible with Windows. Additionally, there are several third party tools available to manage Git and GitHub repositories on Windows. These include Git Bash (my favorite), Git GUI, and GitHub for Windows.

GitHub acts as a replacement for your in-house Git server. Developers commit code to their individual local Git project repositories. They then push, pull, and merge code to and from a hosted GitHub repository. For security, GitHub requires a registered username and password to push code. Data transfer between the local Git repository and GitHub is done using HTTPS with SSL certificates or SSH with public-key encryption. GitHub also offers two-factor authentication (2FA). Additionally, for those companies concerned about privacy and added security, GitHub offers private repositories. These plans range in price from $25 to $200 per month, currently.
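
For reference, associating a local repository with GitHub over HTTPS and pushing code is a one-time setup followed by routine pushes. The account and repository names below are illustrative:

# add GitHub as the remote, then push the current branch
git remote add origin https://github.com/your-account/Restaurant.git
git push -u origin master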

GitHub View of Solution

AppVeyor
AppVeyor’s tagline is ‘Continuous Integration for busy developers’. AppVeyor automates the building, testing, and deployment of .NET applications. AppVeyor is similar to Jenkins and Hudson in terms of basic functionality, except that AppVeyor is only provided as a SaaS offering. There are several hosted solutions in the continuous integration and delivery space similar to AppVeyor, including CloudBees (hosted Jenkins) and Travis-CI. While CloudBees and Travis-CI work with several technology stacks, AppVeyor focuses specifically on .NET. Its closest competitor may be Microsoft’s new Visual Studio Online.

Like GitHub, AppVeyor also offers private repositories (spaces for building and testing code). Prices for private repositories currently range from $39 to $319 per month. Private repositories offer both added security and support. AppVeyor integrates nicely with several cloud-based code repositories, including GitHub, BitBucket, Visual Studio Online, and Fog Creek’s Kiln.

AppVeyor View of Latest Build of Solution

Azure
This post demonstrates continuous deployment from AppVeyor to a Windows Server 2012-based Azure VM. The VM has IIS 8.5, Web Deploy 3.5, IIS Web Management Service (WMSVC), and other components and configuration necessary to host the post’s sample Solution. AppVeyor would work just as well with Azure’s other hosting options, as well as with other cloud-based hosting providers, such as AWS or Rackspace, which also support the .NET stack.

New Microsoft Azure Portal View of VM

Sample Solution

The Visual Studio Solution used for this post was originally developed as part of an earlier post, Consuming Cross-Domain WCF REST Services with jQuery using JSONP. The original Solution, from 2011, demonstrated jQuery’s AJAX capabilities to communicate with a RESTful WCF service, cross-domains, using JSONP. I have since updated and modernized the Solution for this post. The revised Solution is on a new branch (‘rev2014’) on GitHub. Major changes to the Solution include an upgrade from VS2010 to VS2013, the use of Git DVCS, NuGet package management, Web Publish Profiles, Web Essentials for bundling JS and CSS, Twitter Bootstrap, unit testing, and a lot of code refactoring.

Revised Restaurant Menu Demo Viewed on Android Tablet

The updated VS Solution contains the following four Projects:

  1. Restaurant – C# Class Library
  2. RestaurantUnitTests – Unit Test Project
  3. RestaurantWcfService – C# WCF Service Application
  4. RestaurantDemoSite – Web Site (JS/HTML5)

VS 2013 View of Solution

The Visual Studio Solution Explorer tab, here, shows all projects contained in the Solution, and the primary files and directories they contain.

As explained in the earlier post, the ‘RestaurantDemoSite’ web site makes calls to the ‘RestaurantWcfService’ WCF service. The WCF service exposes two operations, one that returns the menu (‘GetCurrentMenu’), and the other that accepts an order (‘SendOrder’). For simplicity, orders are stored in the file system as JSON files. No database is required for the Solution. All business logic is contained in the ‘Restaurant’ class library, which is referenced by the WCF service. This architecture is illustrated in this Visual Studio Assembly Dependencies Diagram.

Installing and Configuring the Solution

The README.md file in the GitHub repository contains instructions for installing and configuring this Solution. In addition, a set of PowerShell scripts, part of the Solution’s repository, makes the installation and configuration process quick and easy. The scripts handle creating the necessary file directories and environment variables, setting file access permissions, and configuring IIS websites. Make sure to change the values of the environment variables before running the scripts. For reference, below are the contents of several of the supplied scripts. You should use the supplied scripts.

# Create environment variables
[Environment]::SetEnvironmentVariable("AZURE_VM_HOSTNAME", `
  "{YOUR HOSTNAME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_USERNAME", `
  "{YOUR USERNAME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_PASSWORD", `
  "{YOUR PASSWORD HERE}", "User")

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)

Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create main website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

Cloud-Based Continuous Integration and Delivery

Webhooks
The first point of integration in our hosted toolchain is between GitHub and AppVeyor. In order for AppVeyor to work with GitHub, we use a webhook. Webhooks are widely used to communicate events between systems over HTTP. According to GitHub, ‘every GitHub repository has the option to communicate with a web server whenever the repository is pushed to. These webhooks can be used to update an external issue tracker, trigger CI builds, update a backup mirror, or even deploy to your production server.’ Basically, we give GitHub permission to tell AppVeyor every time code is pushed to the GitHub repository. GitHub sends an HTTP POST to a specific URL, provided by AppVeyor. AppVeyor responds to the POST by cloning the GitHub repository, and building, testing, and deploying the Projects. Below is an example of a webhook for AppVeyor, in GitHub.

GitHub’s AppVeyor Webhook Configuration
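
Under the covers, the webhook is simply an HTTP POST from GitHub to a URL provided by AppVeyor. The following is only an illustrative sketch of that mechanism, not AppVeyor’s actual endpoint or GitHub’s complete payload:

# illustrative only: the shape of a GitHub push event delivery
curl -X POST "https://<webhook-url-provided-by-appveyor>" \
  -H "Content-Type: application/json" \
  -H "X-GitHub-Event: push" \
  -d '{"ref": "refs/heads/master", "repository": {"full_name": "your-account/Restaurant"}}'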

Unit Tests
To help illustrate the use of AppVeyor for automated unit testing, the updated Solution contains a Unit Test Project. Every time code is committed to GitHub, AppVeyor will clone and build the Solution, followed by running the set of unit tests shown below. The project’s unit tests test the Restaurant class library (‘restaurant.dll’). The unit tests provide 100% code coverage, as shown in the Visual Studio Code Coverage Results tab, below:

Code Coverage Results for Restaurant Class Library

AppVeyor runs the Solution’s automated unit tests using VSTest.Console.exe. VSTest.Console calls the unit test Project’s assembly (‘restaurantunittests.dll’).  As shown below, the VSTest command (in light blue) runs all tests, and then displays individual test results, a results summary, and the total test execution time.

AppVeyor Running Automated Unit Tests Using VSTest.Console

VSTest.Console has several command-line options, similar to MSBuild. They can be adjusted to output various levels of feedback on test results. For larger projects, you can selectively choose which pre-defined test sets to run. Test sets need to be set up in the Solution, in advance.
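
For example, a typical invocation against this Solution’s test assembly might look like the following. The assembly path and options are illustrative, not AppVeyor’s exact configuration:

# run all tests in the unit test assembly and write a TRX results file
vstest.console.exe RestaurantUnitTests\bin\Release\RestaurantUnitTests.dll /InIsolation /Logger:trx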

Configuring Azure VM
Before we publish the Solution from AppVeyor to Azure, we need to configure the VM. Again, we can use PowerShell to script most of the configuration. Most scripts are the same ones we used to configure our local environment. The README.md file in the GitHub repository contains instructions. The scripts handle creating the necessary file directories, setting file access permissions, configuring the IIS websites, creating the Web Deploy user account, and assigning it in IIS. For reference, below are the contents of several of the supplied scripts. You should use the supplied scripts.

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)

Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create main website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create new local non-admin User and Group for Web Deploy

# Main variables (Change these!)
[string]$userName = "USER_NAME_HERE" # mjones
[string]$fullName = "FULL USER NAME HERE" # Mike Jones
[string]$password = "USER_PASSWORD_HERE" # pa$$w0RD!
[string]$groupName = "GROUP_NAME_HERE" # Development

# Create new local user account
[ADSI]$server = "WinNT://$Env:COMPUTERNAME"
$newUser = $server.Create("User", $userName)
$newUser.SetPassword($password)

$newUser.Put("FullName", "$fullName")
$newUser.Put("Description", "$fullName User Account")

# Assign flags to user
[int]$ADS_UF_PASSWD_CANT_CHANGE = 64
[int]$ADS_UF_DONT_EXPIRE_PASSWD = 65536
[int]$COMBINED_FLAG_VALUE = 65600

$flags = $newUser.UserFlags.value -bor $COMBINED_FLAG_VALUE
$newUser.put("userFlags", $flags)
$newUser.SetInfo()

# Create new local group
$newGroup=$server.Create("Group", $groupName)
$newGroup.Put("Description","$groupName Group")
$newGroup.SetInfo()

# Assign user to group
[string]$serverPath = $server.Path
$group = [ADSI]"$serverPath/$groupName, group"
$group.Add("$serverPath/$userName, user")

# Assign local non-admin User in IIS for Web Deploy
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\MenuWcfRestService", $FALSE)
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\RestaurantDemoSite", $FALSE)

Publish Profiles
The second point of integration in our toolchain is between AppVeyor and the Azure VM. We will be using Microsoft’s Web Deploy to deploy our Solution from AppVeyor to Azure.  Web Deploy integrates with the IIS Web Management Service (WMSVC) for remote deployment by non-administrators. I have already configured Web Deploy and created a non-administrative user on the Azure VM. This user’s credentials will be used for deployments. These are the credentials in the username and password environment variables we created.

To continuously deploy to Azure, we will use Web Publish Profiles with Microsoft’s Web Deploy technology. Both the website and WCF service projects contain individual profiles for local development (‘LocalMachine’), as well as deployment to Azure (‘AzureVM’). The ‘AzureVM’ profiles contain all the configuration information AppVeyor needs to connect to the Azure VM and deploy the website and WCF service.

The easiest way to create a profile is by right-clicking on the project and selecting the ‘Publish…’ and ‘Publish Web Site’ menu items. Using the Publish Web wizard, you can quickly build and validate a profile.

Publish Web Profile Tab

Each profile in the above Profile drop-down, represents a ‘.pubxml’ file. The Publish Web wizard is merely a visual interface to many of the basic configurable options found in the Publish Profile’s ‘.pubxml’ file. The .pubxml profile files can be found in the Project Explorer. For the website, profiles are in the ‘App_Data’ directory (i.e. ‘Restaurant\RestaurantDemoSite\App_Data\PublishProfiles\AzureVM.pubxml’). For the WCF service, profiles are in the ‘Properties’ directory (i.e. ‘Restaurant\RestaurantWcfService\Properties\PublishProfiles\AzureVM.pubxml’).

As an example, below are the contents of the ‘LocalMachine’ profile for the WCF service (‘LocalMachine.pubxml’). This is about as simple as a profile gets. Note since we are deploying locally, the profile is configured to open the main page of the website in a browser, after deployment; a helpful time-saver during development.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>FileSystem</WebPublishMethod>
        <LastUsedBuildConfiguration>Debug</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish>http://localhost:9250/RestaurantService.svc/help</SiteUrlToLaunchAfterPublish>
        <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <publishUrl>C:\MenuWcfRestService</publishUrl>
        <DeleteExistingFiles>True</DeleteExistingFiles>
    </PropertyGroup>
</Project>

A key change we will make is to use environment variables in place of sensitive configuration values in the ‘AzureVM’ Publish Profiles. The Publish Web wizard does not allow this change, so we must edit the ‘AzureVM.pubxml’ file for both the website and the WCF service. We will replace the hostname of the server where we will deploy the projects with a variable (i.e., AZURE_VM_HOSTNAME = ‘MyAzurePublicServer.net’). We will also replace the username and password used to access the deployment destination. This way, someone accessing the Solution’s source code won’t be able to obtain any sensitive information, which would give them the ability to hack your site. Note the use of the ‘AZURE_VM_HOSTNAME’ and ‘AZURE_VM_USERNAME’ environment variables, shown below.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>MSDeploy</WebPublishMethod>
        <LastUsedBuildConfiguration>AppVeyor</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish />
        <LaunchSiteAfterPublish>False</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <MSDeployServiceURL>https://$(AZURE_VM_HOSTNAME):8172/msdeploy.axd</MSDeployServiceURL>
        <DeployIisAppPath>MenuWcfRestService</DeployIisAppPath>
        <RemoteSitePhysicalPath />
        <SkipExtraFilesOnServer>False</SkipExtraFilesOnServer>
        <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
        <EnableMSDeployBackup>True</EnableMSDeployBackup>
        <UserName>$(AZURE_VM_USERNAME)</UserName>
        <_SavePWD>False</_SavePWD>
        <_DestinationType>AzureVirtualMachine</_DestinationType>
    </PropertyGroup>
</Project>

The downside of adding environment variables to the ‘AzureVM’ profiles is that the Publish Web wizard within Visual Studio will no longer allow us to deploy using those profiles. As demonstrated below, after substituting variables for actual values, the ‘Server’ and ‘User name’ values will no longer display properly. We can confirm this by trying to validate the connection, which fails. This does not indicate your environment variable values are incorrect, only that Visual Studio can no longer correctly parse the ‘AzureVM.pubxml’ file and display it properly in the IDE. No big deal…

Publish Web Connection Tab – Failed Validation

We can use the command line or PowerShell to deploy with the ‘AzureVM’ profiles. AppVeyor accepts both command-line input and PowerShell for most tasks. All examples in this post and in the GitHub repository use PowerShell.

To build and deploy (publish) to Azure from the command line or PowerShell, we will use MSBuild. Below are the MSBuild commands used by AppVeyor to build our Solution, and then deploy our Solution to Azure. The first two MSBuild commands build the WCF service and the website. The second two deploy them to Azure. There are several ways you could construct these commands to successfully build and deploy this Solution. I found these commands to be the most succinct. I have split the build and deploy functions so that AppVeyor can run the automated unit tests in between. If the tests don’t pass, we don’t want to deploy the code.

# Build WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:Configuration=AppVeyor /verbosity:minimal /nologo

# Build website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:Configuration=Release /verbosity:minimal /nologo

Write-Host "*** Solution builds complete."
# Deploy WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=AppVeyor `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

# Deploy website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=Release `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

Write-Host "*** Solution deployments complete."

Below is the output from AppVeyor showing the WCF Service and website’s deployment to Azure. Deployment is the last step in the continuous delivery process. At this point, the Solution was already built and the automated unit tests completed, successfully.

AppVeyor Output from Deployments to Azure.

Below is the final view of the sample Solution’s WCF service and web site deployed to IIS 8.5 on the Azure VM.

Final View of IIS Sites Running on Azure VM



Build a Continuous Deployment System with Maven, Hudson, WebLogic Server, and JUnit

Build an automated testing, continuous integration, and continuous deployment system, using Maven, Hudson, WebLogic Server, JUnit, and NetBeans. Developed with Oracle’s Pre-Built Enterprise Java Development VM. Download the complete source code from Dropbox and on GitHub.

System Diagram

Introduction

In this post, we will build a basic automated testing, continuous integration, and continuous deployment system, using Oracle’s Pre-Built Enterprise Java Development VM. The primary goal of the system is to automatically compile, test, and deploy a simple Java EE web application to a test environment. As this post will demonstrate, the key to a successful system is not a single application, but the effective integration of all the system’s applications into a well-coordinated and consistent workflow.

Building a system such as this can be complex and time-consuming. However, Oracle’s Pre-Built Enterprise Java Development VM already has all the components we need. The Oracle VM includes NetBeans IDE for development, Apache Subversion for version control, Hudson Continuous Integration (CI) Server for build automation, JUnit and Hudson for unit test automation, and WebLogic Server for application hosting.

In addition, we will use Apache Maven, also included on the Oracle VM, to help manage our project’s dependencies, as well as the build and deployment process. Overlapping with some of Apache Ant’s build task functionality, Maven is a powerful cross-cutting tool for managing modern software development projects. This post will only draw upon a small part of Maven’s functionality.

Demonstration Requirements

To save some time, we will use the same WebLogic Server (WLS) domain we built in the last post, Deploying Applications to WebLogic Server on Oracle’s Pre-Built Development VM. We will also use code from the sample Hello World Java EE web project from that post. If you haven’t already done so, work through the last post’s example first.

Here is a quick list of requirements for this demonstration:

  • Oracle VM
    • Oracle’s Pre-Built Enterprise Java Development VM running on a current version of Oracle VM VirtualBox (mine: 4.2.12)
    • Oracle VM has the latest system updates installed (see earlier post for directions)
    • WLS domain from last post created and running in Oracle VM
    • Credentials supplied with Oracle VM for Hudson (username and password)
  • Windows Development Machine
    • Current version of Apache Maven installed and configured (mine: 3.0.5)
    • Current version of NetBeans IDE installed and configured (mine: 7.3)
    • Optional: Current version of WebLogic Server installed and configured
    • All environment variables properly configured for Maven, Java, WLS, etc. (MW_HOME, M2, etc.)

The Process

The steps involved in this post’s demonstration are as follows:

  1. Install the WebLogic Maven Plugin into the Oracle VM’s Maven Repository, as well as the Development machine’s
  2. Create a new Maven Web Application Project in NetBeans
  3. Copy the classes from the Hello World project in the last post to the new project
  4. Create a properties file to store Maven configuration values for the project
  5. Add the Maven Properties Plugin to the Project’s POM file
  6. Add the WebLogic Maven Plugin to the project’s POM file
  7. Add JUnit tests and JUnit dependencies to the project
  8. Add a WebLogic Descriptor to the project
  9. Enable Tunneling on the new WLS domain from the last post
  10. Build, test, and deploy the project locally in NetBeans
  11. Add the project to Subversion
  12. Optional: Upgrade the existing Hudson 2.2.0 and plugins on the Oracle VM to the latest 3.x version
  13. Create and configure a new Hudson CI job for the project
  14. Build the Hudson job to compile, test, and deploy the project to WLS

WebLogic Maven Plugin

First, we need to install the WebLogic Maven Plugin (‘weblogic-maven-plugin’) onto both the Development machine’s local Maven Repository and the Oracle VM’s Maven Repository. Installing the plugin will allow us to deploy our sample application from NetBeans and Hudson, using Maven. The weblogic-maven-plugin, a JAR file, is not part of the Maven repository by default. According to Oracle, ‘WebLogic Server provides support for Maven through the provisioning of plug-ins that enable you to perform various operations on WebLogic Server from within a Maven environment. As of this release, there are two separate plug-ins available.’ In this post, we will use the weblogic-maven-plugin, as opposed to the wls-maven-plugin. Again, according to Oracle, the weblogic-maven-plugin “delivered in WebLogic Server 11g Release 1, provides support for deployment operations.”

The best way to understand the plugin install process is by reading the Using the WebLogic Development Maven Plug-In section of the Oracle Fusion Middleware documentation on Developing Applications for Oracle WebLogic Server. It goes into detail on how to install and configure the plugin.

In a nutshell, below is a list of the commands I executed to install the weblogic-maven-plugin version 12.1.1.0 on both my Windows development machine and on my Oracle VM. If you do not have WebLogic Server installed on your development machine, and therefore no access to the plugin, install it into the Maven Repository on the Oracle VM first, then copy the jar file to the development machine and follow the normal install process from that point forward.
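For reference, the screenshots below boil down to a sequence roughly like the following, per the Oracle documentation. The exact paths depend on your WL_HOME location, and the JAR and POM names shown are the 12.1.1.0 defaults, so treat this as a sketch rather than a copy-and-paste recipe.

# Run from WL_HOME/server/lib (adjust for your WebLogic Server install location)
java -jar wljarbuilder.jar -profile weblogic-maven-plugin

# Extract the plugin's POM from the newly built JAR
jar xvf weblogic-maven-plugin.jar META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml

# Install the plugin into the local Maven Repository
mvn install:install-file -Dfile=weblogic-maven-plugin.jar -DpomFile=META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml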

On Windows Development Machine:

Installing weblogic-maven-plugin onto Dev Maven Repository

Installing weblogic-maven-plugin on a Windows Machine

On the Oracle VM:

Installing WebLogic Maven Plugin into Oracle VM Maven Repository

Installing WebLogic Maven Plugin into the Oracle VM

To test the success of your plugin installation, you can run the following Maven command on Windows or Linux:

mvn help:describe -Dplugin=com.oracle.weblogic:weblogic-maven-plugin

Sample Maven Web Application

Using NetBeans on your development machine, create a new Maven Web Application. For those of you familiar with Maven, the NetBeans Maven Web Application project is based on the ‘webapp-javaee6:1.5’ Archetype. NetBeans creates the project by executing an ‘archetype:generate’ Maven Goal, which you can see in the ‘Output’ tab after the project is created.
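Behind the scenes, the command NetBeans runs is roughly equivalent to the one below; the groupId, artifactId, and package values are placeholders for whatever you enter in the wizard.

mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DarchetypeVersion=1.5 -DgroupId=com.example -DartifactId=HelloWorldMaven -Dversion=1.0-SNAPSHOT -Dpackage=com.example.helloworld -DinteractiveMode=false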

01a - Choose the Maven Web Application Project Type

1a – Choose the Maven Web Application Project Type

01b - Name and Location of New Project

1b – Name and Location of New Project

By default, you may have Tomcat and GlassFish as installed server options on your system. Unfortunately, as I understand it, NetBeans currently cannot configure a remote connection to the WLS instance running on the Oracle VM. That is fine; you do not need an instance of WLS installed on your development machine, since we are going to use the copy on the Oracle VM. We will use Maven to deploy the project to WLS on the Oracle VM, later in the post.

01c - Default Server and Java Settings

1c – Default Server and Java Settings

1d – New Maven Project in NetBeans

Next, copy the two Java class files from the previous blog post’s Hello World project to the new project’s source package. Alternately, download a zipped copy of this post’s complete sample code from Dropbox or GitHub.

02a - Copy Two Class Files from Previous Project

2a – Copy Two Class Files from Previous Project

Because we are copying a RESTful web service to our new project, NetBeans will prompt us for some REST resource configuration options. To keep this new example simple, choose the first option and uncheck the Jersey option.

02b - REST Resource Configuration

2b – REST Resource Configuration

02c - New Project with Files Copied from Previous Project

2c – New Project with Files Copied from Previous Project

JUnit Tests

Next, create a set of JUnit tests for each class by right-clicking on both classes and selecting ‘Tools’ -> ‘Create Tests’.

03a - Create JUnit Tests for Both Class Files

3a – Create JUnit Tests for Both Class Files

03b - Choose JUnit Version 4.x

3b – Choose JUnit Version 4.x

03c - New Project with Test Classes and JUnit Test Dependencies

3c – New Project with Test Classes and JUnit Test Dependencies

We will use the test classes and dependencies NetBeans just added to the project. However, we will not use the actual JUnit tests themselves that NetBeans created. Properly setting up the default JUnit tests to work with an embedded version of WLS is well beyond the scope of this post.

Overwrite the contents of the test class files with the code provided on Dropbox. I have replaced the default JUnit tests with simpler versions for this demonstration. Build the project to make sure all the JUnit tests pass.
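The replacement tests are nothing more exotic than plain JUnit 4 assertions against the copied service classes. A minimal sketch of that style of test follows; the ‘HelloWorld’ class and ‘getHelloWorld()’ method names are placeholders, not necessarily the names used in the sample code.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class HelloWorldTest {

    // 'HelloWorld' and 'getHelloWorld()' are placeholder names for the copied service class
    @Test
    public void defaultGreetingIsReturned() {
        HelloWorld instance = new HelloWorld();
        assertEquals("Hello WebLogic Server!", instance.getHelloWorld());
    }
}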

03d - Project Successfully Built with New JUnit Tests

3d – Project Successfully Built with New JUnit Tests

Project Properties

Next, add a new Properties file to the project, entitled ‘maven.properties’.

04a - Add Properties File to Project

4a – Add Properties File to Project

04b - Add Properties File to Project

4b – Add Properties File to Project

Add the following key/value pairs to the properties file. These key/value pairs will be referenced in the POM.xml by the weblogic-maven-plugin, added in the next step. Placing the configuration values into a Properties file is not strictly necessary for this post. However, if you wish to deploy to multiple environments, moving environment-specific configurations into separate properties files, using Maven Build Profiles, and/or using frameworks such as Spring, are all best practices.

Java Properties File (maven.properties):
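The exact contents are in the sample code on GitHub; the sketch below only illustrates the kind of key/value pairs the file holds. The key names and values here are illustrative and must match whatever your POM references.

# Connection settings for the WLS domain on the Oracle VM (illustrative values)
weblogic.adminurl=http://[your_vm_ip_address]:7031
weblogic.user=weblogic
weblogic.password=your_wls_password
# Name under which the WAR is deployed
weblogic.appName=HelloWorldMaven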

Maven Plugins and the POM File

Next, add the WLS Maven Plugin (‘weblogic-maven-plugin’) and the Maven Properties Plugin (‘properties-maven-plugin’) to the end of the project’s Maven POM.xml file. The Maven Properties Plugin, part of the Mojo Project, allows us to substitute configuration values in the Maven POM file from a properties file. According to codehaus.org, which hosts the Mojo Project, ‘It’s main use-case is loading properties from files instead of declaring them in pom.xml, something that comes in handy when dealing with different environments.’

Project Object Model File (pom.xml):
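The complete POM is in the sample code; the fragment below is a sketch of how the two plugins might be wired together, assuming the property keys from the maven.properties sketch above. Plugin versions and parameter values are illustrative, not necessarily those used in the sample project.

<build>
    <plugins>
        <!-- Load key/value pairs from maven.properties early in the build -->
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>properties-maven-plugin</artifactId>
            <version>1.0-alpha-2</version>
            <executions>
                <execution>
                    <phase>initialize</phase>
                    <goals>
                        <goal>read-project-properties</goal>
                    </goals>
                    <configuration>
                        <files>
                            <file>maven.properties</file>
                        </files>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <!-- Deploy the WAR to the remote WLS domain when 'install' runs -->
        <plugin>
            <groupId>com.oracle.weblogic</groupId>
            <artifactId>weblogic-maven-plugin</artifactId>
            <version>12.1.1.0</version>
            <configuration>
                <adminurl>${weblogic.adminurl}</adminurl>
                <user>${weblogic.user}</user>
                <password>${weblogic.password}</password>
                <upload>true</upload>
                <remote>true</remote>
                <verbose>true</verbose>
                <source>${project.build.directory}/${project.build.finalName}.war</source>
                <name>${weblogic.appName}</name>
            </configuration>
            <executions>
                <execution>
                    <phase>install</phase>
                    <goals>
                        <goal>deploy</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>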

WebLogic Deployment Descriptor

A WebLogic Deployment Descriptor file is the last item we need to add to the new Maven Web Application project. NetBeans has descriptors for multiple servers, including Tomcat (context.xml), GlassFish (application.xml), and WebLogic (weblogic.xml). They provide a convenient location to store server-specific properties used during the deployment of the project.

06a - Add New WebLogic Descriptor

6a – Add New WebLogic Descriptor

06b - Add New WebLogic Descriptor

6b – Add New WebLogic Descriptor

Add the ‘context-root’ tag. The value will be the name of our project, ‘HelloWorldMaven’, as shown below. According to Oracle, “the context-root element defines the context root of this standalone Web application.” The context-root of the application will form part of the URL we enter to display our application, later.
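The resulting descriptor is tiny. Assuming the default namespace NetBeans generates for a WebLogic descriptor, it looks roughly like this:

<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <!-- Forms the first part of the application's URL -->
    <context-root>HelloWorldMaven</context-root>
</weblogic-web-app>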

06c - Add Context Root Element to Descriptor

6c – Add Context Root Element to Descriptor

Make sure the WebLogic descriptor file (‘weblogic.xml’) is placed in the WEB-INF folder. If it is not, the descriptor’s properties will not be read, and the context-root of the deployed application will default to the name of the project’s WAR file. Instead of ‘HelloWorldMaven’ as the context-root, you would see ‘HelloWorldMaven-1.0-SNAPSHOT’.

06d - Move WebLogic Descriptor into WEB-INF Folder

6d – Move WebLogic Descriptor into WEB-INF Folder

Enable Tunneling

Before we compile, test, and deploy our project, we need to make a small change to WLS. In order to deploy our project remotely to the Oracle VM’s WLS, using the WebLogic Maven Plugin, we must enable tunneling on our WLS domain. According to Oracle, the ‘Enable Tunneling’ option “Specifies whether tunneling for the T3, T3S, HTTP, HTTPS, IIOP, and IIOPS protocols should be enabled for this server.” To enable tunneling, from the WLS Administration Console, select the ‘AdminServer’ Server, ‘Protocols’ tab, ‘General’ sub-tab.

Enabling Tunneling on WLS for HTTP Deployments

Build and Test the Project

Right-click the project and select ‘Build’, ‘Clean and Build’, or ‘Build with Dependencies’. NetBeans executes a ‘mvn install’ command. This command initiates a series of Maven Goals, visible in NetBeans’ Output window, including ‘dependency:copy’, ‘properties:read-project-properties’, ‘compiler:compile’, ‘surefire:test’, and so forth. They move the project’s code through the Maven Build Lifecycle. Most goals are self-explanatory from their titles.

The last Maven Goal to execute, if the other goals have succeeded, is the ‘weblogic:deploy’ goal. This goal deploys the project to the Oracle VM’s WLS domain we configured in our project. Recall that in the POM file we bound the weblogic-maven-plugin’s ‘deploy’ goal to the ‘install’ phase (what Maven calls the execution phase). If all goals complete without error, you have just compiled, tested, and deployed your first Maven web application to a remote WLS domain. Later, we will have Hudson do it for us, automatically.

06e - Successful Build of Project

6e – Successful Build of Project

Executing Maven Goals in NetBeans

A small aside: if you wish to run alternate Maven goals in NetBeans, right-click on the project and select ‘Custom’ -> ‘Goals…’. Alternately, click on the lighter green arrows (‘Re-run with different parameters’) adjacent to the ‘Output’ tab.

For example, in the ‘Run Maven’ pop-up, replace ‘install’ with ‘surefire:test’ or simply ‘test’. This will compile the project and run the JUnit tests. There are many Maven goals that can be run this way. Use the Control key and Space Bar key combination in the Maven Goals text box to display a pop-up list of available goals.
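The same goals can also be run from a terminal in the project’s root directory, if you prefer working outside the IDE:

# Compile the project and run the JUnit tests, without packaging or deploying
mvn test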

07a - Executing the Maven test Goal

7a – Executing Other Maven Goals

07a - JUnit Test Results using Maven test Goal

7a – JUnit Test Results using Maven ‘test’ Goal

Subversion

Now that our project is complete and tested, we will commit it to Subversion (SVN), installed on the Oracle VM, for safe-keeping. Having our source code in SVN also allows Hudson to retrieve a copy. Hudson will then compile, test, and deploy the project to WLS.

The Repository URL, User, and Password are all supplied in the Oracle VM information, along with the other URLs and credentials.

08a - Add Project to Subversion Repository

8a – Add Project to Subversion Repository

08b - Subversion Repository Folder for Project

8b – Subversion Repository Folder for Project

When you import your project for the first time, you will see more files than are displayed below. I had already imported part of the project earlier while creating this post, so most of my files were already managed by Subversion.

08c - List of Files Imported into Subversion

8c – List of Files Imported into Subversion (you will see more)

8d – Project Successfully Imported into Subversion

Upgrading Hudson CI Server

The Oracle VM comes with Hudson pre-installed in its own WLS domain, ‘hudson-ci_dev’, running on port 5001. Start the domain from within the VM by double-clicking the ‘WLS 12c – Hudson CI 5001’ icon on the desktop, or by executing the domain’s WLS start-up script from a terminal window:

/labs/wls1211/user_projects/domains/hudson-ci_dev/startWebLogic.sh

Once started, the WLS Administration Console 12c is accessible at the following URL. Use your VM’s IP address, or ‘localhost’ if you are working within the VM.

http://[your_vm_ip_address]:5001/console/login/LoginForm.jsp

The Oracle VM comes loaded with Hudson version 2.2.0. I strongly suggest updating Hudson to the latest version (3.0.1 at the time of this post). To upgrade, download, deploy, and start a new 3.0.1 version in the same domain, on the same ‘AdminServer’ Server. I was able to do this remotely, from my development machine, using the browser-based Hudson Dashboard and WLS Administration Console. There is no need to do any of the installation from within the VM itself.

When the upgrade is complete, stop the 2.2.0 deployment currently running in the WLS domain.

Hudson 3.0.1 Deployed to WLS Domain on VM

The new version of Hudson is accessible from the following URL (adjust the URL to match your exact version of Hudson):

http://[your_vm_ip_address]:5001/hudson-3.0.1/

It’s also important to update all the Hudson plugins. Hudson makes this easy with the Hudson Plugin Manager, accessible via the ‘Manage Hudson’ option.

View of Hudson 3.0.1 Running on WLS with All Plugins Updated

Note that at the top of the Manage Hudson page there is a warning about the server’s container not using UTF-8 to decode URLs. You can follow this post if you want to resolve the issue by configuring Hudson differently; I did not worry about it for this post.

Building a Hudson Job

We are ready to configure Hudson to build, test, and deploy our Maven Web Application project. Return to the ‘Hudson Dashboard’, select ‘New Job’, and then ‘Build a new free-style software job’. This will open the ‘Job Configurations’ for the new job.

01 - Creating New Hudson Free-Style Software Job

1 – Creating New Hudson Free-Style Software Job

02 -Default Job Configurations

2 -Default Job Configurations

Start by configuring the ‘Source Code Management’ section. The Subversion Repository URL is the same as the one you used in NetBeans to commit the code. To avoid the access error seen below, you must provide the Subversion credentials to Hudson, just as you did in NetBeans.

03 -Subversion SCM Configuration

3 -Subversion SCM Configuration

04 - Subversion SCM Authentication

4 – Subversion SCM Authentication

05 -Subversion SCM Configuration Authenticated

5 -Subversion SCM Configuration Authenticated

Next, configure the Maven 3 Goals. I chose the ‘clean’ and ‘install’ goals. Using ‘clean’ ensures the project is compiled fresh each time by deleting the output of the build directory.

Optionally, you can configure Hudson to publish the JUnit test results as shown below. Be sure to save your configuration.

06 -Maven 3 and JUnit Configurations

6 -Maven 3 and JUnit Configurations

Start a build of the new Hudson Job by clicking ‘Build Now’. If your Hudson job’s configurations are correct, and the new WLS domain is running, you should have a clean build. This means the project compiled without error, all tests passed, and the web application’s WAR file was deployed successfully to the new WLS domain within the Oracle VM.

07 -Job Built Successfully Using Configurations

7 -Job Built Successfully Using Configurations

08 - Test Results from Build

8 – Test Results from Build

WebLogic Server

To view the newly deployed Maven Web Application, log into the WebLogic Server Administration Console for the new domain. In my case, the new domain was running on port 7031, so the URL would be:

http://[your_vm_ip_address]:7031/console/login/LoginForm.jsp

You should see the deployment, in an ‘Active’ state, as shown below.

09a - HelloWorldMaven Deployed to WLS from Hudson Build

9a – Project Deployed to WLS from Hudson Build

09b - Project Context Root Set by WebLogic Descriptor File

9b – Project’s Context Root Set by WebLogic Descriptor File

09c - Projects Servlet Path Set by web.xml File

9c – Project’s Servlet Paths

To test the deployment, open a new browser tab and go to the URL of the Servlet. In this case the URL would be:

http://[your_vm_ip_address]:7031/HelloWorldMaven/resources/helloWorld

You should see the original phrase from the previous project displayed, ‘Hello WebLogic Server!’.

10 - HelloWorldMaven Web Application Running in Browser

10 – Project’s Web Application Running in Browser

To further test the system, make a simple change to the project in NetBeans. I changed the name variable’s default value from ‘WebLogic Server’ to ‘Hudson, Maven, and WLS’. Commit the change to SVN.

11 – Make a Code Change to Project and Commit to Subversion

Return to Hudson and run a new build of the job.

12 – Rebuild Project with Changes in Hudson

After the build completes, refresh the sample Web Application’s browser window. You should see the new text string displayed. Your code change was just re-compiled, re-tested, and re-deployed by Hudson.

13 - HelloWorldMaven Deployed to WLS Showing Code Change

13 – Project Showing Code Change

True Continuous Deployment

Although Hudson is now doing a lot of the work for us, the system still is not fully automated. We are still manually building our Hudson Job in order to deploy our application. If you want true continuous integration and deployment, you need to trust the system to automatically deploy the project, based on certain criteria.

SCM polling with Hudson is one way to demonstrate continuous deployment. In ‘Job Configurations’, turn on ‘Poll SCM’ and enter Unix cron-like value(s) in the ‘Schedule’ text box. In the example below, I have indicated a polling frequency of every hour (‘@hourly’). Every hour, Hudson will look for committed changes to the project in Subversion. If changes are found, Hudson retrieves the source code, compiles it, and runs the tests. If the project compiles and passes all tests, it is deployed to WLS.
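For example, to poll Subversion every fifteen minutes instead of hourly, the ‘Schedule’ text box would contain a standard cron expression such as:

*/15 * * * *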

SCM Polling Interval

There are less resource-intensive ways to react to changes than SCM polling. Push notifications from the repository are an alternative, preferable method.

Additionally, you should configure messaging in Hudson to notify team members of new deployments and the changes they contain. You should also implement a good deployment versioning strategy, for tracking purposes. Knowing the version of deployed artifacts is critical for accurate change management and defect tracking.

Helpful Links

Maven Plug-In Goals

Maven Build Lifecycle

Configuring and Using the WebLogic Maven Plug-In for Deployment

Jenkins: Building a Software Project

Kohsuke Kawaguchi: Polling must die: triggering Jenkins builds from a git hook
