Posts Tagged Azure
Azure Kubernetes Service (AKS) Observability with Istio Service Mesh
Posted by Gary A. Stafford in Azure, Bash Scripting, Cloud, DevOps, Go, JavaScript, Kubernetes, Software Development on March 31, 2019
In the last two-part post, Kubernetes-based Microservice Observability with Istio Service Mesh, we deployed Istio, along with its observability tools, Prometheus, Grafana, Jaeger, and Kiali, to Google Kubernetes Engine (GKE). Following that post, I received several questions about using Istio’s observability tools with other popular managed Kubernetes platforms, primarily Azure Kubernetes Service (AKS). In most cases, including with AKS, both Istio and the observability tools are compatible.
In this short follow-up of the last post, we will replace the GKE-specific cluster setup commands, found in part one of the last post, with new commands to provision a similar AKS cluster on Azure. The new AKS cluster will run Istio 1.1.3, released 4/15/2019, alongside the latest available version of AKS (Kubernetes), 1.12.6. We will replace Google’s Stackdriver logging with Azure Monitor logs. We will retain the external MongoDB Atlas cluster and the external CloudAMQP cluster dependencies.
Previous articles about AKS include First Impressions of AKS, Azure’s New Managed Kubernetes Container Service (November 2017) and Architecting Cloud-Optimized Apps with AKS (Azure’s Managed Kubernetes), Azure Service Bus, and Cosmos DB (December 2017).
Source Code
All source code for this post is available on GitHub in two projects. The Go-based microservices source code, all Kubernetes resources, and all deployment scripts are located in the k8s-istio-observe-backend project repository.
git clone \
  --branch master --single-branch \
  --depth 1 --no-tags \
  https://github.com/garystafford/k8s-istio-observe-backend.git
The Angular UI TypeScript-based source code is located in the k8s-istio-observe-frontend repository. You will not need to clone the Angular UI project for this post’s demonstration.
Setup
This post assumes you have a Microsoft Azure account with the necessary resource providers registered, and the Azure Command-Line Interface (CLI), az
, installed and available to your command shell. You will also need Helm and Istio 1.1.3 installed and configured, which is covered in the last post.
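As a quick sanity check before proceeding, assuming the tools are already on your PATH, you can confirm the version of each prerequisite from your command shell. A minimal sketch; output will vary with your installed versions.

# confirm prerequisite tooling is installed and accessible
az --version | head -n 1
kubectl version --short --client
helm version --short
istioctl version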
Start by logging into Azure from your command shell.
az login \
  --username {{ your_username_here }} \
  --password {{ your_password_here }}
Resource Providers
If you are new to Azure or AKS, you may need to register some additional resource providers to complete this demonstration.
az provider list --output table
If you are missing required resource providers, you will see errors similar to the one shown below. Simply activate the particular provider corresponding to the error.
Operation failed with status:'Bad Request'. Details: Required resource provider registrations Microsoft.Compute, Microsoft.Network are missing.
To register the necessary providers, use the Azure CLI or the Azure Portal UI.
az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Network
az provider register --namespace Microsoft.Compute
Resource Group
AKS requires an Azure Resource Group. According to Azure, a resource group is a container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. I chose to create a new resource group associated with my closest geographic Azure Region, East US, using the Azure CLI.
az group create \
  --resource-group aks-observability-demo \
  --location eastus
Create the AKS Cluster
Before creating the AKS cluster, check for the latest version of AKS. At the time of this post, the latest version of AKS was 1.12.6.
az aks get-versions \
  --location eastus \
  --output table
Using the latest AKS version, create the AKS managed cluster. There are many configuration options available with the az aks create
command. For this post, I am creating three worker nodes using the Azure Standard_DS3_v2 VM type, which will give us a total of 12 vCPUs and 42 GB of memory. Anything smaller and all the Pods may not be schedulable. Instead of supplying an existing SSH key, I will let Azure create a new one. You should have no need to SSH into the worker nodes. I am also enabling the monitoring add-on. According to Azure, the add-on sets up Azure Monitor for containers, announced in December 2018, which monitors the performance of workloads deployed to Kubernetes environments hosted on AKS.
time az aks create \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --node-count 3 \
  --node-vm-size Standard_DS3_v2 \
  --enable-addons monitoring \
  --generate-ssh-keys \
  --kubernetes-version 1.12.6
Using the time
command, we observe that the cluster took approximately 5m48s to provision; I have seen times of up to almost 10 minutes. AKS provisioning is not as fast as GKE, which in my experience is at least 2x-3x faster for a similarly sized cluster.
After the cluster creation completes, retrieve your AKS cluster credentials.
az aks get-credentials \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --overwrite-existing
Examine the Cluster
Use the following command to confirm the cluster is ready by examining the status of three worker nodes.
kubectl get nodes --output=wide
Observe that Azure currently uses Ubuntu 16.04.5 LTS as the worker nodes' host operating system. If you recall, GKE offers both Ubuntu and a Container-Optimized OS from Google.
Kubernetes Dashboard
Unlike GKE, AKS has no custom dashboard. Instead, we will use the Kubernetes Web UI (dashboard), which, unlike with GKE, is installed by default with AKS. According to Azure, to make full use of the dashboard, since the AKS cluster uses RBAC, a ClusterRoleBinding must be created before you can correctly access the dashboard.
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard
Next, we must create a proxy tunnel on local port 8001
to the dashboard running on the AKS cluster. This CLI command creates a proxy between your local system and the Kubernetes API and opens your web browser to the Kubernetes dashboard.
az aks browse \
  --name aks-observability-demo \
  --resource-group aks-observability-demo
Although you should use the Azure CLI, PowerShell, or SDK for all your AKS configuration tasks, using the dashboard for monitoring the cluster and the resources running on it is invaluable.
The Kubernetes dashboard also provides access to raw container logs. Azure Monitor provides the ability to construct complex log queries, but for quick troubleshooting, you may just want to see the raw logs a specific container is outputting, from the dashboard.
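If you prefer the command line, kubectl can retrieve the same raw logs. A minimal sketch, assuming the Service A Pods are labeled app=service-a and the application container is named service-a (both names are assumptions, not confirmed by the post).

# tail the raw logs of the first Service A Pod in the dev Namespace;
# the label and container name 'service-a' are assumptions
kubectl logs --follow --tail=50 -n dev \
  $(kubectl get pod -n dev -l app=service-a \
  -o jsonpath='{.items[0].metadata.name}') \
  --container service-a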
Azure Portal
Logging into the Azure Portal, we can observe the AKS cluster, within the new Resource Group.
In addition to the Azure Resource Group we created, there will be a second Resource Group created automatically during the creation of the AKS cluster. This group contains all the resources that compose the AKS cluster. These resources include the three worker node VM instances, and their corresponding storage disks and NICs. The group also includes a network security group, route table, virtual network, and an availability set.
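To see exactly which resources were created, you can query the automatically-generated node Resource Group with the Azure CLI; a minimal sketch, using the cluster name and Resource Group from earlier.

# discover the name of the auto-generated node resource group
NODE_RG=$(az aks show \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --query nodeResourceGroup --output tsv)

# list the VMs, disks, NICs, and networking resources it contains
az resource list --resource-group ${NODE_RG} --output table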
Deploy Istio
From this point on, the process to deploy the Istio Service Mesh and the Go-based microservices platform follows the previous post and uses the exact same scripts. After modifying the Kubernetes resource files, deploy Istio using the bash script, part4_install_istio.sh. I have added a few more pauses in the script to account for the apparently slower response times from AKS as opposed to GKE. It definitely takes longer to spin up the Istio resources on AKS than on GKE, which can result in errors if you do not pause between each stage of the deployment process.
Using the Kubernetes dashboard, we can view the Istio resources running in the istio-system
Namespace, as shown below. Confirm that all resource Pods are running and healthy before deploying the Go-based microservices platform.
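The same confirmation can be done from the command line; for example:

# confirm all Istio Pods are Running or Completed
kubectl get pods -n istio-system

# optionally, confirm the Istio Deployments report full availability
kubectl get deployments -n istio-system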
Deploy the Platform
Deploy the Go-based microservices platform, using the bash deploy script, part5a_deploy_resources.sh.
The script deploys two replicas (Pods) of each of the eight microservices, Service-A through Service-H, and the Angular UI, to the dev
and test
Namespaces, for a total of 36 Pods. Each Pod will have the Istio sidecar proxy (Envoy Proxy) injected into it, alongside the microservice or UI.
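To verify the deployment and the sidecar injection from the command line, you can count the Pods and inspect their containers; a minimal sketch.

# should return 18 Pods in each Namespace (2 x 8 services + 2 x UI)
kubectl get pods -n dev
kubectl get pods -n test

# each Pod should list two containers: the service and the istio-proxy sidecar
kubectl get pods -n dev \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'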
Azure Load Balancer
If we return to the Resource Group created automatically when the AKS cluster was created, we will now see two additional resources. There is now an Azure Load Balancer and Public IP Address.
Similar to the GKE cluster in the last post, when the Istio Ingress Gateway is deployed as part of the platform, it is materialized as an Azure Load Balancer. The front-end of the load balancer is the new public IP address. The back-end of the load-balancer is a pool containing the three AKS worker node VMs. The load balancer is associated with a set of rules and health probes.
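The public IP address assigned to the load balancer's front-end is easily retrieved with kubectl, assuming the default Istio Ingress Gateway service name.

# retrieve the external IP of the Istio Ingress Gateway (the Azure Load Balancer front-end)
kubectl get service istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'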
DNS
I have associated the new Azure public IP address, connected with the front-end of the load balancer, with the four subdomains I am using to represent the UI and the edge service, Service-A, in both Namespaces. If Azure is your primary Cloud provider, then Azure DNS is a good choice to manage your domain’s DNS records. For this demo, you will require your own domain.
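If you choose Azure DNS, the A records can be created with the Azure CLI. A rough sketch, assuming a hypothetical zone, example-api.com, already delegated to Azure DNS, hypothetical subdomain names, and the Ingress Gateway's public IP retrieved above.

INGRESS_IP="<public_ip_goes_here>"

# create A records for the UI and edge service subdomains in both Namespaces
for SUBDOMAIN in ui.dev api.dev ui.test api.test; do
  az network dns record-set a add-record \
    --resource-group aks-observability-demo \
    --zone-name example-api.com \
    --record-set-name ${SUBDOMAIN} \
    --ipv4-address ${INGRESS_IP}
done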
Testing the Platform
With everything deployed, test that the platform is responding and generate HTTP traffic for the observability tools to record. Similar to last time, I have chosen hey, a modern load generator and benchmarking tool, and a worthy replacement for Apache Bench (ab
). Unlike ab
, hey
supports HTTP/2. Below, I am running hey
directly from Azure Cloud Shell. The tool is simulating 10 concurrent users, generating a total of 500 HTTP GET requests to Service A.
# quick setup from Azure Shell using Bash
go get -u github.com/rakyll/hey
cd go/src/github.com/rakyll/hey/
go build

./hey -n 500 -c 10 -h2 http://api.dev.example-api.com/api/ping
We had 100% success with all 500 calls resulting in an HTTP 200 OK success status response code. Based on the results, we can observe the platform was capable of approximately 4 requests/second, with an average response time of 2.48 seconds and a mean time of 2.80 seconds. Almost all of that time was the result of waiting for the response, as the details indicate.
Logging
In this post, we have replaced GCP’s Stackdriver logging with Azure Monitor logs. According to Microsoft, Azure Monitor maximizes the availability and performance of applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from Cloud and on-premises environments. In my opinion, Stackdriver is a superior solution for searching and correlating the logs of distributed applications running on Kubernetes. I find the interface and query language of Stackdriver easier and more intuitive than Azure Monitor, which although powerful, requires substantial query knowledge to obtain meaningful results. For example, here is a query to view the log entries from the services in the dev
Namespace, within the last day.
let startTimestamp = ago(1d);
KubePodInventory
| where TimeGenerated > startTimestamp
| where ClusterName =~ "aks-observability-demo"
| where Namespace == "dev"
| where Name contains "service-"
| distinct ContainerID
| join
(
    ContainerLog
    | where TimeGenerated > startTimestamp
) on ContainerID
| project LogEntrySource, LogEntry, TimeGenerated, Name
| order by TimeGenerated desc
| render table
Below, we see the Logs interface with the search query and log entry results.
Below, we see a detailed view of a single log entry from Service A.
Observability Tools
The previous post goes into greater detail on the features of each of the observability tools provided by Istio, including Prometheus, Grafana, Jaeger, and Kiali.
We can use the exact same kubectl port-forward
commands to connect to the tools on AKS as we did on GKE. According to Google, since Kubernetes v1.10, Kubernetes port forwarding allows using a resource name, such as a service name, to select a matching pod to port forward to. We forward a local port to a port on the tool's pod.
# Grafana
kubectl port-forward -n istio-system \
  $(kubectl get pod -n istio-system -l app=grafana \
  -o jsonpath='{.items[0].metadata.name}') 3000:3000 &

# Prometheus
kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=prometheus \
  -o jsonpath='{.items[0].metadata.name}') 9090:9090 &

# Jaeger
kubectl port-forward -n istio-system \
  $(kubectl get pod -n istio-system -l app=jaeger \
  -o jsonpath='{.items[0].metadata.name}') 16686:16686 &

# Kiali
kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=kiali \
  -o jsonpath='{.items[0].metadata.name}') 20001:20001 &
Prometheus and Grafana
Prometheus is a completely open source and community-driven systems monitoring and alerting toolkit, originally built at SoundCloud circa 2012. Interestingly, Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second hosted project, after Kubernetes.
Grafana describes itself as the leading open source software for time series analytics. According to Grafana Labs, Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. You can easily create, explore, and share visually-rich, data-driven dashboards. Grafana also allows users to visually define alert rules for your most important metrics. Grafana will continuously evaluate rules and can send notifications.
According to Istio, the Grafana add-on is a pre-configured instance of Grafana. The Grafana Docker base image has been modified to start with both a Prometheus data source and the Istio Dashboard installed. Below, we see one of the pre-configured dashboards, the Istio Service Dashboard.
Jaeger
According to their website, Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems, including distributed context propagation, distributed transaction monitoring, root cause analysis, service dependency analysis, and performance and latency optimization. The Jaeger website contains a good overview of Jaeger’s architecture and general tracing-related terminology.
Below, we see a typical, distributed trace of the services, starting at the Istio Ingress Gateway and passing across the upstream service dependencies.
Kiali
According to their website, Kiali provides answers to the questions: What are the microservices in my Istio service mesh, and how are they connected? Kiali works with Istio, in OpenShift or Kubernetes, to visualize the service mesh topology, to provide visibility into features like circuit breakers, request rates and more. It offers insights about the mesh components at different levels, from abstract Applications to Services and Workloads.
There is a common Kubernetes Secret that controls access to the Kiali API and UI. The default login is admin
, the password is 1f2d1e2e67df
.
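You can confirm the credentials stored in the Secret directly with kubectl; a minimal sketch, assuming the default Secret name and keys used by the Istio Helm chart's Kiali add-on.

# decode the Kiali username and passphrase from the Kubernetes Secret
kubectl get secret kiali -n istio-system \
  -o jsonpath='{.data.username}' | base64 --decode; echo
kubectl get secret kiali -n istio-system \
  -o jsonpath='{.data.passphrase}' | base64 --decode; echo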
Below, we see a detailed view of our platform, running in the dev
namespace, on AKS.
Delete AKS Cluster
Once you are finished with this demo, use the following two commands to tear down the AKS cluster and remove the cluster context from your local configuration.
time az aks delete \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --yes

kubectl config delete-context aks-observability-demo
Conclusion
In this brief, follow-up post, we have explored how the current set of observability tools, part of the latest version of Istio Service Mesh, integrates with Azure Kubernetes Service (AKS).
All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.
Building and Integrating LUIS-enabled Chatbots with Slack, using Azure Bot Service, Bot Builder SDK, and Cosmos DB
Posted by Gary A. Stafford in Azure, Cloud, Continuous Delivery, JavaScript, Software Development on August 27, 2018
Introduction
In this post, we will explore the development of a machine learning-based LUIS-enabled chatbot using the Azure Bot Service and the BotBuilder SDK. We will enhance the chatbot’s functionality with Azure’s Cloud services, including Cosmos DB and Blob Storage. Once built, we will integrate our chatbot across multiple channels, including Web Chat and Slack.
If you want to compare Azure’s current chatbot technologies with those of AWS and Google, in addition to this post, please read my previous two posts in this series, Building Serverless Actions for Google Assistant with Google Cloud Functions, Cloud Datastore, and Cloud Storage and Building Asynchronous, Serverless Alexa Skills with AWS Lambda, DynamoDB, S3, and Node.js. All three articles’ demonstrations are written in Node.js, all three leverage their cloud platform’s machine learning-based Natural Language Understanding services, and all three take advantage of NoSQL database and storage services available on their respective cloud platforms.
Technology Stack
Here is a brief overview of the key Microsoft technologies we will incorporate into our bot’s architecture.
LUIS
The machine learning-based Language Understanding Intelligent Service (LUIS) is part of Azure’s Cognitive Services, used to build Natural Language Understanding (NLU) into apps, bots, and IoT devices. According to Microsoft, LUIS allows you to quickly create enterprise-ready, custom machine learning models that continuously improve.
Designed to identify valuable information in conversations, Language Understanding interprets user goals (intents) and distills valuable information from sentences (entities), for a high quality, nuanced language model. Language Understanding integrates seamlessly with the Speech service for instant Speech-to-Intent processing, and with the Azure Bot Service, making it easy to create a sophisticated bot. A LUIS bot contains a domain-specific natural language model, which you design.
Azure Bot Service
The Azure Bot Service provides an integrated environment that is purpose-built for bot development, enabling you to build, connect, test, deploy, and manage intelligent bots, all from one place. Bot Service leverages the Bot Builder SDK.
Bot Builder SDK
The Bot Builder SDK allows you to build, connect, deploy, and manage bots, which interact with users across multiple channels, from your app or website to Facebook Messenger, Kik, Skype, Slack, Microsoft Teams, Telegram, SMS (Twilio), Cortana, and Skype for Business. Currently, the SDK is available for C# and Node.js. For this post, we will use the current Bot Builder Node.js SDK v3 release to write our chatbot.
Cosmos DB
According to Microsoft, Cosmos DB is a globally distributed, multi-model database-as-a-service, designed for low latency and scalable applications anywhere in the world. Cosmos DB supports multiple data models, including document, columnar, and graph. Cosmos also supports numerous database SDKs, including MongoDB, Cassandra, and Gremlin. We will use the MongoDB SDK to store our documents in Cosmos DB, used by our chatbot.
Azure Blob Storage
According to Microsoft, Azure’s storage-as-a-service, Blob Storage, provides massively scalable object storage for any type of unstructured data, images, videos, audio, documents, and more. We will be using Blob Storage to store publicly accessible images, used by our chatbot.
Azure Application Insights
According to Microsoft, Azure’s Application Insights provides comprehensive, actionable insights through application performance management (APM) and instant analytics. Quickly analyze application telemetry, allowing the detection of anomalies, application failures, and performance changes. Application Insights will enable us to monitor our chatbot’s key metrics.
High-Level Architecture
A chatbot user interacts with the chatbot through a number of available channels, such as the Web, Slack, and Skype. The channels communicate with the Web App Bot, part of the Azure Bot Service, running on Azure’s App Service, the fully-managed platform for cloud apps. LUIS integration allows the chatbot to learn and understand the user’s intent based on our own domain-specific natural language model.
Through Azure’s App Service platform, our chatbot is able to retrieve data from Cosmos DB and images from Blob Storage. Our chatbot’s telemetry is available through Azure’s Application Insights.
Azure Resources
Another way to understand our chatbot architecture is by examining the Azure resources necessary to build the chatbot. Below is an example of all the Azure resources that will be created as a result of building a LUIS-enabled bot, which has been integrated with Cosmos DB, Blob Storage, and Application Insights.
Chatbot Demonstration
As a demonstration, we will build an informational chatbot, the Azure Tech Facts Chatbot. The bot will respond to the user with interesting facts about Azure, Microsoft’s Cloud computing platform. Note this is not intended to be an official Microsoft bot and is only used for demonstration purposes.
Source Code
All open-sourced code for this post can be found on GitHub. The code samples in this post are displayed as GitHub Gists, which may not display correctly on some mobile and social media browsers. Links to the gists are also provided.
Development Process
This post will focus on the development and integration of a chatbot with the LUIS, Azure platform services, and channels, such as Web Chat and Slack. The post is not intended to be a general how-to article on developing Azure chatbots or the use of the Azure Cloud Platform.
Building the chatbot will involve the following steps.
- Design the chatbot’s conversation flow;
- Provision a Cosmos DB instance and import the Azure Facts documents;
- Provision Azure Storage and upload the images as blobs into Azure Storage;
- Create the new LUIS-enabled Web App Bot with Azure’s Bot Service;
- Define the chatbot’s Intents, Entities, and Utterances with LUIS;
- Train and publish the LUIS app;
- Deploy and test the chatbot;
- Integrate the chatbot with Web Chat and Slack Channels.
The post assumes you have an existing Azure account and a working knowledge of Azure. Let’s explore each step in more detail.
Cost of Azure Bots!
Be aware, you will be charged for Azure Cloud services when building this bot. Unlike an Alexa Custom Skill or an Action for Google Assistant, an Azure chatbot is not a serverless application. A common feature of serverless platforms is that you only pay for the compute time you consume; there typically is no charge when your code is not running. This means, unlike AWS and Google Cloud Platform, you will pay for Azure resources you provision, whether or not you use them.
Developing this demo’s chatbot on the Azure platform, with little or no activity most of the time, cost me about $5/day. On AWS or GCP, a similar project would cost pennies per day or less (like, $0). Currently, in my opinion, Azure does not have a very competitive model for building bots, or for serverless computing in general, beyond Azure Functions, when compared to Google and AWS.
Conversational Flow
The first step in developing a chatbot is designing the conversational flow between the user and the bot. Defining the conversation flow is essential to developing the bot’s programmatic logic and training the domain-specific natural language model for the machine learning-based services the bot is integrated with, in this case, LUIS. What are all the ways the user might explicitly invoke our chatbot? What are all the ways the user might implicitly invoke our chatbot and provide intent to the bot? Taking the time to map out the possible conversational interactions is essential.
With more advanced bots, like Alexa, Actions for Google Assistant, and Azure Bots, we also have to consider the visual design of the conversational interfaces. In addition to simple voice and text responses, these bots are capable of responding with a rich array of UX elements, including what are generically known as ‘Cards’. Cards come in varying complexity and may contain elements such as a title, sub-titles, text, video, audio, buttons, and links. Azure Bot Service offers several different cards for specific use cases.
Channel Design
Another layer of complexity with bots is designing for the channels into which they integrate. There is a substantial visual difference in a conversational exchange displayed on Facebook Messenger, as compared to Slack, Skype, Microsoft Teams, GroupMe, or within a web browser. Producing an effective conversational flow presentation across multiple channels is a design challenge.
We will be focusing on two channels for delivery of our bot, the Web Chat and Slack channels. We will want to design the conversational flow and visual elements to be effective and engaging across both channels. The added complexity is that both channels have mobile and web-based interfaces. We will ensure our design works with the compact real estate of an average-sized mobile device screen, as well as an average-sized laptop screen.
Web Chat Channel Design
Below are two views of our chatbot, delivered through the Web Chat channel. To the left, is an example of the bot responding with ThumbnailCard UX elements. The ThumbnailCards contain a title, sub-title, text, small image, and a button with a link. Below and to the right is an example of the bot responding with a HeroCard. The HeroCard contains the same elements as the ThumbnailCard but takes up about twice the space with a significantly larger image.
Slack Channel Design
Below are three views of our chatbot, delivered through the Slack channel, in this case, the mobile iOS version of the Slack app. Even here on a larger iPhone 8s, there is not a lot of real estate. On the left is the same HeroCard as we saw above in the Web Chat channel. In the middle are the same ThumbnailCards. On the right is a simple text-only response. Although the text-only bot responses are not as rich as the cards, you are able to display more of the conversational flow on a single mobile screen.
Lastly, below we see our chatbot delivered through the Slack for Mac desktop app. Within the single view, we see an example of a HeroCard (top), ThumbnailCard (center), and a text-only response (bottom). Notice how the larger UI of the desktop Slack app changes the look and feel of the chatbot conversational flow.
In my opinion, the ThumbnailCards work well in the Web Chat channel and Slack channel’s desktop app, while the text-only responses seem to work best with the smaller footprint of the Slack channel’s mobile client. To work across a number of channels, our final bot will contain a mix of ThumbnailCards and text-only responses.
Cosmos DB
As an alternative to Microsoft’s Cognitive Service, QnA Maker, we will use Cosmos DB to house the responses to user’s requests for facts about Azure. When a user asks our informational chatbot for a fact about Azure, the bot will query Cosmos DB, passing a single unique string value, the fact the user is requesting. In response, Cosmos DB will return a JSON Document, containing field-and-value pairs with the fact’s title, image name, and textual information, as shown below.
There are a few ways to create the new Cosmos DB database and collection, which will hold our documents; we will use the Azure CLI. According to Microsoft, the Azure CLI 2.0 is Microsoft’s cross-platform command line interface (CLI) for managing Azure resources. You can use it in your browser with Azure Cloud Shell, or install it on macOS, Linux, or Windows, and run it from the command line. (gist).
#!/usr/bin/env sh

# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License

set -ex

LOCATION="<value_goes_here>"
RESOURCE_GROUP="<value_goes_here>"
ACCOUNT_NAME="<value_goes_here>"
COSMOS_ACCOUNT_KEY="<value_goes_here>"
DB_NAME="<value_goes_here>"
COLLECTION_NAME="<value_goes_here>"

az login

az cosmosdb create \
  --name ${ACCOUNT_NAME} \
  --resource-group ${RESOURCE_GROUP} \
  --locations "East US=0" \
  --kind MongoDB

az cosmosdb database create \
  --name ${ACCOUNT_NAME} \
  --db-name ${DB_NAME} \
  --resource-group-name ${RESOURCE_GROUP}

az cosmosdb collection create \
  --collection-name ${COLLECTION_NAME} \
  --name ${ACCOUNT_NAME} \
  --db-name ${DB_NAME} \
  --resource-group-name ${RESOURCE_GROUP} \
  --throughput 400 \
  --key ${COSMOS_ACCOUNT_KEY} \
  --verbose --debug
There are a few ways for us to get our Azure facts documents into Cosmos DB. Since we are writing our chatbot in Node.js, I also chose to write a Cosmos DB facts import script in Node.js, cosmos-db-data.js. Since we are using Cosmos DB as a MongoDB datastore, all the script requires is the official MongoDB driver for Node.js. Using the MongoDB driver’s db.collection.insertMany() method, we can upload an entire array of Azure fact document objects with one call. For security, we have set the Cosmos DB connection string as an environment variable, which the script expects to find at runtime (gist).
// author: Gary A. Stafford
// site: https://programmaticponderings.com
// license: MIT License
// Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-samples

// Insert Azure Facts into Cosmos DB Collection

'use strict';

/* CONSTANTS */
const mongoClient = require('mongodb').MongoClient;
const assert = require('assert');

const COSMOS_DB_CONN_STR = process.env.COSMOS_DB_CONN_STR;
const DB_COLLECTION = "azuretechfacts";

const azureFacts = [
    {
        "fact": "certifications",
        "title": "Azure Certifications",
        "image": "image-06.png",
        "response": "As of June 2018, Microsoft offered ten Azure certification exams, allowing IT professionals the ability to differentiate themselves and validate their knowledge and skills."
    },
    {
        "fact": "cognitive",
        "title": "Cognitive Services",
        "image": "image-09.png",
        "response": "Azure's intelligent algorithms allow apps to see, hear, speak, understand and interpret user needs through natural methods of communication."
    },
    {
        "fact": "competition",
        "title": "Azure's Competition",
        "image": "image-05.png",
        "response": "According to the G2 Crowd website, Azure's Cloud competitors include Amazon Web Services (AWS), Digital Ocean, Google Compute Engine (GCE), and Rackspace."
    },
    {
        "fact": "compliance",
        "title": "Compliance",
        "image": "image-06.png",
        "response": "Microsoft provides the most comprehensive set of compliance offerings (including certifications and attestations) of any cloud service provider."
    },
    {
        "fact": "cosmos",
        "title": "Azure Cosmos DB",
        "image": "image-17.png",
        "response": "According to Microsoft, Cosmos DB is a globally distributed, multi-model database service, designed for low latency and scalable applications anywhere in the world, with native support for NoSQL."
    },
    {
        "fact": "kubernetes",
        "title": "Azure Kubernetes Service (AKS)",
        "image": "image-18.png",
        "response": "According to Microsoft, Azure Kubernetes Service (AKS) is a fully managed Kubernetes container orchestration service, which simplifies Kubernetes management, deployment, and operations."
    }
];

const insertDocuments = function (db, callback) {
    db.collection(DB_COLLECTION).insertMany(azureFacts, function (err, result) {
        assert.equal(err, null);
        console.log(`Inserted documents into the ${DB_COLLECTION} collection.`);
        callback();
    });
};

const findDocuments = function (db, callback) {
    const cursor = db.collection(DB_COLLECTION).find();
    cursor.each(function (err, doc) {
        assert.equal(err, null);
        if (doc != null) {
            console.dir(doc);
        } else {
            callback();
        }
    });
};

const deleteDocuments = function (db, callback) {
    db.collection(DB_COLLECTION).deleteMany(
        {},
        function (err, results) {
            console.log(results);
            callback();
        }
    );
};

mongoClient.connect(COSMOS_DB_CONN_STR, function (err, client) {
    assert.equal(null, err);
    const db = client.db(DB_COLLECTION);

    deleteDocuments(db, function () {
        insertDocuments(db, function () {
            findDocuments(db, function () {
                client.close();
            });
        });
    });
});
Azure Blob Storage
When a user asks our informational chatbot for a fact about Azure, the bot will query Cosmos DB. One of the values returned is an image name. The image itself is stored on Azure Blob Storage.
The image, actually an Azure icon available from Microsoft, is then displayed in the ThumbnailCard or HeroCard shown earlier.
According to Microsoft, an Azure storage account provides a unique namespace in the cloud to store and access your data objects in Azure Storage. A storage account contains any blobs, files, queues, tables, and disks that you create under that account. A container organizes a set of blobs, similar to a folder in a file system. All blobs reside within a container. Similar to Cosmos DB, there are a few ways to create a new Azure Storage account and a blob storage container, which will hold our images. Once again, we will use the Azure CLI (gist).
#!/usr/bin/env sh

# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License

set -ex

## CONSTANTS ##
ACCOUNT_KEY="<value_goes_here>"
LOCATION="<value_goes_here>"
RESOURCE_GROUP="<value_goes_here>"
STORAGE_ACCOUNT="<value_goes_here>"
STORAGE_CONTAINER="<value_goes_here>"

az login

az storage account create \
  --name ${STORAGE_ACCOUNT} \
  --default-action Allow \
  --kind Storage \
  --location ${LOCATION} \
  --resource-group ${RESOURCE_GROUP} \
  --sku Standard_LRS

az storage container create \
  --name ${STORAGE_CONTAINER} \
  --fail-on-exist \
  --public-access blob \
  --account-name ${STORAGE_ACCOUNT} \
  --account-key ${ACCOUNT_KEY}
Once the storage account and container are created using the Azure CLI, upload the images, included with the GitHub project, using the Azure CLI’s storage blob upload-batch
command (gist).
#!/usr/bin/env sh

# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License

set -ex

## CONSTANTS ##
ACCOUNT_KEY="<value_goes_here>"
STORAGE_ACCOUNT="<value_goes_here>"
STORAGE_CONTAINER="<value_goes_here>"

az storage blob upload-batch \
  --account-name ${STORAGE_ACCOUNT} \
  --account-key ${ACCOUNT_KEY} \
  --destination ${STORAGE_CONTAINER} \
  --source ../azure-tech-facts-google/pics/ \
  --pattern "image-*.png"
# --dryrun
Web App Chatbot
To create the LUIS-enabled chatbot, we can use the Azure Bot Service, available on the Azure Portal. A Web App Bot is one of a variety of bots available from Azure’s Bot Service, which is part of Azure’s larger and quickly growing suite of AI and Machine Learning Cognitive Services. A Web App Bot is an Azure Bot Service Bot deployed to an Azure App Service Web App. An App Service Web App is a fully managed platform that lets you build, deploy, and scale enterprise-grade web apps.
To create a LUIS-enabled chatbot, choose the Language Understanding Bot template, from the Node.js SDK Language options. This will provide a complete project and boilerplate bot template, written in Node.js, for you to start developing with. I chose to use the SDK v3, as v4 is still in preview and subject to change.
Azure Resource Manager
A great DevOps feature of the Azure platform is Azure’s ability to generate Azure Resource Manager (ARM) templates and the associated automation scripts in PowerShell, .NET, Ruby, and the CLI. This allows engineers to programmatically build and provision services on the Azure platform, without having to write the code themselves.
To build our chatbot, you can continue from the Azure Portal as I did, or download the ARM template and scripts, and run them locally. Once you have created the chatbot, you will have the option to download the source code as a ZIP file from the Bot Management Build console. I prefer to use the JetBrains WebStorm IDE to develop my Node.js-based bots, and GitHub to store my source code.
Application Settings
As part of developing the chatbot, you will need to add two additional application settings to the Azure App Service the chatbot is running within. The Cosmos DB connection string (COSMOS_DB_CONN_STR
) and the URL of your blob storage container (ICON_STORAGE_URL
) will both be referenced from within our bot, as environment variables. You can manually add the key/value pairs from the Azure Portal (shown below), or programmatically.
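A minimal sketch of adding the settings programmatically with the Azure CLI; the App Service name and Resource Group shown here are hypothetical placeholders, and the values are your own.

# add the two settings to the chatbot's App Service (web app)
az webapp config appsettings set \
  --name azure-tech-facts-bot \
  --resource-group azure-tech-facts-bot-rg \
  --settings \
    COSMOS_DB_CONN_STR="<connection_string_goes_here>" \
    ICON_STORAGE_URL="https://<storage_account>.blob.core.windows.net/<container>"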
The chatbot’s code, in the app.js file, is divided into three sections: Constants and Global Variables, Intent Handlers, and Helper Functions. Let’s look at each section and its functionality.
Constants
Below are the constants and global variables used by the chatbot. I have preserved Azure’s boilerplate template comments in the app.js file. The comments are helpful in understanding the detailed inner-workings of the chatbot code (gist).
// author: Gary A. Stafford
// site: https://programmaticponderings.com
// license: MIT License
// description: Azure Tech Facts LUIS-enabled Chatbot

'use strict';

/*-----------------------------------------------------------------------------
A simple Language Understanding (LUIS) bot for the Microsoft Bot Framework.
-----------------------------------------------------------------------------*/

/* CONSTANTS AND GLOBAL VARIABLES */

const restify = require('restify');
const builder = require('botbuilder');
const botbuilder_azure = require("botbuilder-azure");
const mongoClient = require('mongodb').MongoClient;

const COSMOS_DB_CONN_STR = process.env.COSMOS_DB_CONN_STR;
const DB_COLLECTION = "azuretechfacts";
const ICON_STORAGE_URL = process.env.ICON_STORAGE_URL;

// Setup Restify Server
const server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978, function () {
    console.log('%s listening to %s', server.name, server.url);
});

// Create chat connector for communicating with the Bot Framework Service
const connector = new builder.ChatConnector({
    appId: process.env.MicrosoftAppId,
    appPassword: process.env.MicrosoftAppPassword,
    openIdMetadata: process.env.BotOpenIdMetadata
});

// Listen for messages from users
server.post('/api/messages', connector.listen());

/*----------------------------------------------------------------------------------------
* Bot Storage: This is a great spot to register the private state storage for your bot.
* We provide adapters for Azure Table, CosmosDb, SQL Azure, or you can implement your own!
* For samples and documentation, see: https://github.com/Microsoft/BotBuilder-Azure
* ---------------------------------------------------------------------------------------- */

const tableName = 'botdata';
const azureTableClient = new botbuilder_azure.AzureTableClient(tableName, process.env['AzureWebJobsStorage']);
const tableStorage = new botbuilder_azure.AzureBotStorage({gzipData: false}, azureTableClient);

// Create your bot with a function to receive messages from the user
// This default message handler is invoked if the user's utterance doesn't
// match any intents handled by other dialogs.
const bot = new builder.UniversalBot(connector, function (session, args) {
    const DEFAULT_RESPONSE = `Sorry, I didn't understand: _'${session.message.text}'_.`;
    session.send(DEFAULT_RESPONSE).endDialog();
});

bot.set('storage', tableStorage);

// Make sure you add code to validate these fields
const luisAppId = process.env.LuisAppId;
const luisAPIKey = process.env.LuisAPIKey;
const luisAPIHostName = process.env.LuisAPIHostName || 'westus.api.cognitive.microsoft.com';
const LuisModelUrl = 'https://' + luisAPIHostName + '/luis/v2.0/apps/' + luisAppId + '?subscription-key=' + luisAPIKey;

// Create a recognizer that gets intents from LUIS, and add it to the bot
const recognizer = new builder.LuisRecognizer(LuisModelUrl);
bot.recognizer(recognizer);
Notice that using the LUIS-enabled Language Understanding template, Azure has provisioned a LUIS.ai app and integrated it with our chatbot. More about LUIS, next.
Intent Handlers
The next part of our chatbot’s code handles intents. Our chatbot’s intents include Greeting, Help, Cancel, and AzureFacts. The Greeting intent handler defines how the bot handles greeting a new user when they make an explicit invocation of the chatbot (without intent). The Help intent handler defines how the chatbot handles a request for help. The Cancel intent handler defines how the bot handles a user’s desire to quit, or if an unknown error occurs with our bot. The AzureFacts intent handler handles implicit invocations of the chatbot (with intent), by returning the requested Azure fact. We will use LUIS to train the AzureFacts intent in the next part of this post.
Each intent handler can return a different type of response to the user. For this demo, we will have the Greeting, Help, and AzureFacts handlers return a ThumbnailCard, while the Cancel handler simply returns a text message (gist).
/* INTENT HANDLERS */

// Add a dialog for each intent that the LUIS app recognizes.
// See https://docs.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-recognize-intent-luis
bot.dialog('GreetingDialog',
    (session) => {
        const WELCOME_TEXT_LONG = `You can say things like: \n` +
            `_'Tell me about Azure certifications.'_ \n` +
            `_'When was Azure released?'_ \n` +
            `_'Give me a random fact.'_`;

        let botResponse = {
            title: 'What would you like to know about Microsoft Azure?',
            response: WELCOME_TEXT_LONG,
            image: 'image-16.png'
        };

        let card = createThumbnailCard(session, botResponse);
        let msg = new builder.Message(session).addAttachment(card);
        // let msg = botResponse.response;

        session.send(msg).endDialog();
    }
).triggerAction({
    matches: 'Greeting'
});

bot.dialog('HelpDialog',
    (session) => {
        const FACTS_LIST = "Certifications, Cognitive Services, Competition, Compliance, First Offering, Functions, " +
            "Geographies, Global Infrastructure, Platforms, Categories, Products, Regions, and Release Date";

        const botResponse = {
            title: 'Need a little help?',
            response: `Current facts include: ${FACTS_LIST}.`,
            image: 'image-15.png'
        };

        let card = createThumbnailCard(session, botResponse);
        let msg = new builder.Message(session).addAttachment(card);

        session.send(msg).endDialog();
    }
).triggerAction({
    matches: 'Help'
});

bot.dialog('CancelDialog',
    (session) => {
        const CANCEL_RESPONSE = 'Goodbye.';
        session.send(CANCEL_RESPONSE).endDialog();
    }
).triggerAction({
    matches: 'Cancel'
});

bot.dialog('AzureFactsDialog',
    (session, args) => {
        let query;
        let entity = args.intent.entities[0];
        let msg = new builder.Message(session);

        if (entity === undefined) { // unknown Facts entity was requested
            msg = 'Sorry, you requested an unknown fact.';
            console.log(msg);
            session.send(msg).endDialog();
        } else {
            query = entity.resolution.values[0];
            console.log(`Entity: ${JSON.stringify(entity)}`);

            buildFactResponse(query, function (document) {
                if (!document) {
                    msg = `Sorry, seems we are missing the fact, '${query}'.`;
                    console.log(msg);
                } else {
                    let card = createThumbnailCard(session, document);
                    msg = new builder.Message(session).addAttachment(card);
                }

                session.send(msg).endDialog();
            });
        }
    }
).triggerAction({
    matches: 'AzureFacts'
});
Helper Functions
The last part of our chatbot’s code are the helper functions the intent handlers call. The functions include a function to return a random fact if the user requests one, selectRandomFact()
. There are two functions, which return a ThumbnailCard or a HeroCard, depending on the request, createHeroCard(session, botResponse)
and createThumbnailCard(session, botResponse)
.
The buildFactResponse(factToQuery, callback)
function is called by the AzureFacts intent handler. This function passes the fact from the user (i.e. certifications) and a callback to the findFact(factToQuery, callback)
function. The findFact
function handles calling Cosmos DB, using the MongoDB Node.js Driver’s db.collection().findOne() method. The function also invokes a callback with the result (gist).
/* HELPER FUNCTIONS */

function selectRandomFact() {
    const FACTS_ARRAY = ['description', 'released', 'global', 'regions',
        'geographies', 'platforms', 'categories', 'products', 'cognitive',
        'compliance', 'first', 'certifications', 'competition', 'functions'];

    return FACTS_ARRAY[Math.floor(Math.random() * FACTS_ARRAY.length)];
}

function buildFactResponse(factToQuery, callback) {
    if (factToQuery.toString().trim() === 'random') {
        factToQuery = selectRandomFact();
    }

    mongoClient.connect(COSMOS_DB_CONN_STR, function (err, client) {
        const db = client.db(DB_COLLECTION);
        findFact(db, factToQuery, function (document) {
            client.close();
            if (!document) {
                console.log(`No document returned for value of ${factToQuery}?`);
            }
            return callback(document);
        });
    });
}

function createHeroCard(session, botResponse) {
    return new builder.HeroCard(session)
        .title('Azure Tech Facts')
        .subtitle(botResponse.title)
        .text(botResponse.response)
        .images([
            builder.CardImage.create(session, `${ICON_STORAGE_URL}/${botResponse.image}`)
        ])
        .buttons([
            builder.CardAction.openUrl(session, 'https://azure.microsoft.com', 'Learn more...')
        ]);
}

function createThumbnailCard(session, botResponse) {
    return new builder.ThumbnailCard(session)
        .title('Azure Tech Facts')
        .subtitle(botResponse.title)
        .text(botResponse.response)
        .images([
            builder.CardImage.create(session, `${ICON_STORAGE_URL}/${botResponse.image}`)
        ])
        .buttons([
            builder.CardAction.openUrl(session, 'https://azure.microsoft.com', 'Learn more...')
        ]);
}

function findFact(db, factToQuery, callback) {
    console.log(`fact to query: ${factToQuery}`);
    db.collection(DB_COLLECTION).findOne({"fact": factToQuery})
        .then(function (document) {
            if (!document) {
                console.log(`No document found for fact '${factToQuery}'`);
            }
            console.log(`Document found: ${JSON.stringify(document)}`);
            return callback(document);
        });
}
LUIS
We will use LUIS to add a perceived degree of intelligence to our chatbot, helping it understand the domain-specific natural language model of our bot. If you have built Alexa Skills or Actions for Google Assistant, LUIS apps work almost identically. The concepts of Intents, Intent Handlers, Entities, and Utterances are universal to all three platforms.
Intents are how LUIS determines what a user wants to do. LUIS will parse user utterances, understand the user’s intent, and pass that intent onto our chatbot, to be handled by the proper intent handler. The bot will then respond accordingly to that intent — with a greeting, with the requested fact, or by providing help.
Entities
LUIS describes an entity as being like a variable, used to capture and pass important information. We will start by defining our AzureFacts Intent’s Facts Entities. The Facts entities represent each type of fact a user might request. The requested fact is extracted from the user’s utterances and passed to the chatbot. LUIS allows us to import entities as JSON. I have included a set of Facts entities to import, in the azure-facts-entities.json file, within the project on GitHub (gist).
[
  {
    "canonicalForm": "certifications",
    "list": [
      "certification",
      "certification exam",
      "certification exams"
    ]
  },
  {
    "canonicalForm": "cognitive",
    "list": [
      "cognitive services",
      "cognitive service"
    ]
  },
  {
    "canonicalForm": "competition",
    "list": [
      "competitors",
      "competitor"
    ]
  },
  {
    "canonicalForm": "cosmos",
    "list": [
      "cosmosdb",
      "cosmos db",
      "nosql",
      "mongodb"
    ]
  },
  {
    "canonicalForm": "kubernetes",
    "list": [
      "aks",
      "kubernetes service",
      "docker",
      "containers"
    ]
  }
]
Each entity includes a primary canonical form, as well as possible synonyms the user may utter in their invocation. If the user utters a synonym, LUIS will understand the intent and pass the canonical form of the entity to the chatbot. For example, if we said ‘tell me about Azure AKS,’ LUIS understands the phrasing, identifies the correct intent, AzureFacts intent, substitutes the synonym, ‘AKS’, with the canonical form of the Facts entity, ‘kubernetes’, and passes the top scoring intent to be handled and the value ‘kubernetes’ to our bot. The bot would then query for the document associated with ‘kubernetes’ in Cosmos DB, and return a response.
Utterances
Once we have created and associated our Facts entities with our AzureFacts intent, we need to input a few possible phrases a user may utter to invoke our chatbot. Microsoft actually recommends not coding too many utterance examples, as part of their best practices. Below you see an elementary list of possible phrases associated with the AzureFacts intent. You also see the blue highlighted word, ‘Facts’ in each utterance, a reference to the Facts entities. LUIS understands that this position in the phrasing represents a Facts entity value.
Patterns
Patterns, according to Microsoft, are designed to improve the accuracy of LUIS, when several utterances are very similar. By providing a pattern for the utterances, LUIS can have higher confidence in the predictions.
The topic of training your LUIS app is well beyond the scope of this post. Microsoft has an excellent series of articles, I strongly suggest reading. They will greatly assist in improving the accuracy of your LUIS chatbot.
Once you have completed building and training your intents, entities, phrases, and other items, you must publish your LUIS app for it to be accessed from your chatbot. Publish allows LUIS to be queried from an HTTP endpoint. The LUIS interface will enable you to publish both a Staging and a Production copy of the app. For brevity, I published directly to Production. If you recall our chatbot’s application settings, earlier, the settings include a luisAppId
, luisAPIKey
, and a luisAPIHostName
. Together these form the HTTP endpoint, LuisModelUrl
, through which the chatbot will access the LUIS app.
Using the LUIS API endpoint, we can test our LUIS app, independent of our chatbot. This can be very useful for troubleshooting bot issues. Below, we see an example utterance of ‘tell me about Azure Functions.’ LUIS has correctly deduced the user intent, assigning the AzureFacts intent with the highest prediction score. LUIS also identified the correct Entity, ‘functions,’ which it would typically return to the chatbot.
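A minimal sketch of querying the published LUIS v2.0 endpoint with curl, using the same values that make up LuisModelUrl in the bot’s code; the region host name and the placeholder IDs are assumptions you will replace with your own.

# query the published LUIS app directly, passing an utterance as the 'q' parameter
curl --get "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<luis_app_id>" \
  --data-urlencode "subscription-key=<luis_api_key>" \
  --data-urlencode "q=tell me about Azure Functions"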
Deployment
With our chatbot developed and our LUIS app built, trained, and published, we can deploy our bot to the Azure platform. There are a few ways to deploy our chatbot, depending on your platform and language choice. I chose to use the DevOps practice of Continuous Deployment, offered in the Azure Bot Management console.
With Continuous Deployment, every time we commit a change to GitHub, a webhook fires, and my chatbot is deployed to the Azure platform. If you have high confidence in your changes through testing, you could choose to commit and deploy directly.
Alternately, you might choose a safer approach, using feature branches or pull requests, in which case your chatbot will be deployed upon a successful merge of the feature branch or pull request to master.
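Continuous Deployment can be configured from the Azure Portal, as I did, or with the Azure CLI. A rough sketch, assuming the same hypothetical App Service name and Resource Group used earlier, and your own GitHub repository.

# wire the App Service to a GitHub repository for continuous deployment
az webapp deployment source config \
  --name azure-tech-facts-bot \
  --resource-group azure-tech-facts-bot-rg \
  --repo-url https://github.com/<your_github_account>/<your_bot_repo> \
  --branch master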
Manual Testing
Azure provides the ability to test your bot from the Azure portal. Using the Bot Management Test in Web Chat console, you can test your bot using the Web Chat channel. We will talk about different kinds of channels later in the post. This is an easy and quick way to manually test your chatbot.
For more sophisticated, automated testing of your chatbot, there are a handful of frameworks available, including bot-tester, which integrates with mocha and chai. Stuart Harrison published an excellent article on testing with the bot-tester framework, Building a test-driven chatbot for the Microsoft Bot Framework in Node.js.
Log Streaming
As part of testing and troubleshooting our chatbot in Production, we will need to review application logs occasionally. Azure offers their Log streaming feature. To access log streams, you must turn on application logging and choose a log level in the Azure Monitoring Diagnostic logs console; it is off by default.
Once Application logging is active, you may review logs in the Monitoring Log stream console. Log streaming will automatically be turned off in 12 hours and times out after 30 minutes of inactivity. I personally find application logging and access to logs more difficult and far less intuitive on the Azure platform than on AWS or Google Cloud.
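Application logging and log streaming can also be enabled and tailed from the Azure CLI; a minimal sketch using the same hypothetical App Service, noting that the exact flag values for application logging vary between Azure CLI versions.

# enable filesystem application logging at the 'information' level
# (older CLI versions accept a boolean here instead of 'filesystem')
az webapp log config \
  --name azure-tech-facts-bot \
  --resource-group azure-tech-facts-bot-rg \
  --application-logging filesystem \
  --level information

# stream the logs to your terminal
az webapp log tail \
  --name azure-tech-facts-bot \
  --resource-group azure-tech-facts-bot-rg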
Metrics
As part of testing and eventually monitoring our chatbot in Production, the Azure App Service Overview console provides basic telemetry about the App Service, on which the bot is running. With Azure Application Insights, we can drill down into finer-grain application telemetry.
Web Integration
With our chatbot built, trained, deployed, and tested, we can integrate it with multiple channels. Channels represent all the ways our users might interact with our chatbot: Web Chat, Slack, Skype, Facebook Messenger, and so forth. Microsoft describes channels as a connection between the Bot Framework and communication apps. For this post, we will look at two channels, Web Chat and Slack.
Enabling Web Chat is probably the easiest of the channels. When you create a bot with Bot Service, the Web Chat channel is automatically configured for you. You used it to test your bot, earlier in the post. Displaying your chatbot through the Web Chat channel, embedded in your website, requires a secret, for which you have two options. Option one is to keep your secret hidden, exchange your secret for a token, and generate the embed. Option two, according to Microsoft, is to embed the web chat control in your website using the secret. This method will allow other developers to easily embed your bot into their websites.
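A rough sketch of option one, exchanging the Web Chat secret for a short-lived token before generating the embed, based on my understanding of the Web Chat token API at the time of writing; treat the endpoint and the authorization header format as assumptions to verify against Microsoft’s current documentation.

# exchange the Web Chat channel secret for a temporary token
curl --request GET "https://webchat.botframework.com/api/tokens" \
  --header "Authorization: BotConnector <web_chat_secret_goes_here>"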
Embedding Web Chat in your website allows your website users to interact directly with your chatbot. Shown below, I have embedded our chatbot’s Web Chat channel in my website. It will enable a user to interact with the bot, independent of the website’s primary content. Here, a user could ask for more information on a particular topic they found of interest in the article, such as Azure Kubernetes Service (AKS). The Web Chat window is collapsible when not in use.
The Web Chat is embedded using an HTML iframe tag. The HTML code to embed the Web Chat channel is included in the Web Chat Channel Configuration console, shown above. I found an effective piece of JavaScript code by Anthony Cu, in his post, Embed the Bot Framework Web Chat as a Collapsible Window. I modified Anthony’s code to fit my own website, moving the collapsible Web Chat iframe to the bottom right corner and adjusting the dimensions of the frame, as not to intrude on the site’s central content area. I’ve included the code and a simulated HTML page in the GitHub project.
Slack Integration
To integrate your chatbot with the Slack channel, I will assume you have an existing Slack Workspace with sufficient admin rights to create and deploy Slack Apps to that Workspace.
To integrate our chatbot with Slack, we need to create a new Bot App for Slack. The Slack App will be configured to interact with our deployed bot on Azure securely. Microsoft has an excellent set of easy-to-follow documentation on connecting a bot to Slack. I am only going to summarize the steps here.
Once your Slack App is created, the Slack App contains a set of credentials.
Those Slack App credentials are then shared with the chatbot, in the Azure Slack Channel integration console. This ensures secure communication between your chatbot and your Slack App.
Part of creating your Slack App is configuring Event Subscriptions. The Microsoft documentation outlines six bot events that need to be configured. By subscribing to bot events, your app will be notified of user activities at the URL you specify.
You also configure a Bot User. Adding a Bot User allows you to assign a username for your bot and choose whether it is always shown as online. This is the username you will see within your Slack app.
Once the Slack App is built, credentials are exchanged, events are configured, and the Bot User is created, you finish by authorizing your Slack App to interact with users within your Slack Workspace. Below I am authorizing the Azure Tech Facts Bot Slack App to interact with users in my Programmatic Ponderings Slack Workspace.
Below we see the Azure Tech Facts Bot Slack App deployed to my Programmatic Ponderings Slack Workspace. The Slack App is shown in the Slack for Mac desktop app.
Similarly, we see the same Azure Tech Facts Bot Slack App being used in the Slack iOS App.
Conclusion
In this brief post, we saw how to develop a machine learning-based, LUIS-enabled chatbot using the Azure Bot Service and the BotBuilder SDK. We enhanced the bot's functionality with two of Azure's cloud services, Cosmos DB and Blob Storage. Once built, we integrated our chatbot with the Web Chat and Slack channels.
This was a very simple demonstration of a LUIS chatbot. The true power of intelligent bots comes from further integrating bots with Azure AI and Machine Learning Services, such as Azure's Cognitive Services. Additionally, the Azure cloud platform offers other, more traditional cloud services, beyond Cosmos DB and Blob Storage, to extend the features and functionality of bots, such as messaging, IoT, serverless Azure Functions, and AKS-based microservices.
Azure is a trademark of Microsoft
The image in the background of Azure icon copyright: kran77 / 123RF Stock Photo
All opinions expressed in this post are my own and not necessarily the views of my current or past employers, their clients, or Microsoft.
Architecting Cloud-Optimized Apps with AKS (Azure’s Managed Kubernetes), Azure Service Bus, and Cosmos DB
Posted by Gary A. Stafford in Azure, Cloud, Java Development, Software Development on December 10, 2017
An earlier post, Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ, demonstrated the use of a message-based, event-driven, decoupled architectural approach for communications between microservices, using Spring AMQP and RabbitMQ. This distributed computing model is known as eventual consistency. To paraphrase microservices.io, ‘using an event-driven, eventually consistent approach, each service publishes an event whenever it updates its data. Other services subscribe to events. When an event is received, a service (subscriber) updates its data.’
That earlier post illustrated a fairly simple example application, the Voter API, consisting of a set of three Spring Boot microservices backed by MongoDB and RabbitMQ, and fronted by an API Gateway built with HAProxy. All API components were containerized using Docker and designed for use with Docker CE for AWS as the Container-as-a-Service (CaaS) platform.
Optimizing for Kubernetes on Azure
This post will demonstrate how a modern application, such as the Voter API, is optimized for Kubernetes in the Cloud (Kubernetes-as-a-Service), in this case, AKS, Azure’s new public preview of Managed Kubernetes for Azure Container Service. According to Microsoft, the goal of AKS is to simplify the deployment, management, and operations of Kubernetes. I wrote about AKS in detail, in my last post, First Impressions of AKS, Azure’s New Managed Kubernetes Container Service.
In addition to migrating to AKS, the Voter API will take advantage of additional enterprise-grade Azure resources, including Azure Service Bus and Cosmos DB, replacements for the Voter API's RabbitMQ and MongoDB. There are several architectural options for the Voter API's messaging and NoSQL data source requirements when moving to Azure.
- Keep Dockerized RabbitMQ and MongoDB – Easy to deploy to Kubernetes, but not easily scalable, highly-available, or manageable. Would require storage optimized Azure VMs for nodes, node affinity, and persistent storage for data.
- Replace with Cloud-based Non-Azure Equivalents – Use SaaS-based equivalents, such as CloudAMQP (RabbitMQ-as-a-Service) and MongoDB Atlas, which will provide scalability, high-availability, and manageability.
- Replace with Azure Service Bus and Cosmos DB – Provides all the advantages of SaaS-based equivalents, and additionally as Azure resources, benefits from being in the Azure Cloud alongside AKS.
Source Code
The Kubernetes resource files and deployment scripts used in this post are all available on GitHub. This is the only project you need to clone to reproduce the AKS example in this post.
git clone \ --branch master --single-branch --depth 1 --no-tags \ https://github.com/garystafford/azure-aks-sb-cosmosdb-demo.git
The Docker images for the three Spring microservices deployed to AKS, Voter, Candidate, and Election, are available on Docker Hub. Optionally, the source code, including Dockerfiles, for the Voter, Candidate, and Election microservices, as well as the Voter Client, are available on GitHub, in the kub-aks branch.
git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/candidate-service.git

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/election-service.git

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/voter-service.git

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/voter-client.git
Azure Service Bus
To demonstrate the capabilities of Azure Service Bus, the Voter API's Spring microservice source code has been re-written to work with Azure Service Bus instead of RabbitMQ. A future post will explore the microservices' messaging code. Note that a large application, written specifically for an easily portable technology such as RabbitMQ or MongoDB, would likely remain on that technology, even if the application were lifted and shifted to the Cloud or moved between Cloud Service Providers (CSPs). Portability is something important to keep in mind when choosing modern technologies.
Service Bus is Azure’s reliable cloud Messaging-as-a-Service (MaaS). Service Bus is an original Azure resource offering, available for several years. The core components of the Service Bus messaging infrastructure are queues, topics, and subscriptions. According to Microsoft, ‘the primary difference is that topics support publish/subscribe capabilities that can be used for sophisticated content-based routing and delivery logic, including sending to multiple recipients.’
Since none of the three Voter API microservices is required to produce messages for more than one other service consumer, Service Bus queues are sufficient, as opposed to a pub/sub model using Service Bus topics.
Cosmos DB
Cosmos DB, Microsoft’s globally distributed, multi-model database, offers throughput, latency, availability, and consistency guarantees with comprehensive service level agreements (SLAs). Ideal for the Voter API, Cosmos DB supports MongoDB’s data models through the MongoDB API, a MongoDB database service built on top of Cosmos DB. The MongoDB API is compatible with existing MongoDB libraries, drivers, tools, and applications. Therefore, there are no code changes required to convert the Voter API from MongoDB to Cosmos DB. I simply had to change the database connection string.
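For reference, the change amounts to pointing the existing Spring Data MongoDB URI at Cosmos DB. A minimal sketch is shown below; the account name and key are hypothetical placeholders, and the exact string should be copied from the Azure Portal (or retrieved with the Azure CLI).

# Retrieve the Cosmos DB (MongoDB API) connection strings with the Azure CLI (sketch)
az cosmosdb list-connection-strings \
  --name cosmosdb_instance_name_goes_here \
  --resource-group resource_group_name_goes_here

# The MongoDB-style connection string generally takes this form (hypothetical values):
# mongodb://<account-name>:<primary-key>@<account-name>.documents.azure.com:10255/?ssl=true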
NGINX Ingress Controller
Although the Voter API’s HAProxy-based API Gateway could be deployed to AKS, it is not optimal for Kubernetes. Instead, the Voter API will use an NGINX-based Ingress Controller. NGINX will serve as an API Gateway, as HAProxy did, previously.
According to NGINX, ‘an Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for your Kubernetes services. Such a load balancer usually exposes your services to clients outside of your Kubernetes cluster.’
An Ingress resource requires an Ingress Controller to function. Continuing from NGINX, ‘an Ingress Controller is an application that monitors Ingress resources via the Kubernetes API and updates the configuration of a load balancer in case of any changes. Different load balancers require different Ingress controller implementations. In the case of software load balancers, such as NGINX, an Ingress controller is deployed in a pod along with a load balancer.’
There are currently two NGINX-based Ingress Controllers available, one from Kubernetes and one directly from NGINX. All else being equal, for this post, I chose the Kubernetes version, without RBAC (Kubernetes offers versions with and without RBAC). RBAC should always be used for actual cluster security. There are several advantages to using either version of the NGINX Ingress Controller for Kubernetes, including Layer 4 TCP and UDP and Layer 7 HTTP load balancing, reverse proxying, ease of SSL termination, dynamically-configurable path-based rules, and support for multiple hostnames.
Azure Web App
Lastly, the Voter Client application, not really part of the Voter API, but useful for demonstration purposes, will be converted from a containerized application to an Azure Web App. Since it is not part of the Voter API, separating the Client application from AKS makes better architectural sense. Web Apps are a powerful, richly-featured, yet incredibly simple way to host applications and services on Azure. For more information on using Azure Web Apps, read my recent post, Developing Applications for the Cloud with Azure App Services and MongoDB Atlas.
Revised Component Architecture
Below is a simplified component diagram of the new architecture, including Azure Service Bus, Cosmos DB, and the NGINX Ingress Controller. The new architecture looks similar to the previous architecture, but as you will see, it is actually very different.
Process Flow
To understand the role of each API component, let's look at one of the event-driven, decoupled process flows, the creation of a new election candidate. In the simplified flow diagram below, an API consumer executes an HTTP POST request containing the new candidate object as JSON. The Candidate microservice receives the HTTP request and creates a new document in the Cosmos DB Voter database. A Spring RepositoryEventHandler within the Candidate microservice responds to the document creation and publishes a Create Candidate event message, containing the new candidate object as JSON, to the Azure Service Bus Candidate Queue.
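For illustration, a request like the one sketched below would kick off this flow. The endpoint path and the candidate fields shown are hypothetical placeholders, not necessarily the exact contract of the Candidate microservice.

# Create a new candidate via the Voter API (sketch; hypothetical path and fields)
curl -X POST https://api.voter-demo.com/candidate/candidates \
  -H "Content-Type: application/json" \
  -d '{
        "firstName": "Jane",
        "lastName": "Doe",
        "politicalParty": "Independent Party",
        "election": "2016 Presidential Election"
      }'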
Independently, the Voter microservice is listening to the Candidate Queue. Whenever a new message is produced by the Candidate microservice, the Voter microservice retrieves the message off the queue. The Voter microservice then transforms the new candidate object contained in the incoming message to its own candidate data model and creates a new document in its own Voter database.
The same process flows exist between the Election and the Candidate microservices. The Candidate microservice maintains current elections in its database, which are retrieved from the Election queue.
Data Models
It is useful to understand that the Candidate microservice's candidate domain model is not necessarily identical to the Voter microservice's candidate domain model. Each microservice may choose to maintain its own representation of a vote, a candidate, and an election. The Voter service transforms the new candidate object in the incoming message based on its own needs. In this case, the Voter microservice is only interested in a subset of the fields in the Candidate microservice's model. This is the beauty of decoupling microservices, their domain models, and their datastores.
Other Events
The versions of the Voter API microservices used for this post only support Election Created events and Candidate Created events. They do not handle Delete or Update events, which would be necessary to be fully functional. For example, if a candidate withdraws from an election, the Voter service would need to be notified so no one places votes for that candidate. This would normally happen through a Candidate Delete or Candidate Update event.
Provisioning Azure Service Bus
First, the Azure Service Bus is provisioned. Provisioning the Service Bus may be accomplished using several different methods, including manually using the Azure Portal or programmatically using Azure Resource Manager (ARM) with PowerShell or Terraform. I chose to provision the Azure Service Bus and the two queues using the Azure Portal for expediency. I chose the Basic Service Bus Tier of service, of which there are three tiers, Basic, Standard, and Premium.
The application requires two queues, the candidate.queue and the election.queue.
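Had I scripted the provisioning instead, the Azure CLI could create the namespace and both queues. The sketch below assumes a recent Azure CLI with the az servicebus command group; the namespace name is a hypothetical placeholder.

# Create a Basic-tier Service Bus namespace (sketch; hypothetical names)
az servicebus namespace create \
  --resource-group resource_group_name_goes_here \
  --name servicebus_namespace_goes_here \
  --location eastus \
  --sku Basic

# Create the two queues used by the microservices
az servicebus queue create \
  --resource-group resource_group_name_goes_here \
  --namespace-name servicebus_namespace_goes_here \
  --name candidate.queue

az servicebus queue create \
  --resource-group resource_group_name_goes_here \
  --namespace-name servicebus_namespace_goes_here \
  --name election.queue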
Provisioning Cosmos DB
Next, Cosmos DB is provisioned. Like Azure Service Bus, Cosmos DB may be provisioned using several methods, including manually using the Azure Portal, programmatically using Azure Resource Manager (ARM) with PowerShell or Terraform, or using the Azure CLI, which was my choice.
az cosmosdb create \
  --name cosmosdb_instance_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --location "East US=0" \
  --kind MongoDB
The post’s Cosmos DB instance exists within the single East US Region, with no failover. In a real Production environment, you would configure Cosmos DB with multi-region failover. I chose MongoDB as the type of Cosmos DB database account to create. The allowed values are GlobalDocumentDB, MongoDB, Parse. All other settings were left to the default values.
The three Spring microservices each have their own database. You do not have to create the databases in advance of consuming the Voter API. The databases and the database collections will be automatically created when new documents are first inserted by the microservices. Below, the three databases and their collections have been created and populated with documents.
The GitHub project repository also contains three shell scripts to generate sample vote, candidate, and election documents. The scripts will delete any previous documents from the database collections and generate new sets of sample documents. To use, you will have to update the scripts with your own Voter API URL.
MongoDB Aggregation Pipeline
Each of the three Spring microservices uses Spring Data MongoDB, which takes advantage of MongoDB's Aggregation Framework. According to MongoDB, 'the aggregation framework is modeled on the concept of data processing pipelines. Documents enter a multi-stage pipeline that transforms the documents into an aggregated result.' Below is an example of aggregation from the Candidate microservice's VoterController class.
Aggregation aggregation = Aggregation.newAggregation(
    match(Criteria.where("election").is(election)),
    group("candidate").count().as("votes"),
    project("votes").and("candidate").previousOperation(),
    sort(Sort.Direction.DESC, "votes")
);
To use MongoDB’s aggregation framework with Cosmos DB, it is currently necessary to activate the MongoDB Aggregation Pipeline Preview Feature of Cosmos DB. The feature can be activated from the Azure Portal, as shown below.
Cosmos DB Emulator
Be warned, Cosmos DB can be very expensive, even without database traffic or any Production-grade bells and whistles. Be careful when spinning up instances on Azure for learning purposes; the cost adds up quickly! In less than ten days, while writing this post, my cost was almost US$100 for the Voter API's Cosmos DB instance.
I strongly recommend downloading the free Azure Cosmos DB Emulator to develop and test applications from your local machine. Although certainly not as convenient, it will save you the cost of developing for Cosmos DB directly on Azure.
With Cosmos DB, you pay for reserved throughput provisioned and data stored in containers (a collection of documents or a table or a graph). Yes, that’s right, Azure charges you per MongoDB collection, not even per database. Azure Cosmos DB’s current pricing model seems less than ideal for microservice architectures, each with their own database instance.
By default, the reserved throughput, billed as Request Units (RU) per second, or RU/s, is set to 1,000 RU/s per collection. For development and testing, you can reduce each collection to a minimum of 400 RU/s. The Voter API creates five collections at 1,000 RU/s each, or 5,000 RU/s total. Reducing this to a total of 2,000 RU/s makes Cosmos DB marginally more affordable to explore.
Building the AKS Cluster
An existing Azure Resource Group is required for AKS. I chose to use the latest available version of Kubernetes, 1.8.2.
# login to azure
az login \
  --username your_username \
  --password your_password

# create resource group
az group create \
  --resource-group resource_group_name_goes_here \
  --location eastus

# create aks cluster
az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --ssh-key-value path_to_your_public_key \
  --kubernetes-version 1.8.2

# get credentials to access aks cluster
az aks get-credentials \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here

# display cluster's worker nodes
kubectl get nodes --output=wide
By default, AKS will provision a three-node Kubernetes cluster using Azure's Standard D1 v2 Virtual Machines. According to Microsoft, 'D series VMs are general purpose VM sizes [that] provide a balanced CPU-to-memory ratio. They are ideal for testing and development, small to medium databases, and low to medium traffic web servers.' Azure D1 v2 VMs are based on Linux OS images, currently Debian 8 (Jessie), with 1 vCPU and 3.5 GB of memory. By default with AKS, each VM receives 30 GiB of Standard HDD attached storage.
You should always select the type and quantity of the cluster's VMs and their attached storage, optimized for estimated traffic volumes and the specific workloads you are running. This can be done using the --node-count, --node-vm-size, and --node-osdisk-size arguments with the az aks create command.
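For example, a sketch of an az aks create command with explicit node sizing might look like the following; the VM size and disk size shown are illustrative, not recommendations.

# Provision an AKS cluster with explicit node count, VM size, and OS disk size (sketch)
az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2 \
  --node-osdisk-size 64 \
  --kubernetes-version 1.8.2 \
  --generate-ssh-keys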
Deployment
The Voter API resources are deployed to their own Kubernetes Namespace, voter-api. The NGINX Ingress Controller resources are deployed to a different namespace, ingress-nginx. Separate namespaces help organize individual Kubernetes resources and separate different concerns.
Voter API
First, the voter-api namespace is created. Then, five required Kubernetes Secrets are created within the namespace. These secrets all contain sensitive information, such as passwords, that should not be shared. There is one secret for each of the three Cosmos DB database connection strings, one secret for the Azure Service Bus connection string, and one secret for the Let's Encrypt SSL/TLS certificate and private key, used for secure HTTPS access to the Voter API.
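A minimal sketch of creating the namespace and two of the five secrets with kubectl is shown below; the connection string value and certificate paths are hypothetical placeholders, and the actual deployment scripts in the GitHub project may differ.

# Create the Voter API namespace (sketch)
kubectl create namespace voter-api

# Create the Azure Service Bus connection string secret (hypothetical value)
kubectl create secret generic azure-service-bus \
  --namespace voter-api \
  --from-literal=connection-string='Endpoint=sb://your-namespace.servicebus.windows.net/;...'

# Create the TLS secret referenced by the Ingress (paths are hypothetical)
kubectl create secret tls api-voter-demo-secret \
  --namespace voter-api \
  --cert=path/to/fullchain.pem \
  --key=path/to/privkey.pem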
Secrets
The Voter API’s secrets are used to populate environment variables within the pod’s containers. The environment variables are then available for use within the containers. Below is a snippet of the Voter pods resource file showing how the Cosmos DB and Service Bus connection strings secrets are used to populate environment variables.
env:
  - name: AZURE_SERVICE_BUS_CONNECTION_STRING
    valueFrom:
      secretKeyRef:
        name: azure-service-bus
        key: connection-string
  - name: SPRING_DATA_MONGODB_URI
    valueFrom:
      secretKeyRef:
        name: azure-cosmosdb-voter
        key: connection-string
Shown below, the Cosmos DB and Service Bus connection string secrets have been injected into the Voter container and made available as environment variables to the microservice's executable JAR file on start-up. As environment variables, the secrets are visible in plain text. Access to containers should be tightly controlled through Kubernetes RBAC and Azure AD, to ensure sensitive information, such as secrets, remains confidential.
Next, the three Kubernetes ReplicaSet resources, corresponding to the three Spring microservices, are created using Deployment controllers. According to Kubernetes, a Deployment that configures a ReplicaSet is now the recommended way to set up replication. The Deployments specify three replicas of each of the three Spring Services, resulting in a total of nine Kubernetes Pods.
Each pod, by default, will be scheduled on a different node if possible. According to Kubernetes, ‘the scheduler will automatically do a reasonable placement (e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.).’ Note below how each of the three microservice’s three replicas has been scheduled on a different node in the three-node AKS cluster.
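To see the pod-to-node placement yourself, something similar to the following works:

# List the Voter API pods along with the node each was scheduled on
kubectl get pods --namespace voter-api --output wide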
Next, the three corresponding Kubernetes ClusterIP-type Services are created. And lastly, the Kubernetes Ingress is created. According to Kubernetes, the Ingress resource is an API object that manages external access to the services in a cluster, typically HTTP. Ingress provides load balancing, SSL termination, and name-based virtual hosting.
The Ingress configuration contains the routing rules used with the NGINX Ingress Controller. Shown below are the routing rules for each of the three microservices within the Voter API. Incoming API requests are routed to the appropriate pod and service port by NGINX.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: voter-ingress
  namespace: voter-api
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - api.voter-demo.com
    secretName: api-voter-demo-secret
  rules:
  - http:
      paths:
      - path: /candidate
        backend:
          serviceName: candidate
          servicePort: 8080
      - path: /election
        backend:
          serviceName: election
          servicePort: 8080
      - path: /voter
        backend:
          serviceName: voter
          servicePort: 8080
The screengrab below shows all of the Voter API resources created on AKS.
NGINX Ingress Controller
After completing the deployment of the Voter API, the NGINX Ingress Controller is created. It starts with creating the ingress-nginx namespace. Next, the NGINX Ingress Controller is created, consisting of the Controller itself, three Kubernetes ConfigMap resources, and a default back-end application. The Controller and the back-end each have their own Service resources. Like the Voter API, each has three replicas, for a total of six pods. Together, the Ingress resource and the NGINX Ingress Controller manage traffic to the Spring microservices.
The screengrab below shows all of the NGINX Ingress Controller resources created on AKS.
The NGINX Ingress Controller Service, shown above, has an external public IP address associated with it. This is because that Service is of type LoadBalancer. External requests to the Voter API will be routed through the NGINX Ingress Controller, on this IP address.
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
If you are only using HTTPS, not HTTP, then the references to HTTP and port 80 in the Ingress configuration are unnecessary. The NGINX Ingress Controller’s resources are explained in detail in the GitHub documentation, along with further configuration instructions.
DNS
To provide convenient access to the Voter API and the Voter Client, my domain, voter-demo.com, is associated with the public IP address associated with the Voter API Ingress Controller and with the public IP address associated with the Voter Client Azure Web App. DNS configuration is done through Azure's DNS Zone resource.
The two TXT type records might not look as familiar as the SOA, NS, and A type records. The TXT records are required to associate the domain entries with the Voter Client Azure Web App. Browsing to http://www.voter-demo.com or simply http://voter-demo.com brings up the Voter Client.
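If you prefer the CLI to the Portal, the A and TXT records can also be created with az network dns commands. Below is a sketch; the IP address, record-set name, and TXT verification value are hypothetical placeholders.

# Point the api subdomain at the NGINX Ingress Controller's public IP (sketch)
az network dns record-set a add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name api \
  --ipv4-address 203.0.113.10

# Add a TXT record used to associate the domain with the Azure Web App (hypothetical value)
az network dns record-set txt add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name txt_record_name_goes_here \
  --value "your-web-app-domain-verification-value"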
The Client sends and receives data via the Voter API, available securely at https://api.voter-demo.com.
Routing API Requests
With the Pods, Services, Ingress, and NGINX Ingress Controller created and configured, as well as the Azure Layer 4 Load Balancer and DNS Zone, HTTP requests from API consumers are properly and securely routed to the appropriate microservices. In the example below, three back-to-back requests are made to the voter/info API endpoint. HTTP requests are properly routed to one of the three Voter pod replicas using the default round-robin algorithm, as proven by observing the different hostnames (pod names) and IP addresses (private pod IPs) in each HTTP response.
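A quick way to observe the round-robin behavior from the command line is a short loop of requests, similar to the sketch below.

# Make three back-to-back requests and compare the hostname/IP in each response
for i in 1 2 3; do
  curl -s https://api.voter-demo.com/voter/info
  echo
done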
Final Architecture
Shown below is the final Voter API Azure architecture. To simplify the diagram, I have deliberately left out the three microservice’s ClusterIP-type Services, the three default back-end application pods, and the default back-end application’s ClusterIP-type Service. All resources shown below are within the single East US Azure region, except DNS, which is a global resource.
Shown below is the new Azure Resource Group created by Azure during the AKS provisioning process. The Resource Group contains the various resources required to support the AKS cluster, NGINX Ingress Controller, and the Voter API. Necessary Azure resources were automatically provisioned when I provisioned AKS and when I created the new Voter API and NGINX resources.
In addition to the Resource Group above, the original Resource Group contains the AKS Container Service resource itself, Service Bus, Cosmos DB, and the DNS Zone resource.
The Voter Client Web App, consisting of the Azure App Service and App Service plan resource, is located in a third, separate Resource Group, not shown here.
Cleaning Up AKS
A nice feature of AKS is that running an az aks delete command will delete all the Azure resources created as part of provisioning AKS, the API, and the Ingress Controller. You will have to delete the Cosmos DB, Service Bus, and DNS Zone resources separately.
az aks delete \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here
Conclusion
Taking advantage of Kubernetes with AKS, and the array of Azure’s enterprise-grade resources, the Voter API was shifted from a simple Docker architecture to a production-ready solution. The Voter API is now easier to manage, horizontally scalable, fault-tolerant, and marginally more secure. It is capable of reliably supporting dozens more microservices, with multiple replicas. The Voter API will handle a high volume of data transactions and event messages.
There is much more that needs to be done to productionalize the Voter API on AKS, including:
- Add multi-region failover of Cosmos DB
- Upgrade to Service Bus Standard or Premium Tier
- Optimize Azure VMs and storage for anticipated traffic volumes and application-specific workloads
- Implement Kubernetes RBAC
- Add monitoring, logging, and alerting with Envoy or similar
- Secure end-to-end TLS communications with Istio or similar
- Secure the API with OAuth and Azure AD
- Automate everything with DevOps – AKS provisioning, testing code, creating resources, updating microservices, and managing data
All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.
First Impressions of AKS, Azure’s New Managed Kubernetes Container Service
Posted by Gary A. Stafford in Azure, Software Development on November 20, 2017
Kubernetes as a Service
On October 24, 2017, less than a month prior to writing this post, Microsoft released the public preview of Managed Kubernetes for Azure Container Service (AKS). According to Microsoft, the goal of AKS is to simplify the deployment, management, and operations of Kubernetes. According to Gabe Monroy, PM Lead, Containers @ Microsoft Azure, in a blog post, AKS ‘features an Azure-hosted control plane, automated upgrades, self-healing, easy scaling.’ Monroy goes on to say, ‘with AKS, customers get the benefit of open source Kubernetes without complexity and operational overhead.’
Unquestionably, Kubernetes has become the leading Container-as-a-Service (CaaS) choice, at least for now. Along with the release of AKS by Microsoft, there have been other recent announcements, which reinforce Kubernetes dominance. In late September, Rancher Labs announced the release of Rancher 2.0. According to Rancher, Rancher 2.0 would be based on Kubernetes. In mid-October, at DockerCon Europe 2017, Docker announced they were integrating Kubernetes into the Docker platform. Even AWS seems to be warming up to Kubernetes, despite their own ECS, according to sources. There are rumors AWS will announce a Kubernetes offering at AWS re:Invent 2017, starting a week from now.
Previewing AKS
Being a big fan of both Azure and Kubernetes, I decided to give AKS a try. I chose to deploy an existing, simple, multi-tier web application, which I had used in several previous posts, including Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ. All the code used for this post is available on GitHub.
Sample Application
The application, the Voter application, is composed of an AngularJS frontend client-side UI, and two Java Spring Boot microservices, both backed by individual MongoDB databases, and fronted with an HAProxy-based API Gateway. The AngularJS UI calls the API Gateway, which in turn calls the Spring services. The two microservices communicate with each other using HTTP-based inter-process communication (IPC). Although I would prefer event-based service-to-service IPC, HTTP-based IPC was simpler to implement, for this post.
Interestingly, the Voter application was designed using Docker Community Edition for Mac and deployed to AWS using Docker Community Edition for AWS. Not only would this be my chance to preview AKS, but also an opportunity to compare the ease of developing for Docker CE on AWS using a Mac, to developing for Kubernetes with AKS using Docker Community Edition for Windows.
Required Software
In order to develop for AKS on my Windows 10 Enterprise workstation, I first made sure I had the latest copies of the following software:
- Docker CE for Windows (v17.11.0)
- Azure CLI (v2.0.21)
- kubectl (v1.8.3)
- kompose (v1.4.0)
- Windows PowerShell (v5.1)
If you are following along with the post, make sure you have the latest version of the Azure CLI, minimally 2.0.21, according to the Azure CLI release notes. Also, I happen to be running the latest version of Docker CE from the Edge Channel. However, either channel’s latest release of Docker CE for Windows should work for this post. Using PowerShell is optional. I prefer PowerShell over working from the Windows Command Prompt, if for nothing else than to preserve my command history, by default.
Kubernetes Resources with Kompose
Originally developed for Docker CE, the Voter application stack was defined in a single Docker Compose file.
version: '3'
services:
  mongodb:
    image: mongo:latest
    command:
      - --smallfiles
    hostname: mongodb
    ports:
      - 27017:27017/tcp
    networks:
      - voter_overlay_net
    volumes:
      - voter_data_vol:/data/db
  candidate:
    image: garystafford/candidate-service:0.2.28
    depends_on:
      - mongodb
    hostname: candidate
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
  voter:
    image: garystafford/voter-service:0.2.104
    depends_on:
      - mongodb
      - candidate
    hostname: voter
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
  client:
    image: garystafford/voter-client:0.2.44
    depends_on:
      - voter
    hostname: client
    ports:
      - 80:8080/tcp
    networks:
      - voter_overlay_net
  gateway:
    image: garystafford/voter-api-gateway:0.2.24
    depends_on:
      - voter
    hostname: gateway
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
networks:
  voter_overlay_net:
    driver: overlay
volumes:
  voter_data_vol:
To work on AKS, the application stack’s configuration needs to be reproduced as Kubernetes configuration files. Instead of writing the configuration files manually, I chose to use kompose. Kompose is described on its website as ‘a conversion tool for Docker Compose to container orchestrators such as Kubernetes.’ Using kompose, I was able to automatically convert the Docker Compose file into analogous Kubernetes resource configuration files.
kompose convert -f docker-compose.yml
Each Docker service in the Docker Compose file was translated into a separate Kubernetes Deployment resource configuration file, as well as a corresponding Service resource configuration file.
For the AngularJS Client Service and the HAProxy API Gateway Service, I had to modify the Service configuration files to switch the Service type to a Load Balancer (type: LoadBalancer). Being a Load Balancer, Kubernetes will assign a publicly accessible IP address to each Service; the reasons for which are explained later in the post.
The MongoDB service requires a persistent storage volume. To accomplish this with Kubernetes, kompose created a PersistentVolumeClaims resource configuration file. I did have to create a corresponding PersistentVolume resource configuration file. It was also necessary to modify the PersistentVolumeClaims resource configuration file, specifying the Storage Class Name as manual, to correspond to the AKS Storage Class configuration (storageClassName: manual).
kind: PersistentVolume
apiVersion: v1
metadata:
  name: voter-data-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
From the original Docker Compose file, containing five Docker services, I ended up with a dozen individual Kubernetes resource configuration files. Individual configuration files are optimal for fine-grain management of Kubernetes resources. The Docker Compose file and the Kubernetes resource configuration files are included in the GitHub project.
git clone \ --branch master --single-branch --depth 1 --no-tags \ https://github.com/garystafford/azure-aks-demo.git
Creating AKS Resources
New AKS Feature Flag
According to Microsoft, to start with AKS, while still a preview, creating new clusters requires a feature flag on your subscription.
az provider register -n Microsoft.ContainerService
Using a brand new Azure account for this demo, I also needed to activate two additional feature flags.
az provider register -n Microsoft.Network az provider register -n Microsoft.Compute
If you are missing required feature flags, you will see errors similar to the one below.
Operation failed with status: ’Bad Request’. Details: Required resource provider registrations Microsoft.Compute,Microsoft.Network are missing.
Resource Group
AKS requires an Azure Resource Group. I chose to create a new Resource Group, using the Azure CLI.
az group create \ --resource-group resource_group_name_goes_here \ --location eastus
New Kubernetes Cluster
Using the aks feature of the Azure CLI version 2.0.21 or later, I provisioned a new Kubernetes cluster. By default, Azure will create a 3-node cluster. You can override the default number of nodes using the --node-count parameter; I chose one node. The version of Kubernetes you choose is also configurable using the --kubernetes-version parameter. I selected the latest Kubernetes version available with AKS, 1.8.2.
az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --node-count 1 \
  --generate-ssh-keys \
  --kubernetes-version 1.8.2
The newly created Azure Resource Group and AKS Kubernetes Cluster were both then visible on the Azure Portal.
In addition to the new Resource Group I created, Azure also created a second Resource Group containing seven Azure resources. These Azure resources include a Virtual Machine (the single Kubernetes node), Network Security Group, Network Interface, Virtual Network, Route Table, Disk, and an Availability Group.
With AKS up and running, I used another Azure CLI command to create a proxy connection to the Kubernetes Dashboard, which was deployed automatically and was running within the new AKS Cluster. The Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters.
az aks browse \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here
Although no applications were deployed to AKS, yet, there were several Kubernetes components running within the AKS Cluster. Kubernetes components running within the kube-system Namespace, included heapster, kube-dns, kubernetes-dashboard, kube-proxy, kube-svc-redirect, and tunnelfront.
Deploying the Application
MongoDB should be deployed first. Both the Voter and Candidate microservices depend on MongoDB. MongoDB is composed of four Kubernetes resources, a Deployment resource, Service resource, PersistentVolumeClaim resource, and PersistentVolume resource. I used kubectl, the command line interface for running commands against Kubernetes clusters, to create the four MongoDB resources from the configuration files.
kubectl create \
  -f voter-data-vol-persistentvolume.yaml \
  -f voter-data-vol-persistentvolumeclaim.yaml \
  -f mongodb-deployment.yaml \
  -f mongodb-service.yaml
After MongoDB was deployed and running, I created the four remaining Deployment resources, Client, Gateway, Voter, and Candidate, from the Deployment resource configuration files. According to Kubernetes, ‘a Deployment controller provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate.’
Lastly, I created the remaining Service resources from the Service resource configuration files. According to Kubernetes, ‘a Service is an abstraction which defines a logical set of Pods and a policy by which to access them.’
Switching back to the Kubernetes Dashboard, the Voter application components were now visible.
There were five Kubernetes Pods, one for each application component. Since there is only one Node in the Kubernetes Cluster, all five Pods were deployed to the same Node. There were also five corresponding Kubernetes Deployments.
Similarly, there were five corresponding Kubernetes ReplicaSets, the next-generation Replication Controller. There were also five corresponding Kubernetes Services. Note the Gateway and Client Services have an External Endpoint (External IP) associated with them. The IPs were created as a result of adding the Load Balancer Service type to their Service resource configuration files, mentioned earlier.
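The external endpoints can also be confirmed from the command line; for example:

# The gateway and client Services should show an EXTERNAL-IP once the
# Azure Load Balancer and public IP addresses have been provisioned
kubectl get services --output wide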
Lastly, note the Persistent Disk Claim for MongoDB, which had been successfully bound.
Switching back to the Azure Portal, following the application deployment, there were now three additional resources in the AKS Resource Group, a new Azure Load Balancer and two new Public IP Addresses. The Load Balancer is used to balance the Client and Gateway Services, which both have public IP addresses.
To confirm the Gateway, Voter, and Candidate Services were reachable, using the public IP address of the Gateway Service, I browsed to HAProxy’s Statistics web page. Note the two backends, candidate and voter. The green color means HAProxy was able to successfully connect to both of these Services.
Accessing the Application
The Voter application's AngularJS UI frontend can be accessed using the Client Service's public IP address. However, this would not be very user-friendly. Even if I brought up the UI, using the public IP, the UI would be unable to connect to the HAProxy API Gateway, and subsequently, the Voter or Candidate Services. Based on the Client's current configuration, the Client is expecting to find the Gateway at api.voter-demo.com:8080.
To make accessing the Client more user-friendly, and to ensure the Client has access to the Gateway, I provisioned an Azure DNS Zone resource for my domain, voter-demo.com. I assigned the DNS Zone to the AKS Resource Group.
Within the new DNS Zone, I created three DNS records. The first record, an Alias (A) record, associated voter-demo.com with the public IP address of the Client Service. I added a second Alias (A) record for the www subdomain, also associating it with the public IP address of the Client Service. The third Alias (A) record associated the api subdomain with the public IP address of the Gateway Service.
At a high level, the Voter application's routing architecture looks as follows. The client's requests to the primary domain or to the api subdomain are resolved to one of the two public IP addresses configured in the load balancer's frontend. The requests are passed to the load balancer's backend pool, containing the single Azure VM, which is the single Kubernetes node, and on to the client or gateway Kubernetes Service. From there, requests are routed to one of the appropriate Kubernetes Pods, containing the containerized application components, client or gateway.
Browsing to either http://voter-demo.com or http://www.voter-demo.com should bring up the Voter app UI (oh look, Hillary won this time…).
Using Chrome's Developer Tools, observe that when a new vote is placed, an HTTP POST is made to the Gateway, on the /voter/votes endpoint, http://api.voter-demo.com:8080/voter/votes. The Gateway then proxies this request to the Voter Service, at http://voter:8080/voter/votes. Since the Gateway and Voter Services both run within the same Cluster, the Gateway is able to use the Voter Service's name to address it, using Kubernetes kube-dns.
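To confirm kube-dns name resolution inside the cluster, a throwaway pod can be used; a sketch is shown below (the voter Service name comes from the Kubernetes Service resources created earlier).

# Resolve the voter Service name from inside the cluster using a temporary busybox pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup voter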
Conclusion
In the past, I have developed, deployed, and managed containerized applications, using Rancher, AWS, AWS ECS, native Kubernetes, RedHat OpenShift, Docker Enterprise Edition, and Docker Community Edition. Based on my experience, and given my limited testing with Azure’s public preview of AKS, I am very impressed. Creating the Kubernetes Cluster could not have been easier. Scaling the Cluster, adding Persistent Volumes, and upgrading Kubernetes, is equally as easy. Conveniently, AKS integrates with other Kubernetes tools, like kubectl, kompose, and Helm.
I did run into some minor issues with the AKS Preview, such as being unable to connect to an earlier Cluster after upgrading from the default Kubernetes version to 1.8.2. I also experienced frequent disconnects when proxying to the Kubernetes Dashboard. I am sure the AKS Preview bugs will be worked out by the time AKS is officially released.
In addition to the many advantages of Kubernetes as a CaaS, a huge advantage of using AKS is the ability to easily integrate Azure's many enterprise-grade compute, networking, database, caching, storage, and messaging resources. It would require minimal effort to swap out the Voter application's single containerized version of MongoDB with a highly performant and available instance of Azure Cosmos DB. Similarly, it would be relatively easy to swap out the single containerized version of HAProxy with a fully-featured and secure instance of Azure API Management. The current version of the Voter application relies on RabbitMQ for service-to-service IPC, versus this earlier application version's reliance on HTTP-based IPC. It would be fairly simple to swap RabbitMQ for Azure Service Bus.
Lastly, AKS easily integrates with leading Development and DevOps tooling and processes. Building, managing, and deploying applications to AKS, is possible with Visual Studio, VSTS, Jenkins, Terraform, and Chef, according to Microsoft.
References
A few good references to get started with AKS:
- Container Orchestration Simplified with Managed Kubernetes in Azure Container Service (AKS)
- Deploy an Azure Container Service (AKS) cluster
- Prepare application for Azure Container Service (AKS)
- Kubernetes Basics
- kubectl Cheat Sheet
All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.
Developing Applications for the Cloud with Azure App Services and MongoDB Atlas
Posted by Gary A. Stafford in .NET Development, Azure, Cloud, Software Development on November 1, 2017
Shift Left Cloud
The continued growth of compute services from leading Cloud Service Providers (CSPs) like Microsoft, Amazon, and Google is transforming the architecture of modern software applications, as well as the software development lifecycle (SDLC). Self-service access to fully-managed, reasonably-priced, secure compute has significantly increased developer productivity. At the same time, cloud-based access to cutting-edge technologies, like Artificial Intelligence (AI), Internet of Things (IoT), Machine Learning, and Data Analytics, has accelerated the capabilities of modern applications. Finally, as CSPs become increasingly platform agnostic, Developers are no longer limited to a single technology stack or operating system. Today, Developers are solving complex problems with multi-cloud, multi-OS polyglot solutions.
Developers now leverage the Cloud from the very start of the software development process; shift left Cloud, if you will*. Developers are no longer limited to building and testing software on local workstations or on-premise servers, then throwing it over the wall to Operations for deployment to the Cloud. Developers using Azure, AWS, and GCP, develop, build, test, and deploy their code directly to the Cloud. Existing organizations are rapidly moving development environments from on-premise to the Cloud. New organizations are born in the Cloud, without the burden of legacy on-premise data-centers and servers under desks to manage.
Example Application
To demonstrate the ease of developing a modern application for the Cloud, let’s explore a simple API-based, NoSQL-backed web application. The application, The .NET Diner, simulates a rudimentary restaurant menu ordering interface. It consists of a single-page application (SPA) and two microservices backed by MongoDB. For simplicity, there is no API Gateway between the UI and the two services, as normally would be in place. An earlier version of this application was used in two previous posts, including Cloud-based Continuous Integration and Deployment for .NET Development.
The original restaurant order application was written with JQuery and RESTful .NET WCF Services. The new application, used in this post, has been completely re-written and modernized. The web-based user interface (UI) is written with Google’s Angular 4 framework using TypeScript. The UI relies on a microservices-based API, built with C# using Microsoft’s Web API 2 and .NET 4.7. The services rely on MongoDB for data persistence.
All code for this project is available on GitHub within two projects, one for the Angular UI and another for the C# services. The entire application can easily be built and run locally on Windows using MongoDB Community Edition. Alternately, to run the application in the Cloud, you will require an Azure and MongoDB Atlas account.
This post is primarily about the development experience. For brevity, the post will not delve into security, DevOps practices for CI/CD, and the complexities of staging and releasing code to Production.
Cross-Platform Development
The API, consisting of a set of C# microservices, was developed with Microsoft Visual Studio Community 2017 on Windows 10. Visual Studio touts itself as a full-featured Integrated Development Environment (IDE) for Android, iOS, Windows, web, and cloud. Visual Studio is easily integrated with Azure, AWS, and Google, through the use of Extensions. Visual Studio is an ideal IDE for cloud-centric application development.
To simulate a typical mixed-platform development environment, the Angular UI front-end was developed with JetBrains WebStorm 2017 on Mac OS X. WebStorm touts itself as a powerful IDE for modern JavaScript development.
Other tools used to develop the application include Git and GitHub for source code, MongoDB Community Edition for local database development, and Postman for API development and testing, both locally and on Azure. All the development tools used in the post are cross-platform. Versions of WebStorm, Visual Studio, MongoDB, Postman, Git, Node.js, npm, and Bash are all available for Mac, Windows, and Linux. Cross-platform flexibility is key when developing modern multi-OS polyglot applications.
Postman
Postman was used to build, test, and document the application’s API. Postman is an excellent choice for developing RESTful APIs. With Postman, you define Collections of HTTP requests for each of your APIs. You then define Environments, such as Development, Test, and Production, against which you will execute the Collections of HTTP requests. Each environment consists of environment-specific variables. Those variables can be used to define API URLs and as request parameters.
Not only can you define static variables, Postman’s runtime, built on Node.js, is scriptable using JavaScript, allowing you to programmatically set dynamic variables, based on the results of HTTP requests and responses, as demonstrated below.
Postman also allows you to write and run automated API integration tests, as well as perform load testing, as shown below.
Azure App Services
The Angular browser-based UI and the C# microservices will be deployed to Azure using the Azure App Service. Azure App Service is nearly identical to AWS Elastic Beanstalk and Google App Engine. According to Microsoft, Azure App Service allows Developers to quickly build, deploy, and scale enterprise-grade web, mobile, and API apps, running on Windows or Linux, using .NET, .NET Core, Java, Ruby, Node.js, PHP, and Python.
App Service is a fully-managed, turn-key platform. Azure takes care of infrastructure maintenance and load balancing. App Service easily integrates with other Azure services, such as API Management, Queue Storage, Azure Active Directory (AD), Cosmos DB, and Application Insights. Microsoft suggests evaluating the following four criteria when considering Azure App Services:
- You want to deploy a web application that’s accessible through the Internet.
- You want to automatically scale your web application according to demand without needing to redeploy.
- You don’t want to maintain server infrastructure (including software updates).
- You don’t need any machine-level customizations on the servers that host your web application.
There are currently four types of Azure App Services, which are Web Apps, Web Apps for Containers, Mobile Apps, and API Apps. The application in this post will use the Azure Web Apps for the Angular browser-based UI and Azure API Apps for the C# microservices.
MongoDB Atlas
Each of the C# microservices has a separate MongoDB database. In the Cloud, the services use MongoDB Atlas, a secure, highly-available, and scalable cloud-hosted MongoDB service. Cloud-based databases, like Atlas, are often referred to as Database as a Service (DBaaS). Atlas is a Cloud-based alternative to traditional on-premise databases, as well as equivalent CSP-based solutions, such as Amazon DynamoDB, GCP Cloud Bigtable, and Azure Cosmos DB.
Atlas is an example of a SaaS provider that offers a single service or a small set of closely related services, as an alternative to the big CSPs' equivalent services. Similar providers in this category include CloudAMQP (RabbitMQ as a Service), ClearDB (MySQL DBaaS), Akamai (Content Delivery Network), and Oracle Database Cloud Service (Oracle Database, RAC, and Exadata as a Service). Many of these providers, such as Atlas, are themselves hosted on AWS or other CSPs.
There are three pricing levels for MongoDB Atlas: Free, Essential, and Professional. To follow along with this post, the Free level is sufficient. According to MongoDB, with the Free account level, you get 512 MB of storage with shared RAM, a highly-available 3-node replica set, end-to-end encryption, secure authentication, fully managed upgrades, monitoring and alerts, and a management API. Atlas provides the ability to upgrade your account and CSP specifics at any time.
Once you register for an Atlas account, you will be able to log into Atlas, set up your users, whitelist your IP addresses for security, and obtain necessary connection information. You will need this connection information in the next section to configure the Azure API Apps.
With the Free Atlas tier, you can view detailed Metrics about database cluster activity. However, with the free tier, you do not get access to Real-Time data insights or the ability to use the Data Explorer to view your data through the Atlas UI.
Azure API Apps
The example application's API consists of two RESTful microservices built with C#, the RestaurantMenu service and the RestaurantOrder service. Both services are deployed as Azure API Apps. API Apps is a fully-managed platform. Azure performs OS patching, capacity provisioning, server management, and load balancing.
Microsoft Visual Studio has done an excellent job providing Extensions to make cloud integration a breeze. I will be using Visual Studio Tools for Azure in this post. Similar to how you create a Publish Profile for deploying applications to Internet Information Services (IIS), you create a Publish Profile for Azure App Services. Using the step-by-step user interface, you create a Microsoft Azure App Service Web Deploy Publish Profile for each service. To create a new Profile, choose the Microsoft Azure App Service Target.
You must be connected to your Azure account to create the Publish Profile. Give the service an App Name, choose your Subscription, and select or create a Resource Group and an App Service Plan.
The App Service Plan defines the Location and Size for your API App container; these will determine the cost of the compute. I suggest putting the two API Apps and the Web App in the same location, in this case, East US.
The Publish Profile is now available for deploying the services to Azure. No command line interaction is required. The services can be built and published to Azure with a single click from within Visual Studio.
Configuration Management
Azure App Services is highly configurable. For example, each API App requires a different configuration, in each environment, to connect to different instances of MongoDB Atlas databases. For security, sensitive Atlas credentials are not stored in the source code. The Atlas URL and sensitive credentials are stored in App Settings on Azure. For this post, the settings were input directly into the Azure UI, as shown below. You will need to input your own Atlas URL and credentials.
The compiled C# services expect certain environment variables to be present at runtime to connect to MongoDB Atlas. These are provided through Azure’s App Settings. Access to the App Settings in Azure should be tightly controlled through Azure AD and fine-grained Azure Role-Based Access Control (RBAC) service.
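App Settings can also be managed with the Azure CLI instead of the Portal UI. Below is a sketch; the setting name and connection string are hypothetical placeholders, not the exact variables the services expect.

# Set MongoDB Atlas connection details as App Settings on an API App (sketch; hypothetical names)
az webapp config appsettings set \
  --resource-group resource_group_name_goes_here \
  --name api_app_name_goes_here \
  --settings MONGODB_ATLAS_URI="mongodb://user:password@your-cluster.mongodb.net:27017/?ssl=true"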
CORS
If you want to deploy the application from this post to Azure, there is one code change you will need to make to each service, which deals with Cross-Origin Resource Sharing (CORS). The services are currently configured to only accept traffic from my temporary Angular UI App Service's URL. You will need to adjust the CORS configuration in the \App_Start\WebApiConfig.cs file in each service, to match your own App Service's new URL.
Angular UI Web App
The Angular UI application will be deployed as an Azure Web App, one of four types of Azure App Services, mentioned previously. According to Microsoft, Web Apps allow Developers to program in their favorite languages, including .NET, Java, Node.js, PHP, and Python on Windows or .NET Core, Node.js, PHP or Ruby on Linux. Web Apps is a fully-managed platform. Azure performs OS patching, capacity provisioning, server management, and load balancing.
Using the Azure Portal, setting up a new Web App for the Angular UI is simple.
Provide an App Name, Subscription, Resource Group, OS Type, and select whether or not you want Application Insights enabled for the Web App.
Although an amazing IDE for web development, WebStorm lacks some of the direct integrations with Azure, AWS, and Google, available with other IDE’s, like Visual Studio. Since the Angular application was developed in WebStorm on Mac, we will take advantage of Azure App Service’s Continuous Deployment feature.
Azure Web Apps can be deployed automatically from most common source code management platforms, including Visual Studio Team Services (VSTS), GitHub, Bitbucket, OneDrive, and local Git repositories.
For this post, I chose GitHub. To configure deployment from GitHub, select the GitHub Account, Organization, Project, and Branch from which Azure will deploy the Angular Web App.
When you configure GitHub in the Azure Portal, Azure becomes an authorized OAuth App on the GitHub account. Azure creates a webhook, which fires each time files are pushed (git push) to the dist branch of the GitHub project’s repository.
Using the ng build --dev --env=prod command, the Angular UI application must first be transpiled from TypeScript to JavaScript and bundled for deployment. The ng build command can be run from within WebStorm or from the command line.
The --env=prod flag ensures that the Production environment configuration, containing the correct Azure API endpoints, is compiled into the build. This configuration is stored in the \src\environments\environment.prod.ts file, shown below. You will need to update these two endpoints to your own endpoints from the two API Apps you previously deployed to Azure.
Optionally, the code should be optimized for Production by replacing the --dev flag with the --prod flag. Amongst other optimizations, the Production version of the code is uglified using UglifyJS. Note the difference in the build files shown below for Production, as compared to the files above for Development.
Since I chose GitHub for deployment to Azure, I used Git to manually push the local build files to the dist branch on GitHub.
Every time the webhook fires, Azure pulls and deploys the new build, overwriting the previously deployed version, as shown below.
To stage new code and not immediately overwrite running code, Azure has a feature called Deployment slots. According to Microsoft, Deployment slots allow Developers to deploy different versions of Web Apps to different URLs. You can test a certain version and then swap content and configuration between slots. This is likely how you would eventually deploy your code to Production.
Up and Running
Below, the three Azure App Services are shown in the Azure Portal, successfully deployed and running. Note their Status, App Type, Location, and Subscription.
Before exploring the deployed UI, the two Azure API Apps should be tested using Postman. Once the APIs are confirmed to be working properly, the menu data is populated by making an HTTP POST request to the menuitems API, provided by the RestaurantMenuService Azure API App. When the HTTP POST request is made, the RestaurantMenuService stores a set of menu items in the RestaurantMenu Atlas MongoDB database, in the menus collection.
The Angular UI, the RestaurantWeb Azure Web App, is viewed by using the URL provided in the Web App’s Overview tab. The menu items displayed in the drop-down are supplied by an HTTP GET request to the menuitems API, provided by the RestaurantMenuService Azure API App.
Your order is placed through an HTTP POST request to the orders API, provided by the RestaurantOrderService Azure API App. The RestaurantOrderService stores the order in the RestaurantOrder Atlas MongoDB database, in the orders collection. The order details are returned in the response body and displayed in the UI.
Once you have the development version of the application successfully up and running on Atlas and Azure, you can start to configure, test, and deploy additional application versions, as App Services, into higher Azure environments, such as Test, Performance, and eventually, Production.
Monitoring
Azure provides in-depth monitoring and performance analytics capabilities for your deployed applications with services like Application Insights. With Azure’s monitoring resources, you can monitor the live performance of your application and set up alerts for application errors and performance issues. Real-time monitoring is useful when conducting performance tests. Developers can analyze the response time of each API method and optimize the application, Azure configuration, and MongoDB databases before deploying to Production.
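As one hedged example, custom telemetry could be reported from the C# API Apps using the Application Insights SDK’s TelemetryClient (Microsoft.ApplicationInsights NuGet package). The event and metric names below are illustrative assumptions, not part of the post’s actual code.

// Minimal sketch: reporting custom telemetry from a C# API App to Application Insights.
// Event and metric names are illustrative only.
using System;
using Microsoft.ApplicationInsights;

public class OrderTelemetry
{
    private readonly TelemetryClient _telemetry = new TelemetryClient();

    public void TrackOrderPlaced(TimeSpan atlasCallDuration)
    {
        // Custom event, visible in the Application Insights portal
        _telemetry.TrackEvent("OrderPlaced");

        // Custom metric, e.g. how long the MongoDB Atlas write took
        _telemetry.TrackMetric("AtlasOrderWriteMs", atlasCallDuration.TotalMilliseconds);
    }

    public void TrackFailure(Exception ex)
    {
        // Exceptions surface in Application Insights alongside request telemetry
        _telemetry.TrackException(ex);
    }
}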
Conclusion
This post demonstrated how the Cloud has shifted application development to a Cloud-first model. Future posts will demonstrate how an application, such as the app featured in this post, is secured, and how it is continuously built, tested, and deployed, using DevOps practices.
All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.
Cloud-based Continuous Integration and Deployment for .NET Development
Posted by Gary A. Stafford in .NET Development, Build Automation, Client-Side Development, DevOps, Enterprise Software Development, PowerShell Scripting, Software Development on May 25, 2014
Introduction
Whether you are part of a large enterprise development environment, or a member of a small start-up, you are likely working with remote team members. You may be remote, yourself. Developers, testers, web designers, and other team members, commonly work remotely on software projects. Distributed teams, comprised of full-time staff, contractors, and third-party vendors, often work in different buildings, different cities, and even different countries.
If software is no longer strictly developed in-house, why should our software development and integration tools be located in-house? We live in a quickly evolving world of SaaS, PaaS, and IaaS. Popular SaaS development tools include Visual Studio Online, GitHub, BitBucket, Travis-CI, AppVeyor, CloudBees, JIRA, AWS, Microsoft Azure, Nodejitsu, and Heroku, to name just a few. With all these ‘cord-cutting’ tools, there is no longer a need for distributed development teams to be tethered to on-premise tooling, via VPN tunnels and Remote Desktop Connections.
There are many combinations of hosted software development and integration tools available, depending on your technology stack, team size, and budget. In this post, we will explore one such toolchain for .NET development. Using Git, GitHub, AppVeyor, and Microsoft Azure, we will continuously build, test, and deploy a multi-tier .NET solution, without ever leaving Visual Studio. This particular toolchain has strong integration between tools, and will scale to fit most development teams.
Git and GitHub
Git and GitHub are widely used in development today. Visual Studio 2013 has fully-integrated Git support and Visual Studio 2012 has supported Git via a plug-in since early last year. Git is fully compatible with Windows. Additionally, there are several third party tools available to manage Git and GitHub repositories on Windows. These include Git Bash (my favorite), Git GUI, and GitHub for Windows.
GitHub acts as a replacement for your in-house Git server. Developers commit code to their individual local Git project repositories. They then push, pull, and merge code to and from a hosted GitHub repository. For security, GitHub requires a registered username and password to push code. Data transfer between the local Git repository and GitHub is done using HTTPS with SSL certificates or SSH with public-key encryption. GitHub also offers two-factor authentication (2FA). Additionally, for those companies concerned about privacy and added security, GitHub offers private repositories. These plans range in price from $25 to $200 per month, currently.
AppVeyor
AppVeyor’s tagline is ‘Continuous Integration for busy developers’. AppVeyor automates building, testing, and deployment of .NET applications. AppVeyor is similar to Jenkins and Hudson in terms of basic functionality, except AppVeyor is only provided as a SaaS. There are several hosted solutions in the continuous integration and delivery space similar to AppVeyor. They include CloudBees (hosted Jenkins) and Travis-CI. While CloudBees and Travis CI work with several technology stacks, AppVeyor focuses specifically on .NET. Its closest competitor may be Microsoft’s new Visual Studio Online.
Like GitHub, AppVeyor also offers private repositories (spaces for building and testing code). Prices for private repositories currently range from $39 to $319 per month. Private repositories offer both added security and support. AppVeyor integrates nicely with several cloud-based code repositories, including GitHub, BitBucket, Visual Studio Online, and Fog Creek’s Kiln.
Azure
This post demonstrates continuous deployment from AppVeyor to a Windows Server 2012-based Azure VM. The VM has IIS 8.5, Web Deploy 3.5, IIS Web Management Service (WMSVC), and other components and configuration necessary to host the post’s sample Solution. AppVeyor would work just as well with Azure’s other hosting options, as well as with other cloud-based hosting providers, such as AWS or Rackspace, which also support the .NET stack.
Sample Solution
The Visual Studio Solution used for this post was originally developed as part of an earlier post, Consuming Cross-Domain WCF REST Services with jQuery using JSONP. The original Solution, from 2011, demonstrated jQuery’s AJAX capabilities to communicate with a RESTful WCF service, cross-domains, using JSONP. I have since updated and modernized the Solution for this post. The revised Solution is on a new branch (‘rev2014’) on GitHub. Major changes to the Solution include an upgrade from VS2010 to VS2013, the use of Git DVCS, NuGet package management, Web Publish Profiles, Web Essentials for bundling JS and CSS, Twitter Bootstrap, unit testing, and a lot of code refactoring.
The updated VS Solution contains the following four Projects:
- Restaurant – C# Class Library
- RestaurantUnitTests – Unit Test Project
- RestaurantWcfService – C# WCF Service Application
- RestaurantDemoSite – Web Site (JS/HTML5)
The Visual Studio Solution Explorer tab, here, shows all projects contained in the Solution, and the primary files and directories they contain.
As explained in the earlier post, the ‘RestaurantDemoSite’ web site makes calls to the ‘RestaurantWcfService’ WCF service. The WCF service exposes two operations, one that returns the menu (‘GetCurrentMenu’), and the other that accepts an order (‘SendOrder’). For simplicity, orders are stored in the file system as JSON files. No database is required for the Solution. All business logic is contained in the ‘Restaurant’ class library, which is referenced by the WCF service. This architecture is illustrated in this Visual Studio Assembly Dependencies Diagram.
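To make the service’s shape more concrete, below is a hedged sketch of what a REST-enabled WCF service contract with these two operations might look like. The attributes, URI templates, and placeholder types are illustrative assumptions; see the GitHub repository for the actual contract.

// Hedged sketch of a REST-enabled WCF service contract exposing the two
// operations described above. URI templates and types are illustrative only.
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;

// Illustrative placeholder types, not the repository's actual model classes.
public class MenuItem { public string Name { get; set; } public decimal Price { get; set; } }
public class Order { public IList<MenuItem> Items { get; set; } }
public class OrderConfirmation { public string OrderId { get; set; } }

[ServiceContract]
public interface IRestaurantService
{
    // Returns the current menu as JSON
    [OperationContract]
    [WebGet(UriTemplate = "GetCurrentMenu",
        ResponseFormat = WebMessageFormat.Json)]
    IList<MenuItem> GetCurrentMenu();

    // Accepts an order as JSON and returns a confirmation
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "SendOrder",
        RequestFormat = WebMessageFormat.Json,
        ResponseFormat = WebMessageFormat.Json)]
    OrderConfirmation SendOrder(Order order);
}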
Installing and Configuring the Solution
The README.md file in the GitHub repository contains instructions for installing and configuring this Solution. In addition, a set of PowerShell scripts, part of the Solution’s repository, makes the installation and configuration process quick and easy. The scripts handle creating the necessary file directories and environment variables, setting file access permissions, and configuring IIS websites. Make sure to change the values of the environment variables before running the scripts. For reference, below are the contents of several of the supplied scripts. You should use the supplied scripts.
# Create environment variables
[Environment]::SetEnvironmentVariable("AZURE_VM_HOSTNAME", `
    "{YOUR HOSTNAME HERE}", "User")
[Environment]::SetEnvironmentVariable("AZURE_VM_USERNAME", `
    "{YOUR USERNAME HERE}", "User")
[Environment]::SetEnvironmentVariable("AZURE_VM_PASSWORD", `
    "{YOUR PASSWORD HERE}", "User")

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"
if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}
$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"
if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}
$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"
if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}
$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"
if (-not (Test-Path IIS:\Sites\$newSite)){
    New-Website -Name $newSite -Port 9250 -PhysicalPath `
        c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create main website in IIS
$newSite = "RestaurantDemoSite"
if (-not (Test-Path IIS:\Sites\$newSite)){
    New-Website -Name $newSite -Port 9255 -PhysicalPath `
        c:\$newSite -ApplicationPool "DefaultAppPool"
}
Cloud-Based Continuous Integration and Delivery
Webhooks
The first point of integration in our hosted toolchain is between GitHub and AppVeyor. In order for AppVeyor to work with GitHub, we use a Webhook. Webhooks are widely used to communicate events between systems, over HTTP. According to GitHub, ‘every GitHub repository has the option to communicate with a web server whenever the repository is pushed to. These webhooks can be used to update an external issue tracker, trigger CI builds, update a backup mirror, or even deploy to your production server.’ Basically, we give GitHub permission to tell AppVeyor every time code is pushed to the GitHub repository. GitHub sends an HTTP POST to a specific URL, provided by AppVeyor. AppVeyor responds to the POST by cloning the GitHub repository, and building, testing, and deploying the Projects. Below is an example of a webhook for AppVeyor, in GitHub.
Unit Tests
To help illustrate the use of AppVeyor for automated unit testing, the updated Solution contains a Unit Test Project. Every time code is committed to GitHub, AppVeyor will clone and build the Solution, followed by running the set of unit tests shown below. The project’s unit tests test the Restaurant class library (‘restaurant.dll’). The unit tests provide 100% code coverage, as shown in the Visual Studio Code Coverage Results tab, below:
AppVeyor runs the Solution’s automated unit tests using VSTest.Console.exe. VSTest.Console calls the unit test Project’s assembly (‘restaurantunittests.dll’). As shown below, the VSTest command (in light blue) runs all tests, and then displays individual test results, a results summary, and the total test execution time.
VSTest.Console has several command-line options, similar to MSBuild. They can be adjusted to output various levels of feedback on test results. For larger projects, you can selectively choose which pre-defined test sets to run. Test sets need to be set up in the Solution, in advance.
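For illustration, below is a hedged sketch of the style of MSTest unit test that VSTest.Console executes against the Restaurant class library. The class and method names, including the MenuRepository type, are placeholders assumed for this sketch, not the repository’s actual tests.

// Hedged sketch of an MSTest-style unit test like those VSTest.Console runs
// against restaurant.dll. Class, method, and type names are placeholders only.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MenuTests
{
    [TestMethod]
    public void GetCurrentMenu_ReturnsAtLeastOneItem()
    {
        // Arrange: a hypothetical class from the Restaurant library
        var repository = new MenuRepository();

        // Act
        var menu = repository.GetCurrentMenu();

        // Assert
        Assert.IsNotNull(menu);
        Assert.IsTrue(menu.Count > 0, "The menu should contain at least one item.");
    }
}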
Configuring Azure VM
Before we publish the Solution from AppVeyor to Azure, we need to configure the VM. Again, we can use PowerShell to script most of the configuration. Most scripts are the same ones we used to configure our local environment. The README.md file in the GitHub repository contains instructions. The scripts handle creating the necessary file directories, setting file access permissions, configuring the IIS websites, creating the Web Deploy User account, and assigning it in IIS. For reference, below are the contents of several of the supplied scripts. You should use the supplied scripts.
# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"
if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}
$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"
if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}
$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"
if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}
$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"
if (-not (Test-Path IIS:\Sites\$newSite)){
    New-Website -Name $newSite -Port 9250 -PhysicalPath `
        c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create main website in IIS
$newSite = "RestaurantDemoSite"
if (-not (Test-Path IIS:\Sites\$newSite)){
    New-Website -Name $newSite -Port 9255 -PhysicalPath `
        c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create new local non-admin User and Group for Web Deploy
# Main variables (Change these!)
[string]$userName = "USER_NAME_HERE" # mjones
[string]$fullName = "FULL USER NAME HERE" # Mike Jones
[string]$password = "USER_PASSWORD_HERE" # pa$$w0RD!
[string]$groupName = "GROUP_NAME_HERE" # Development

# Create new local user account
[ADSI]$server = "WinNT://$Env:COMPUTERNAME"
$newUser = $server.Create("User", $userName)
$newUser.SetPassword($password)
$newUser.Put("FullName", "$fullName")
$newUser.Put("Description", "$fullName User Account")

# Assign flags to user
[int]$ADS_UF_PASSWD_CANT_CHANGE = 64
[int]$ADS_UF_DONT_EXPIRE_PASSWD = 65536
[int]$COMBINED_FLAG_VALUE = 65600
$flags = $newUser.UserFlags.value -bor $COMBINED_FLAG_VALUE
$newUser.put("userFlags", $flags)
$newUser.SetInfo()

# Create new local group
$newGroup=$server.Create("Group", $groupName)
$newGroup.Put("Description","$groupName Group")
$newGroup.SetInfo()

# Assign user to group
[string]$serverPath = $server.Path
$group = [ADSI]"$serverPath/$groupName, group"
$group.Add("$serverPath/$userName, user")

# Assign local non-admin User in IIS for Web Deploy
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
    $userName, "$Env:COMPUTERNAME\MenuWcfRestService", $FALSE)
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
    $userName, "$Env:COMPUTERNAME\RestaurantDemoSite", $FALSE)
Publish Profiles
The second point of integration in our toolchain is between AppVeyor and the Azure VM. We will be using Microsoft’s Web Deploy to deploy our Solution from AppVeyor to Azure. Web Deploy integrates with the IIS Web Management Service (WMSVC) for remote deployment by non-administrators. I have already configured Web Deploy and created a non-administrative user on the Azure VM. This user’s credentials will be used for deployments. These are the credentials in the username and password environment variables we created.
To continuously deploy to Azure, we will use Web Publish Profiles with Microsoft’s Web Deploy technology. Both the website and WCF service projects contain individual profiles for local development (‘LocalMachine’), as well as deployment to Azure (‘AzureVM’). The ‘AzureVM’ profiles contain all the configuration information AppVeyor needs to connect to the Azure VM and deploy the website and WCF service.
The easiest way to create a profile is by right-clicking on the project and selecting the ‘Publish…’ and ‘Publish Web Site’ menu items. Using the Publish Web wizard, you can quickly build and validate a profile.
Each profile in the above Profile drop-down represents a ‘.pubxml’ file. The Publish Web wizard is merely a visual interface to many of the basic configurable options found in the Publish Profile’s ‘.pubxml’ file. The .pubxml profile files can be found in the Solution Explorer. For the website, profiles are in the ‘App_Data’ directory (i.e. ‘Restaurant\RestaurantDemoSite\App_Data\PublishProfiles\AzureVM.pubxml’). For the WCF service, profiles are in the ‘Properties’ directory (i.e. ‘Restaurant\RestaurantWcfService\Properties\PublishProfiles\AzureVM.pubxml’).
As an example, below are the contents of the ‘LocalMachine’ profile for the WCF service (‘LocalMachine.pubxml’). This is about as simple as a profile gets. Note since we are deploying locally, the profile is configured to open the main page of the website in a browser, after deployment; a helpful time-saver during development.
<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>FileSystem</WebPublishMethod>
    <LastUsedBuildConfiguration>Debug</LastUsedBuildConfiguration>
    <LastUsedPlatform>Any CPU</LastUsedPlatform>
    <SiteUrlToLaunchAfterPublish>http://localhost:9250/RestaurantService.svc/help</SiteUrlToLaunchAfterPublish>
    <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
    <ExcludeApp_Data>True</ExcludeApp_Data>
    <publishUrl>C:\MenuWcfRestService</publishUrl>
    <DeleteExistingFiles>True</DeleteExistingFiles>
  </PropertyGroup>
</Project>
A key change we will make is to use environment variables in place of sensitive configuration values in the ‘AzureVM’ Publish Profiles. The Web Publish wizard does not allow this change, so we must edit the ‘AzureVM.pubxml’ file for both the website and the WCF service. We will replace the hostname of the server to which we will deploy the projects with a variable (i.e. AZURE_VM_HOSTNAME = ‘MyAzurePublicServer.net’). We will also replace the username and password used to access the deployment destination. This way, someone with access to the Solution’s source code cannot obtain any sensitive information that would allow them to compromise your site. Note the use of the ‘AZURE_VM_HOSTNAME’ and ‘AZURE_VM_USERNAME’ environment variables, shown below.
<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <LastUsedBuildConfiguration>AppVeyor</LastUsedBuildConfiguration>
    <LastUsedPlatform>Any CPU</LastUsedPlatform>
    <SiteUrlToLaunchAfterPublish />
    <LaunchSiteAfterPublish>False</LaunchSiteAfterPublish>
    <ExcludeApp_Data>True</ExcludeApp_Data>
    <MSDeployServiceURL>https://$(AZURE_VM_HOSTNAME):8172/msdeploy.axd</MSDeployServiceURL>
    <DeployIisAppPath>MenuWcfRestService</DeployIisAppPath>
    <RemoteSitePhysicalPath />
    <SkipExtraFilesOnServer>False</SkipExtraFilesOnServer>
    <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
    <EnableMSDeployBackup>True</EnableMSDeployBackup>
    <UserName>$(AZURE_VM_USERNAME)</UserName>
    <_SavePWD>False</_SavePWD>
    <_DestinationType>AzureVirtualMachine</_DestinationType>
  </PropertyGroup>
</Project>
The downside of adding environment variables to the ‘AzureVM’ profiles is that the Publish Web wizard within Visual Studio will no longer allow us to deploy using those profiles. As demonstrated below, after substituting variables for actual values, the ‘Server’ and ‘User name’ values will no longer display properly. We can confirm this by trying to validate the connection, which fails. This does not indicate your environment variable values are incorrect, only that Visual Studio can no longer correctly parse the ‘AzureVM.pubxml’ file and display it properly in the IDE. No big deal…
We can use the command line or PowerShell to deploy with the ‘AzureVM’ profiles. AppVeyor accepts both command line input, as well as PowerShell for most tasks. All examples in this post and in the GitHub repository use PowerShell.
To build and deploy (publish) to Azure from the command line or PowerShell, we will use MSBuild. Below are the MSBuild commands used by AppVeyor to build our Solution, and then deploy our Solution to Azure. The first two MSBuild commands build the WCF service and the website. The second two deploy them to Azure. There are several ways you could construct these commands to successfully build and deploy this Solution. I found these commands to be the most succinct. I have split the build and the deploy functions so that AppVeyor can run the automated unit tests in between. If the tests don’t pass, we don’t want to deploy the code.
# Build WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
    /p:Configuration=AppVeyor /verbosity:minimal /nologo

# Build website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
    /p:Configuration=Release /verbosity:minimal /nologo

Write-Host "*** Solution builds complete."
# Deploy WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
    /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=AppVeyor `
    /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
    /verbosity:minimal /nologo

# Deploy website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
    /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=Release `
    /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
    /verbosity:minimal /nologo

Write-Host "*** Solution deployments complete."
Below is the output from AppVeyor showing the WCF Service and website’s deployment to Azure. Deployment is the last step in the continuous delivery process. At this point, the Solution had already been built and the automated unit tests had completed successfully.
Below is the final view of the sample Solution’s WCF service and web site deployed to IIS 8.5 on the Azure VM.
Links
- Introduction to Web Deploy (see ‘How does it work?’ diagram on non-admin deployments)
- ASP.NET Web Deployment using Visual Studio: Command Line Deployment
- IIS: Enable IIS remote management
- Sayed Ibrahim Hashimi’s Blog
- Continuous Integration and Continuous Delivery
- How to: Edit Deployment Settings in Publish Profile (.pubxml) Files