
End-to-End Data Discovery, Observability, and Governance on AWS with LinkedIn’s Open-source DataHub

Use DataHub’s data catalog capabilities to collect, organize, enrich, and search for metadata across multiple platforms

Introduction

According to Shirshanka Das, Founder of LinkedIn DataHub, Apache Gobblin, and Acryl Data, one of the simplest definitions for a data catalog can be found on the Oracle website: “Simply put, a data catalog is an organized inventory of data assets in the organization. It uses metadata to help organizations manage their data. It also helps data professionals collect, organize, access, and enrich metadata to support data discovery and governance.”

Another succinct description of a data catalog’s purpose comes from Alation: “a collection of metadata, combined with data management and search tools, that helps analysts and other data users to find the data that they need, serves as an inventory of available data, and provides information to evaluate the fitness of data for intended uses.”

Working with many organizations in the area of Analytics, one of the more common requests I receive concerns choosing and implementing a data catalog. Organizations have datasources hosted in corporate data centers, on AWS, by SaaS providers, and with other Cloud Service Providers. Several of these organizations have recently gravitated to DataHub, the open-source metadata platform for the modern data stack, originally developed by LinkedIn.

View of DataHub’s home screen showing a variety of datasources

In this post, we will explore the capabilities of DataHub to build a centralized data catalog on AWS for datasources hosted in multiple AWS accounts, SaaS providers, cloud service providers, and corporate data centers. I will demonstrate how to build a DataHub data catalog using out-of-the-box data source plugins for automated metadata ingestion.

Another example of searching for cataloged entities in DataHub’s browser-based UI

Data Catalog Competitors

Data catalogs are not new; technologies such as data dictionaries have been around as far back as the 1980s. Gartner publishes its Metadata Management (EMM) Solutions Reviews and Ratings and Metadata Management Magic Quadrant. These reports contain a comprehensive list of traditional commercial enterprise players, modern cloud-native SaaS vendors, and Cloud Service Provider (CSP) offerings. DBMS Tools also hosts a comprehensive list of 30 data catalogs. A sampling of current data catalogs includes:

Open Source Software

Commercial

Cloud Service Providers

Data Catalog Features

DataHub describes itself as “a modern data catalog built to enable end-to-end data discovery, data observability, and data governance.” Sorting through vendors’ marketing jargon and hype, standard features of leading data catalogs include:

  • Metadata ingestion
  • Data discovery
  • Data governance
  • Data observability
  • Data lineage
  • Data dictionary
  • Data classification
  • Usage/popularity statistics
  • Sensitive data handling
  • Data fitness (aka data quality or data profiling)
  • Manage both technical and business metadata
  • Business glossary
  • Tagging
  • Natively supported datasource integrations
  • Advanced metadata search
  • Fine-grained authentication and authorization
  • UI- and API-based interaction

Datasources

When considering a data catalog solution, in my experience, the most common datasources that customers want to discover, inventory, and search include:

  • Relational databases and other OLTP datasources such as PostgreSQL, MySQL, Microsoft SQL Server, and Oracle
  • Cloud Data Warehouses and other OLAP datasources such as Amazon Redshift, Snowflake, and Google BigQuery
  • NoSQL datasources such as MongoDB, MongoDB Atlas, and Azure Cosmos DB
  • Persistent event-streaming platforms such as Apache Kafka (Amazon MSK and Confluent)
  • Distributed storage datasets (e.g., Data Lakes) such as Amazon S3, Apache Hive, and AWS Glue Data Catalogs
  • Business Intelligence (BI), dashboards, and data visualization sources such as Looker, Tableau, and Microsoft Power BI
  • ETL sources, such as Apache Spark, Apache Airflow, Apache NiFi, and dbt

DataHub on AWS

DataHub’s convenient AWS setup guide covers options to deploy DataHub to AWS. For this post, I have hosted DataHub on Kubernetes, using Amazon Elastic Kubernetes Service (Amazon EKS). Alternatively, you could choose Google Kubernetes Engine (GKE) on Google Cloud or Azure Kubernetes Service (AKS) on Microsoft Azure.
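For reference, a small EKS cluster, similar to the one used for this post, could be provisioned with eksctl. This is only a sketch; the cluster name, region, and node sizing below are all my own assumptions, not values from the demonstration:

eksctl create cluster \
  --name datahub-demo \
  --region us-east-1 \
  --node-type m5.xlarge \
  --nodes 3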

Conveniently, DataHub offers a Helm chart, making deployment to Kubernetes straightforward. Furthermore, Helm charts are easily integrated with popular CI/CD tools. For this post, I’ve used ArgoCD, the declarative GitOps continuous delivery tool for Kubernetes, to deploy the DataHub Helm charts to Amazon EKS.

ArgoCD UI showing DataHub and its dependencies deployed to Amazon EKS

According to the documentation, DataHub consists of four main components: GMS, MAE Consumer (optional), MCE Consumer (optional), and Frontend. Kubernetes deployment for each of the components is defined as sub-charts under the main DataHub Helm chart.

External Storage Layer Dependencies

Four external storage layer dependencies power the main DataHub components: Kafka, Local DB (MySQL, Postgres, or MariaDB), Search Index (Elasticsearch), and Graph Index (Neo4j or Elasticsearch). DataHub has provided a separate DataHub Prerequisites Helm chart for the dependencies. The dependencies must be deployed before deploying DataHub.
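Although this post deploys the charts with ArgoCD, a direct Helm installation is a useful reference. Below is a minimal sketch, in which the release and namespace names are my own assumptions:

# Hypothetical sketch: install DataHub’s dependencies, then DataHub itself, with Helm
helm repo add datahub https://helm.datahubproject.io/
helm install prerequisites datahub/datahub-prerequisites \
  --namespace datahub --create-namespace
helm install datahub datahub/datahub --namespace datahub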

Alternatively, you can substitute AWS managed services for the external storage layer dependencies, which is also detailed in the Deploying to AWS documentation. AWS managed service dependency substitutions include Amazon RDS for MySQL, Amazon OpenSearch (formerly Amazon Elasticsearch), and Amazon Managed Streaming for Apache Kafka (Amazon MSK). According to DataHub, support for using AWS Neptune as the Graph Index is coming soon.

DataHub CLI and Plug-ins

DataHub comes with the datahub CLI, allowing you to perform many common operations on the command line. You can install and use the DataHub CLI within your development environment or integrate it with your CI/CD tooling.

Available DataHub CLI commands

DataHub uses a plugin architecture. Plugins allow you to install only the datasource dependencies you need. For example, if you want to ingest metadata from Amazon Athena, just install the Athena plugin: pip install 'acryl-datahub[athena]'. DataHub Source, Sink, and Transformer plugins can be displayed using the datahub check plugins CLI command.
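As a quick sketch, installing the CLI with a plugin and running a recipe from the command line looks like this; the recipe file name is an assumption:

pip install 'acryl-datahub[athena]'   # install the CLI with the Athena source plugin
datahub version                       # confirm the installation
datahub check plugins                 # list installed Source, Sink, and Transformer plugins
datahub ingest -c recipe.yml          # run an ingestion pipeline from a recipe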

Example list of DataHub Source plugins installed
Example list of DataHub Sink and Transformer plugins installed

Secure Metadata Ingestion

Often, datasources are not externally accessible for security reasons. Further, many datasources may not be accessible to individual users, especially in higher environments like UAT, Staging, and Production. They are only accessible to applications or CI/CD tooling. To overcome these limitations when extracting metadata with DataHub, I prefer to perform my DataHub-related development and testing locally but execute all DataHub ingestion securely on AWS.

In my local development environment, I use JetBrains PyCharm to author the Python and YAML-based DataHub configuration files and ingestion pipeline recipes, then commit those files to git and push them to a private GitHub repository. Finally, I use GitHub Actions to test DataHub files.

To run DataHub ingestion jobs and push the results to DataHub running in Kubernetes on Amazon EKS, I have built a custom Python-based Docker container. The container runs the DataHub CLI, required DataHub plugins, and any additional Python dependencies. The container’s pod has the appropriate AWS IAM permissions, using IAM Roles for Service Accounts (IRSA), to securely access datasources to ingest and the DataHub application.
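A minimal sketch of such a container image, assuming a slim Python base image and a representative set of plugins, might look like the following; the image name and plugin list are assumptions:

# Hypothetical sketch of the ingestion container image
cat <<'EOF' > Dockerfile
FROM python:3.9-slim
# install the DataHub CLI, a representative set of source plugins, and boto3
RUN pip install --no-cache-dir 'acryl-datahub[postgres,redshift,glue]' boto3
ENTRYPOINT ["datahub"]
EOF

docker build -t datahub-ingest:latest .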

Schedule and Monitor Pipelines

Scheduling and managing multiple metadata ingestion jobs on AWS is best handled with Apache Airflow with Amazon Managed Workflows for Apache Airflow (Amazon MWAA). Ingestion jobs run as Airflow DAG tasks, which call the EKS-based DataHub CLI container. With MWAA, datasource connections, credentials, and other sensitive configurations can be kept secure and not be exposed externally or in plain text.

When running the ingestion pipelines on AWS with DataHub, all communications between AWS-based datasources, ingestion jobs running in Airflow, and DataHub, should use secure private IP addressing and DNS resolution instead of transferring metadata over the Internet. Make sure to create all the necessary VPC peering connections, network route table configurations, and VPC endpoints to connect all relevant services.

SaaS services such as Snowflake or MongoDB Atlas, services provided by other Cloud Service Providers such as Google Cloud and Microsoft Azure, and datasources in corporate data centers require alternate networking and security strategies to access metadata securely.

AWS-based DataHub high-level architecture

Markup or Code?

According to the documentation, a DataHub recipe is a configuration file that tells ingestion scripts where to pull data from (source) and where to put it (sink). Recipes normally contain source, sink, and transformers configuration sections. Markup-language-based job automation, written in YAML, JSON, or Domain Specific Languages (DSLs), is often an alternative to writing code. DataHub recipes can be written in YAML. The example recipe shown below is used to ingest metadata from an Amazon RDS for PostgreSQL database, running on AWS.

YAML-based recipes can also use automatic environment variable expansion for convenience, automation, and security. It is considered a best practice to store sensitive configuration values, such as database credentials, in a secure location and reference them as environment variables. For example, note the server: ${DATAHUB_REST_ENDPOINT} entry in the sink section below. The DATAHUB_REST_ENDPOINT environment variable is set ahead of time and re-used for all ingestion jobs. Sensitive database connection information has also been parameterized and stored separately.
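For instance, assuming the recipe below were saved as postgres_recipe.yml, the required variables could be exported and the job run with the CLI; the endpoint and credential values here are placeholders:

export DATAHUB_REST_ENDPOINT="http://<datahub_gms_host>:8080"
export DB_HOST_PORT="demo-instance.abcd1234.us-east-1.rds.amazonaws.com:5432"
export DB_USERNAME="<username_goes_here>"
export DB_PASSWORD="<password_goes_here>"

datahub ingest -c postgres_recipe.yml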

# Purpose: DataHub example recipe for PostgreSQL datasource
# Author: Gary A. Stafford
# Date: March 2022

# see https://datahubproject.io/docs/metadata-ingestion/source_docs/postgres
source:
  type: postgres
  config:
    # Coordinates
    host_port: ${DB_HOST_PORT}
    database: tickit

    # Credentials
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}

    # Options
    profiling:
      enabled: true

    # Environment
    env: DEV

# see https://datahubproject.io/docs/metadata-ingestion/transformers/#adding-a-set-of-tags
transformers:
  - type: "simple_add_dataset_tags"
    config:
      tag_urns:
        - "urn:li:tag:AWS"
        - "urn:li:tag:${ACCOUNT_ID}"
        - "urn:li:tag:us-east-1"
  - type: "pattern_add_dataset_terms"
    config:
      term_pattern:
        rules:
          ".*users.*": ["urn:li:glossaryTerm:Classification.Sensitive"]
  - type: "simple_add_dataset_ownership"
    config:
      owner_urns:
        - "urn:li:corpuser:Database Administrators"
      ownership_type: "DATAOWNER"

# see https://datahubproject.io/docs/metadata-ingestion/sink_docs/datahub for complete documentation
sink:
  type: "datahub-rest"
  config:
    server: ${DATAHUB_REST_ENDPOINT}

# see https://datahubproject.io/docs/metadata-ingestion/source_docs/reporting_telemetry/
pipeline_name: "postgres-pipeline-tickit"
reporting:
  - type: "datahub"
    config:
      datahub_api:
        server: ${DATAHUB_REST_ENDPOINT}

Using Python

You can configure and run a pipeline entirely from within a custom Python script using DataHub’s Python API as an alternative to YAML. Below, we see two programmatic ingestion pipelines, nearly identical to the YAML recipe above, written in Python. Writing ingestion pipeline logic programmatically gives you increased flexibility for automation, error checking, unit testing, and notification. The first is a basic pipeline; the code is functional, but not very Pythonic, secure, scalable, or production-ready.

# Purpose: Simple programmatic DataHub pipeline example
# Author: Gary A. Stafford
# Date: March 2022
# Reference: https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/examples/library/programatic_pipeline.py

from datahub.ingestion.run.pipeline import Pipeline

# The pipeline configuration is similar to the recipe YAML files provided to the CLI tool.
pipeline = Pipeline.create(
    {
        "run_id": "postgres-run",
        "source": {
            "type": "postgres",
            "config": {
                "host_port": "demo-instance.abcd1234.us-east-1.rds.amazonaws.com:5432",
                "database": "tickit",
                "username": "datahub",
                "password": "My5up3r53cr3tPa55w0rd",
                "env": "DEV",
                "profiling": {
                    "enabled": "true"
                }
            }
        },
        "transformers": [
            {
                "type": "simple_add_dataset_tags",
                "config": {
                    "tag_urns": [
                        "urn:li:tag:AWS",
                        "urn:li:tag:111222333444",
                        "urn:li:tag:us-east-1"
                    ]
                }
            },
            {
                "type": "pattern_add_dataset_terms",
                "config": {
                    "term_pattern": {
                        "rules": {
                            ".*users.*": [
                                "urn:li:glossaryTerm:Classification.Sensitive"
                            ]
                        }
                    }
                }
            },
            {
                "type": "simple_add_dataset_ownership",
                "config": {
                    "owner_urns": [
                        "urn:li:corpuser:Database Administrators"
                    ],
                    "ownership_type": "DATAOWNER"
                }
            }
        ],
        "sink": {
            "type": "datahub-rest",
            "config": {
                "server": "http://192.168.111.222:33333"
            }
        }
    }
)

# Run the pipeline and report the results.
pipeline.run()
pipeline.pretty_print_summary()

The second version of the same pipeline is more production-ready. The code is more Pythonic and makes use of error checking, logging, and the AWS Systems Manager (SSM) Parameter Store. Like recipes written in YAML, environment variables can be used for convenience and security. In this example, commonly reused and sensitive connection configuration items have been extracted and placed in the SSM Parameter Store. Additional configuration is pulled from the environment, such as the AWS Account ID and AWS Region. The script loads these values at runtime.
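As a sketch, the parameters referenced by the script could be seeded ahead of time with the AWS CLI. The parameter names below match the script; the values are placeholders. Note that truly sensitive values, such as the password, would normally be stored as the SecureString type and decrypted on read:

aws ssm put-parameter --name /datahub_demo/postgres_host_port_tickit \
  --type String --value "demo-instance.abcd1234.us-east-1.rds.amazonaws.com:5432"

aws ssm put-parameter --name /datahub_demo/postgres_username_tickit \
  --type String --value "<username_goes_here>"

aws ssm put-parameter --name /datahub_demo/postgres_password_tickit \
  --type String --value "<password_goes_here>"

aws ssm put-parameter --name /datahub_demo/datahub_rest_endpoint_public \
  --type String --value "http://<datahub_gms_host>:8080"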

# Purpose: Programmatic DataHub pipeline example
# Author: Gary A. Stafford
# Date: March 2022

import json
import logging

import boto3
from botocore.exceptions import ClientError
from datahub.ingestion.run.pipeline import Pipeline

logging.basicConfig(
    format="[%(asctime)s] %(levelname)s - %(message)s", level=logging.INFO
)


def main():
    sts_client = boto3.client("sts")

    params = get_parameters()
    params["owner"] = "Database Administrators"
    params["environment"] = "DEV"
    params["database"] = "tickit"
    params["region"] = sts_client.meta.region_name
    params["account"] = sts_client.get_caller_identity()["Account"]
    logging.info(f"Params: {json.dumps(params, indent=4, sort_keys=True)}")

    ingestion_pipeline = create_pipeline(params)
    run_pipeline(ingestion_pipeline)


def create_pipeline(params) -> Pipeline:
    """Constructs a Pipeline for a PostgreSQL Source and a DataHub Sink

    :return: instance of datahub.ingestion.run.pipeline
    """

    pipeline = Pipeline.create(
        {
            "run_id": "postgres-run",
            "source": {
                "type": "postgres",
                "config": {
                    "host_port": params.get("/datahub_demo/postgres_host_port_tickit"),
                    "database": params.get("database"),
                    "username": params.get("/datahub_demo/postgres_username_tickit"),
                    "password": params.get("/datahub_demo/postgres_password_tickit"),
                    "profiling": {
                        "enabled": "true"
                    },
                    "env": params.get("environment"),
                }
            },
            "transformers": [
                {
                    "type": "simple_add_dataset_tags",
                    "config": {
                        "tag_urns": [
                            f"urn:li:tag:{params.get('account')}",
                            f"urn:li:tag:{params.get('region')}"
                        ]
                    }
                },
                {
                    "type": "pattern_add_dataset_terms",
                    "config": {
                        "term_pattern": {
                            "rules": {
                                ".*users.*": [
                                    "urn:li:glossaryTerm:Classification.Sensitive"
                                ]
                            }
                        }
                    }
                },
                {
                    "type": "simple_add_dataset_ownership",
                    "config": {
                        "owner_urns": [
                            f"urn:li:corpuser:{params.get('owner')}"
                        ],
                        "ownership_type": "DATAOWNER"
                    }
                }
            ],
            "sink": {
                "type": "datahub-rest",
                "config": {
                    "server": params.get("/datahub_demo/datahub_rest_endpoint_public")
                }
            }
        }
    )

    return pipeline


def run_pipeline(pipeline):
    """Runs the ingestion pipeline and prints summary of the results

    :param pipeline: instance of datahub.ingestion.run.pipeline
    :return:
    """

    pipeline.run()
    pipeline.pretty_print_summary()


def get_parameters() -> dict:
    """Load parameter values from AWS Systems Manager (SSM) Parameter Store

    :return: dict of parameter k/v's
    """

    ssm_client = boto3.client("ssm")
    params: dict = {}

    try:
        # make a single SSM API call for all parameters
        response = ssm_client.get_parameters_by_path(
            Path="/datahub_demo"
        )

        # create a dictionary of parameter k/v's
        for param in response.get("Parameters"):
            params[param["Name"]] = param["Value"]
        logging.debug(f"Params: {params}")
    except ClientError as e:
        logging.error(e)
        exit(1)

    return params


if __name__ == '__main__':
    main()

Sinking to DataHub

When sinking metadata into DataHub, you have two options: the GMS REST API or Kafka. According to DataHub, the advantage of the REST-based interface is that any errors can immediately be reported. On the other hand, the advantage of the Kafka-based interface is that it is asynchronous and can handle higher throughput. For this post, I am using DataHub’s REST API.
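Although this post uses the REST sink, a minimal sketch of the Kafka-based alternative in a recipe, with placeholder broker and schema registry addresses, would look something like this:

sink:
  type: "datahub-kafka"
  config:
    connection:
      bootstrap: "broker:9092"
      schema_registry_url: "http://schema-registry:8081"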

DataHub ingestion pipeline results for a Microsoft SQL Server datasource
Another example of a DataHub ingestion pipeline results for a Google BigQuery datasource

Column-level Metadata

In addition to column names and data types, it is possible to extract column descriptions and key types from certain datasources. Column descriptions, tags, and glossary terms can also be input through the DataHub UI. Below, we see an example of an Amazon Redshift fact table, whose table and column descriptions were ingested as part of the metadata.

Amazon Redshift fact table showing column-level metadata, tags, owners, and documentation

Business Glossary

DataHub can assign business glossary terms to entities. The DataHub Business Glossary plugin pulls business glossary metadata from a YAML-based configuration file.
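To load the glossary shown below, a minimal recipe sketch might look like the following, assuming the glossary file is saved locally as business_glossary.yml:

source:
  type: datahub-business-glossary
  config:
    file: ./business_glossary.yml

sink:
  type: "datahub-rest"
  config:
    server: ${DATAHUB_REST_ENDPOINT}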

# see sample: https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/examples/bootstrap_data/business_glossary.yml
version: 1
source: DataHub
owners:
  users:
    - datahub
url: "https://github.com/datahub-project/datahub/"
nodes:
  - name: Classification
    description: A set of terms related to Data Classification
    terms:
      - name: Sensitive
        description: Sensitive Data
        custom_properties:
          is_confidential: false
      - name: Confidential
        description: Confidential Data
        custom_properties:
          is_confidential: true
      - name: HighlyConfidential
        description: Highly Confidential Data
        custom_properties:
          is_confidential: true
  - name: PersonalInformation
    description: All terms related to personal information
    owners:
      users:
        - datahub
    terms:
      - name: ID
        description: An individual's unique identifier
        inherits:
          - Classification.Sensitive
      - name: Name
        description: An individual's Name
        inherits:
          - Classification.Sensitive
      - name: SSN
        description: An individual's SSN
        inherits:
          - Classification.Confidential
      - name: DriverLicense
        description: An individual's Driver License ID
        inherits:
          - Classification.Confidential
      - name: Email
        description: An individual's email address
        inherits:
          - Classification.Confidential
      - name: Address
        description: A physical address
      - name: Gender
        description: The gender identity of the individual
        inherits:
          - Classification.Sensitive

Business glossary terms can be reviewed in the Glossary Terms tab of the DataHub’s UI. Below, we see the three terms associated with the Classification glossary node: Confidential, HighlyConfidential, and Sensitive.

Example of a related set of terms in DataHub’s Business Glossary

We can search for entities inventoried in DataHub using their assigned business glossary terms.

Dataset search results based on a term in DataHub’s Business Glossary

Finally, we see an example of an AWS Athena data catalog table with business glossary terms applied to columns within the table’s schema.

AWS Athena table showing column-level descriptions, glossary terms, tags, owners, and documentation

SQL-based Profiler

DataHub can also extract statistics about entities, using the SQL-based Profiler. According to the DataHub documentation, the Profiler can extract the following:

  • Row and column counts for each table
  • Column null counts and proportions
  • Column distinct counts and proportions
  • Column min, max, mean, median, standard deviation, quantile values
  • Column histograms or frequencies of unique values

We can also track the historical stats for each profiled entity each time metadata is ingested.

Amazon Redshift fact table showing SQL-based profiler column-level statistics
Another example, a Google BigQuery table showing SQL-based profiler column-level statistics

Data Lineage

DataHub’s data lineage features allow us to view upstream and downstream relationships between different types of entities. DataHub can trace lineage across multiple platforms, datasets, pipelines, charts, and dashboards.

Below, we see a simple example of dataset entity-to-entity lineage in Amazon Redshift and then Apache Spark on Amazon EMR. The fact table has a downstream relationship to four database views. The views are based on SQL queries that include the upstream table as a datasource.

Visual lineage view of Amazon Redshift fact table and its four downstream view dependencies
Another visual lineage example of an Apache Spark job with Apache Hive tables as both the source and sink

DataHub Analytics

DataHub provides basic metadata quality and usage analytics in the DataHub UI: user activity, counts of datasource types, business glossary terms, environments, and actions.

Examples of DataHub’s metadata quality and usage analytics capabilities
More examples of DataHub’s metadata quality and usage analytics capabilities

Conclusion

In this post, we explored the features of a data catalog and learned about some of the leading commercial and open-source data catalogs. Next, we learned how DataHub could collect, organize, enrich, and search metadata across multiple datasources. Lastly, we discovered how easy it is to catalog metadata from datasources spread across multiple CSPs, SaaS providers, and corporate data centers, and centralize those results in DataHub.

In addition to the basic features reviewed in this post, DataHub offers a growing number of additional capabilities, including GraphQL and Timeline APIs, robust authentication and authorization, application monitoring and observability, and Great Expectations integration. All these qualities make DataHub an excellent choice for a data catalog.


This blog represents my own viewpoints and not of my employer, Amazon Web Services (AWS). All product names, logos, and brands are the property of their respective owners.


Azure Kubernetes Service (AKS) Observability with Istio Service Mesh

In the last two-part post, Kubernetes-based Microservice Observability with Istio Service Mesh, we deployed Istio, along with its observability tools, Prometheus, Grafana, Jaeger, and Kiali, to Google Kubernetes Engine (GKE). Following that post, I received several questions about using Istio’s observability tools with other popular managed Kubernetes platforms, primarily Azure Kubernetes Service (AKS). In most cases, Istio and its observability tools are compatible with these platforms, including AKS.

In this short follow-up of the last post, we will replace the GKE-specific cluster setup commands, found in part one of the last post, with new commands to provision a similar AKS cluster on Azure. The new AKS cluster will run Istio 1.1.3, released 4/15/2019, alongside the latest available version of AKS (Kubernetes), 1.12.6. We will replace Google’s Stackdriver logging with Azure Monitor logs. We will retain the external MongoDB Atlas cluster and the external CloudAMQP cluster dependencies.

Previous articles about AKS include First Impressions of AKS, Azure’s New Managed Kubernetes Container Service (November 2017) and Architecting Cloud-Optimized Apps with AKS (Azure’s Managed Kubernetes), Azure Service Bus, and Cosmos DB (December 2017).

Source Code

All source code for this post is available on GitHub in two projects. The Go-based microservices source code, all Kubernetes resources, and all deployment scripts are located in the k8s-istio-observe-backend project repository.

git clone \
  --branch master --single-branch \
  --depth 1 --no-tags \
  https://github.com/garystafford/k8s-istio-observe-backend.git

The Angular UI TypeScript-based source code is located in the k8s-istio-observe-frontend repository. You will not need to clone the Angular UI project for this post’s demonstration.

Setup

This post assumes you have a Microsoft Azure account with the necessary resource providers registered, and the Azure Command-Line Interface (CLI), az, installed and available to your command shell. You will also need Helm and Istio 1.1.3 installed and configured, which is covered in the last post.

screen_shot_2019-03-27_at_1_35_46_pm

Start by logging into Azure from your command shell.

az login \
  --username {{ your_username_here }} \
  --password {{ your_password_here }}

Resource Providers

If you are new to Azure or AKS, you may need to register some additional resource providers to complete this demonstration.

az provider list --output table

screen_shot_2019-03-27_at_5_37_46_pm

If you are missing required resource providers, you will see errors similar to the one shown below. Simply activate the particular provider corresponding to the error.

Operation failed with status:'Bad Request'. 
Details: Required resource provider registrations 
Microsoft.Compute, Microsoft.Network are missing.

To register the necessary providers, use the Azure CLI or the Azure Portal UI.

az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Network
az provider register --namespace Microsoft.Compute

Resource Group

AKS requires an Azure Resource Group. According to Azure, a resource group is a container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. I chose to create a new resource group associated with my closest geographic Azure Region, East US, using the Azure CLI.

az group create \
  --resource-group aks-observability-demo \
  --location eastus

screen_shot_2019-03-26_at_6_54_39_pm

Create the AKS Cluster

Before creating the AKS cluster, check for the latest versions of AKS. At the time of this post, the latest version of AKS was 1.12.6.

az aks get-versions \
  --location eastus \
  --output table

screen_shot_2019-03-26_at_6_56_38_pm

Using the latest AKS version, create the AKS managed cluster. There are many configuration options available with the az aks create command. For this post, I am creating three worker nodes using the Azure Standard_DS3_v2 VM type, which will give us a total of 12 vCPUs and 42 GB of memory. Anything smaller and all the Pods may not be schedulable. Instead of supplying an existing SSH key, I will let Azure create a new one. You should have no need to SSH into the worker nodes. I am also enabling the monitoring add-on. According to Azure, the add-on sets up Azure Monitor for containers, announced in December 2018, which monitors the performance of workloads deployed to Kubernetes environments hosted on AKS.

time az aks create \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --node-count 3 \
  --node-vm-size Standard_DS3_v2 \
  --enable-addons monitoring \
  --generate-ssh-keys \
  --kubernetes-version 1.12.6

Using the time command, we observe that the cluster took approximately 5m48s to provision; I have seen times up to almost 10 minutes. AKS provisioning is not as fast as GKE, which in my experience is at least 2x-3x faster for a similarly sized cluster.

screen_shot_2019-03-26_at_7_03_49_pm

After the cluster creation completes, retrieve your AKS cluster credentials.

az aks get-credentials \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --overwrite-existing

Examine the Cluster

Use the following command to confirm the cluster is ready by examining the status of three worker nodes.

kubectl get nodes --output=wide

screen_shot_2019-03-27_at_6_06_10_pm.png

Observe that Azure currently uses Ubuntu 16.04.5 LTS for the worker node’s host operating system. If you recall, GKE offers both Ubuntu as well as a Container-Optimized OS from Google.

Kubernetes Dashboard

Unlike GKE, there is no custom AKS dashboard. Therefore, we will use the Kubernetes Web UI (dashboard), which, unlike with GKE, is installed by default with AKS. According to Azure, since the AKS cluster uses RBAC, a ClusterRoleBinding must be created before you can correctly access the dashboard.

kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard

Next, we must create a proxy tunnel on local port 8001 to the dashboard running on the AKS cluster. This CLI command creates a proxy between your local system and the Kubernetes API and opens your web browser to the Kubernetes dashboard.

az aks browse \
  --name aks-observability-demo \
  --resource-group aks-observability-demo

screen_shot_2019-03-26_at_7_08_54_pm

Although you should use the Azure CLI, PowerShell, or SDK for all your AKS configuration tasks, using the dashboard to monitor the cluster and the resources running on it is invaluable.

screen_shot_2019-03-26_at_7_06_57_pm

The Kubernetes dashboard also provides access to raw container logs. Azure Monitor provides the ability to construct complex log queries, but for quick troubleshooting, you may just want to see the raw logs a specific container is outputting, directly from the dashboard.

screen_shot_2019-03-29_at_9_23_57_pm

Azure Portal

Logging into the Azure Portal, we can observe the AKS cluster, within the new Resource Group.

screen_shot_2019-03-26_at_7_08_25_pm

In addition to the Azure Resource Group we created, there will be a second Resource Group created automatically during the creation of the AKS cluster. This group contains all the resources that compose the AKS cluster. These resources include the three worker node VM instances, and their corresponding storage disks and NICs. The group also includes a network security group, route table, virtual network, and an availability set.
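If you need the name of this automatically generated Resource Group, for example in scripts, it can be retrieved with the Azure CLI:

az aks show \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --query nodeResourceGroup \
  --output tsv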

screen_shot_2019-03-26_at_7_08_04_pm

Deploy Istio

From this point on, the process to deploy the Istio Service Mesh and the Go-based microservices platform follows the previous post and uses the exact same scripts. To deploy Istio, after modifying the Kubernetes resource files, use the bash script, part4_install_istio.sh. I have added a few more pauses in the script to account for the apparently slower response times from AKS as opposed to GKE. It definitely takes longer to spin up the Istio resources on AKS than on GKE, which can result in errors if you do not pause between each stage of the deployment process.

screen_shot_2019-03-26_at_7_11_44_pm

screen_shot_2019-03-26_at_7_18_26_pm

Using the Kubernetes dashboard, we can view the Istio resources running in the istio-system Namespace, as shown below. Confirm that all resource Pods are running and healthy before deploying the Go-based microservices platform.

screen_shot_2019-03-26_at_7_16_50_pm

Deploy the Platform

Deploy the Go-based microservices platform, using bash deploy script, part5a_deploy_resources.sh.

screen_shot_2019-03-26_at_7_20_05_pm

The script deploys two replicas (Pods) of each of the eight microservices, Service-A through Service-H, and the Angular UI, to the dev and test Namespaces, for a total of 36 Pods. Each Pod will have the Istio sidecar proxy (Envoy Proxy) injected into it, alongside the microservice or UI.

screen_shot_2019-03-26_at_7_21_24_pm

Azure Load Balancer

If we return to the Resource Group created automatically when the AKS cluster was created, we will now see two additional resources: an Azure Load Balancer and a Public IP Address.

screen_shot_2019-03-26_at_7_21_56_pm

Similar to the GKE cluster in the last post, when the Istio Ingress Gateway is deployed as part of the platform, it is materialized as an Azure Load Balancer. The front-end of the load balancer is the new public IP address. The back-end of the load balancer is a pool containing the three AKS worker node VMs. The load balancer is associated with a set of rules and health probes.

screen_shot_2019-03-26_at_7_22_51_pm

DNS

I have associated the new Azure public IP address, connected with the front-end of the load balancer, with the four subdomains I am using to represent the UI and the edge service, Service-A, in both Namespaces. If Azure is your primary Cloud provider, then Azure DNS is a good choice to manage your domain’s DNS records. For this demo, you will require your own domain.
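If your domain’s zone is hosted in Azure DNS, an A record for each subdomain can be created with the Azure CLI; the zone, record set, and resource group names below are placeholders for your own domain:

az network dns record-set a add-record \
  --resource-group <dns_resource_group> \
  --zone-name <your_domain_here> \
  --record-set-name api.dev \
  --ipv4-address <public_ip_address>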

screen_shot_2019-03-28_at_9_43_42_pm

Testing the Platform

With everything deployed, confirm the platform is responding and generate some HTTP traffic for the observability tools to record. Similar to last time, I have chosen hey, a modern load generator and benchmarking tool, and a worthy replacement for Apache Bench (ab). Unlike ab, hey supports HTTP/2. Below, I am running hey directly from Azure Cloud Shell. The tool is simulating 10 concurrent users, generating a total of 500 HTTP GET requests to Service A.

# quick setup from Azure Shell using Bash
go get -u github.com/rakyll/hey
cd go/src/github.com/rakyll/hey/
go build
  
./hey -n 500 -c 10 -h2 http://api.dev.example-api.com/api/ping

We had 100% success, with all 500 calls resulting in an HTTP 200 OK success status response code. Based on the results, we can observe the platform was capable of approximately 4 requests/second, with an average response time of 2.48 seconds and a median response time of 2.80 seconds. Almost all of that time was the result of waiting for the response, as the details indicate.

screen_shot_2019-03-26_at_7_57_03_pm

Logging

In this post, we have replaced GCP’s Stackdriver logging with Azure Monitor logs. According to Microsoft, Azure Monitor maximizes the availability and performance of applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from Cloud and on-premises environments. In my opinion, Stackdriver is a superior solution for searching and correlating the logs of distributed applications running on Kubernetes. I find the interface and query language of Stackdriver easier and more intuitive than Azure Monitor, which although powerful, requires substantial query knowledge to obtain meaningful results. For example, here is a query to view the log entries from the services in the dev Namespace, within the last day.

let startTimestamp = ago(1d);
KubePodInventory
| where TimeGenerated > startTimestamp
| where ClusterName =~ "aks-observability-demo"
| where Namespace == "dev"
| where Name contains "service-"
| distinct ContainerID
| join
(
    ContainerLog
    | where TimeGenerated > startTimestamp
)
on ContainerID
| project LogEntrySource, LogEntry, TimeGenerated, Name
| order by TimeGenerated desc
| render table

Below, we see the Logs interface with the search query and log entry results.

screen_shot_2019-03-29_at_9_13_37_pm

Below, we see a detailed view of a single log entry from Service A.

screen_shot_2019-03-29_at_9_18_12_pm

Observability Tools

The previous post goes into greater detail on the features of each of the observability tools provided by Istio, including Prometheus, Grafana, Jaeger, and Kiali.

We can use the exact same kubectl port-forward commands to connect to the tools on AKS as we did on GKE. According to the Kubernetes documentation, since Kubernetes v1.10, port forwarding allows using a resource name, such as a service name, to select a matching pod. We forward a local port to a port on the tool’s pod.

# Grafana
kubectl port-forward -n istio-system \
  $(kubectl get pod -n istio-system -l app=grafana \
  -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
  
# Prometheus
kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=prometheus \
  -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
  
# Jaeger
kubectl port-forward -n istio-system \
  $(kubectl get pod -n istio-system -l app=jaeger \
  -o jsonpath='{.items[0].metadata.name}') 16686:16686 &
  
# Kiali
kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=kiali \
  -o jsonpath='{.items[0].metadata.name}') 20001:20001 &

screen_shot_2019-03-26_at_8_04_24_pm

Prometheus and Grafana

Prometheus is a completely open source and community-driven systems monitoring and alerting toolkit, originally built at SoundCloud, circa 2012. Interestingly, Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second hosted project, after Kubernetes.

Grafana describes itself as the leading open source software for time series analytics. According to Grafana Labs, Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. You can easily create, explore, and share visually rich, data-driven dashboards. Grafana also allows users to visually define alert rules for your most important metrics. Grafana will continuously evaluate rules and can send notifications.

According to Istio, the Grafana add-on is a pre-configured instance of Grafana. The Grafana Docker base image has been modified to start with both a Prometheus data source and the Istio Dashboard installed. Below, we see one of the pre-configured dashboards, the Istio Service Dashboard.

screen_shot_2019-03-26_at_8_16_52_pm

Jaeger

According to their website, Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems, including distributed context propagation, distributed transaction monitoring, root cause analysis, service dependency analysis, and performance and latency optimization. The Jaeger website contains a good overview of Jaeger’s architecture and general tracing-related terminology.

screen_shot_2019-03-26_at_8_03_31_pm

Below, we see a typical, distributed trace of the services, starting with the ingress gateway and passing across the upstream service dependencies.

screen_shot_2019-03-26_at_8_03_45_pm

Kiali

According to their website, Kiali provides answers to the questions: What are the microservices in my Istio service mesh, and how are they connected? Kiali works with Istio, in OpenShift or Kubernetes, to visualize the service mesh topology, to provide visibility into features like circuit breakers, request rates and more. It offers insights about the mesh components at different levels, from abstract Applications to Services and Workloads.

There is a common Kubernetes Secret that controls access to the Kiali API and UI. The default login is admin; the password is 1f2d1e2e67df.
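If the defaults have been changed, the credentials can be read back from the Secret; this sketch assumes the Secret is named kiali and uses the conventional username and passphrase keys:

kubectl get secret kiali -n istio-system \
  -o jsonpath='{.data.passphrase}' | base64 --decode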

screen_shot_2019-03-26_at_7_59_17_pm

Below, we see a detailed view of our platform, running in the dev namespace, on AKS.

screen_shot_2019-03-26_at_8_02_38_pm

Delete AKS Cluster

Once you are finished with this demo, use the following two commands to tear down the AKS cluster and remove the cluster context from your local configuration.

time az aks delete \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --yes

kubectl config delete-context aks-observability-demo

Conclusion

In this brief follow-up post, we have explored how the current set of observability tools, part of the latest version of Istio Service Mesh, integrates with Azure Kubernetes Service (AKS).

All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.


Building and Integrating LUIS-enabled Chatbots with Slack, using Azure Bot Service, Bot Builder SDK, and Cosmos DB

Introduction

In this post, we will explore the development of a machine learning-based, LUIS-enabled chatbot using the Azure Bot Service and the Bot Builder SDK. We will enhance the chatbot’s functionality with Azure’s Cloud services, including Cosmos DB and Blob Storage. Once built, we will integrate our chatbot across multiple channels, including Web Chat and Slack.

If you want to compare Azure’s current chatbot technologies with those of AWS and Google, in addition to this post, please read my previous two posts in this series, Building Serverless Actions for Google Assistant with Google Cloud Functions, Cloud Datastore, and Cloud Storage and Building Asynchronous, Serverless Alexa Skills with AWS Lambda, DynamoDB, S3, and Node.js. All three demonstrations are written in Node.js, all three leverage their cloud platform’s machine learning-based Natural Language Understanding services, and all three take advantage of the NoSQL database and storage services available on their respective cloud platforms.

Technology Stack

Here is a brief overview of the key Microsoft technologies we will incorporate into our bot’s architecture.

LUIS

The machine learning-based Language Understanding Intelligent Service (LUIS) is part of Azure’s Cognitive Services, used to build Natural Language Understanding (NLU) into apps, bots, and IoT devices. According to Microsoft, LUIS allows you to quickly create enterprise-ready, custom machine learning models that continuously improve.

Designed to identify valuable information in conversations, Language Understanding interprets user goals (intents) and distills valuable information from sentences (entities), for a high quality, nuanced language model. Language Understanding integrates seamlessly with the Speech service for instant Speech-to-Intent processing, and with the Azure Bot Service, making it easy to create a sophisticated bot. A LUIS bot contains a domain-specific natural language model, which you design.

Azure Bot Service

The Azure Bot Service provides an integrated environment that is purpose-built for bot development, enabling you to build, connect, test, deploy, and manage intelligent bots, all from one place. Bot Service leverages the Bot Builder SDK.

Bot Builder SDK

The Bot Builder SDK allows you to build, connect, deploy, and manage bots, which interact with users across multiple channels, from your app or website to Facebook Messenger, Kik, Skype, Slack, Microsoft Teams, Telegram, SMS (Twilio), and Cortana. Currently, the SDK is available for C# and Node.js. For this post, we will use the current Bot Builder Node.js SDK v3 release to write our chatbot.

Cosmos DB

According to Microsoft, Cosmos DB is a globally distributed, multi-model database-as-a-service, designed for low latency and scalable applications anywhere in the world. Cosmos DB supports multiple data models, including document, columnar, and graph. Cosmos also supports numerous database APIs, including MongoDB, Cassandra, and Gremlin. We will use the MongoDB API to store our documents in Cosmos DB, used by our chatbot.

Azure Blob Storage

According to Microsoft, Azure’s storage-as-a-service, Blob Storage, provides massively scalable object storage for any type of unstructured data: images, videos, audio, documents, and more. We will be using Blob Storage to store publicly-accessible images, used by our chatbot.

Azure Application Insights

According to Microsoft, Azure’s Application Insights provides comprehensive, actionable insights through application performance management (APM) and instant analytics. It quickly analyzes application telemetry, allowing the detection of anomalies, application failures, and performance changes. Application Insights will enable us to monitor our chatbot’s key metrics.

High-Level Architecture

A chatbot user interacts with the chatbot through a number of available channels, such as the Web, Slack, and Skype. The channels communicate with the Web App Bot, part of Azure Bot Service, and running on Azure’s App Service, the fully-managed platform for cloud apps. LUIS integration allows the chatbot to learn and understand the user’s intent based on our own domain-specific natural language model.

Through Azure’s App Service platform, our chatbot is able to retrieve data from Cosmos DB and images from Blob Storage. Our chatbot’s telemetry is available through Azure’s Application Insights.

Azure Chatbot Diagram

Azure Resources

Another way to understand our chatbot architecture is by examining the Azure resources necessary to build the chatbot. Below is an example of all the Azure resources that will be created as a result of building a LUIS-enabled bot, which has been integrated with Cosmos DB, Blob Storage, and Application Insights.

chatbot-10-resource-group

Chatbot Demonstration

As a demonstration, we will build an informational chatbot, the Azure Tech Facts Chatbot. The bot will respond to the user with interesting facts about Azure, Microsoft’s Cloud computing platform. Note this is not intended to be an official Microsoft bot and is only used for demonstration purposes.

Source Code

All open-sourced code for this post can be found on GitHub. The code samples in this post are displayed as GitHub Gists, which may not display correctly on some mobile and social media browsers. Links to the gists are also provided.

Development Process

This post will focus on the development and integration of a chatbot with the LUIS, Azure platform services, and channels, such as Web Chat and Slack. The post is not intended to be a general how-to article on developing Azure chatbots or the use of the Azure Cloud Platform.

Building the chatbot will involve the following steps.

  • Design the chatbot’s conversation flow;
  • Provision a Cosmos DB instance and import the Azure Facts documents;
  • Provision Azure Storage and upload the images as blobs into Azure Storage;
  • Create the new LUIS-enabled Web App Bot with Azure’s Bot Service;
  • Define the chatbot’s Intents, Entities, and Utterances with LUIS;
  • Train and publish the LUIS app;
  • Deploy and test the chatbot;
  • Integrate the chatbot with Web Chat and Slack Channels;

The post assumes you have an existing Azure account and a working knowledge of Azure. Let’s explore each step in more detail.

Cost of Azure Bots!

Be aware, you will be charged for Azure Cloud services when building this bot. Unlike an Alexa Custom Skill or an Action for Google Assistant, an Azure chatbot is not a serverless application. A common feature of serverless platforms is that you only pay for the compute time you consume; there is typically no charge when your code is not running. This means, unlike with AWS and Google Cloud Platform, you will pay for Azure resources you provision, whether or not you use them.

Developing this demo’s chatbot on the Azure platform, with little or no activity most of the time, cost me about $5/day. On AWS or GCP, a similar project would cost pennies per day or less (like, $0). Currently, in my opinion, Azure does not have a very competitive model for building bots, or for serverless computing in general, beyond Azure Functions, when compared to Google and AWS.

Conversational Flow

The first step in developing a chatbot is designing the conversation flow between the user and the bot. Defining the conversation flow is essential to developing the bot’s programmatic logic and training the domain-specific natural language model for the machine learning-based services the bot is integrated with, in this case, LUIS. What are all the ways the user might explicitly invoke our chatbot? What are all the ways the user might implicitly invoke our chatbot and provide intent to the bot? Taking the time to map out the possible conversational interactions is essential.

With more advanced bots, like Alexa, Actions for Google Assistant, and Azure Bots, we also have to consider the visual design of the conversational interfaces. In addition to simple voice and text responses, these bots are capable of responding with a rich array of UX elements, including what are generically known as ‘Cards’. Cards come in varying complexity and may contain elements such as a title, sub-titles, text, video, audio, buttons, and links. Azure Bot Service offers several different cards for specific use cases.

Channel Design

Another layer of complexity with bots is designing for the channels into which they integrate. There is a substantial visual difference in a conversational exchange displayed on Facebook Messenger, as compared to Slack, Skype, Microsoft Teams, GroupMe, or a web browser. Producing an effective conversational flow presentation across multiple channels is a design challenge.

We will be focusing on two channels for delivery of our bot, the Web Chat and Slack channels. We will want to design the conversational flow and visual elements to be effective and engaging across both channels. An added complexity is that both channels have mobile and web-based interfaces. We will ensure our design works with the compact real estate of an average-sized mobile device screen, as well as an average-sized laptop screen.

Web Chat Channel Design

Below are two views of our chatbot, delivered through the Web Chat channel. To the left is an example of the bot responding with ThumbnailCard UX elements. The ThumbnailCards contain a title, sub-title, text, a small image, and a button with a link. Below and to the right is an example of the bot responding with a HeroCard. The HeroCard contains the same elements as the ThumbnailCard but takes up about twice the space, with a significantly larger image.

conversational Model 3

Slack Channel Design

Below are three views of our chatbot, delivered through the Slack channel, in this case, the mobile iOS version of the Slack app. Even here on a larger iPhone 8s, there is not a lot of real estate. On the left is the same HeroCard as we saw above in the Web Chat channel. In the middle are the same ThumbnailCards. On the right is a simple text-only response. Although the text-only bot responses are not as rich as the cards, you are able to display more of the conversational flow on a single mobile screen.

Mobile Skype3

Lastly, below we see our chatbot delivered through the Slack for Mac desktop app. Within the single view, we see an example of a HeroCard (top), ThumbnailCard (center), and a text-only response (bottom). Notice how the larger UI of the desktop Slack app changes the look and feel of the chatbot conversational flow.

chatbot-51-slack-bot

In my opinion, the ThumbnailCards work well in the Web Chat channel and Slack channel’s desktop app, while the text-only responses seem to work best with the smaller footprint of the Slack channel’s mobile client. To work across a number of channels, our final bot will contain a mix of ThumbnailCards and text-only responses.

Cosmos DB

As an alternative to Microsoft’s Cognitive Service, QnA Maker, we will use Cosmos DB to house the responses to users’ requests for facts about Azure. When a user asks our informational chatbot for a fact about Azure, the bot will query Cosmos DB, passing a single unique string value, the fact the user is requesting. In response, Cosmos DB will return a JSON Document, containing field-and-value pairs with the fact’s title, image name, and textual information, as shown below.
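Each fact document shares the same simple shape; the example below is taken from the import script shown later in this post:

{
  "fact": "kubernetes",
  "title": "Azure Kubernetes Service (AKS)",
  "image": "image-18.png",
  "response": "According to Microsoft, Azure Kubernetes Service (AKS) is a fully managed Kubernetes container orchestration service, which simplifies Kubernetes management, deployment, and operations."
}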

chatbot-30-cosmos-db

There are a few ways to create the new Cosmos DB database and collection, which will hold our documents; we will use the Azure CLI. According to Microsoft, the Azure CLI 2.0 is Microsoft’s cross-platform command line interface (CLI) for managing Azure resources. You can use it in your browser with Azure Cloud Shell, or install it on macOS, Linux, or Windows, and run it from the command line. (gist).

#!/usr/bin/env sh

# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License

set -ex

LOCATION="<value_goes_here>"
RESOURCE_GROUP="<value_goes_here>"
ACCOUNT_NAME="<value_goes_here>"
COSMOS_ACCOUNT_KEY="<value_goes_here>"
DB_NAME="<value_goes_here>"
COLLECTION_NAME="<value_goes_here>"

az login

# create the Cosmos DB account with the MongoDB API
az cosmosdb create \
  --name ${ACCOUNT_NAME} \
  --resource-group ${RESOURCE_GROUP} \
  --locations "East US=0" \
  --kind MongoDB

# create the database
az cosmosdb database create \
  --name ${ACCOUNT_NAME} \
  --db-name ${DB_NAME} \
  --resource-group-name ${RESOURCE_GROUP}

# create the collection with 400 RU/s of provisioned throughput
az cosmosdb collection create \
  --collection-name ${COLLECTION_NAME} \
  --name ${ACCOUNT_NAME} \
  --db-name ${DB_NAME} \
  --resource-group-name ${RESOURCE_GROUP} \
  --throughput 400 \
  --key ${COSMOS_ACCOUNT_KEY} \
  --verbose --debug

There are a few ways for us to get our Azure facts documents into Cosmos DB. Since we are writing our chatbot in Node.js, I also chose to write a Cosmos DB facts import script in Node.js, cosmos-db-data.js. Since we are using Cosmos DB as a MongoDB datastore, all the script requires is the official MongoDB driver for Node.js. Using the MongoDB driver’s db.collection.insertMany() method, we can upload an entire array of Azure fact document objects with one call. For security, we have set the Cosmos DB connection string as an environment variable, which the script expects to find at runtime (gist).

// author: Gary A. Stafford
// site: https://programmaticponderings.com
// license: MIT License
// Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-samples

// Insert Azure Facts into Cosmos DB Collection

'use strict';

/* CONSTANTS */
const mongoClient = require('mongodb').MongoClient;
const assert = require('assert');
const COSMOS_DB_CONN_STR = process.env.COSMOS_DB_CONN_STR;
const DB_COLLECTION = "azuretechfacts";

const azureFacts = [
    {
        "fact": "certifications",
        "title": "Azure Certifications",
        "image": "image-06.png",
        "response": "As of June 2018, Microsoft offered ten Azure certification exams, allowing IT professionals the ability to differentiate themselves and validate their knowledge and skills."
    },
    {
        "fact": "cognitive",
        "title": "Cognitive Services",
        "image": "image-09.png",
        "response": "Azure's intelligent algorithms allow apps to see, hear, speak, understand and interpret user needs through natural methods of communication."
    },
    {
        "fact": "competition",
        "title": "Azure's Competition",
        "image": "image-05.png",
        "response": "According to the G2 Crowd website, Azure's Cloud competitors include Amazon Web Services (AWS), Digital Ocean, Google Compute Engine (GCE), and Rackspace."
    },
    {
        "fact": "compliance",
        "title": "Compliance",
        "image": "image-06.png",
        "response": "Microsoft provides the most comprehensive set of compliance offerings (including certifications and attestations) of any cloud service provider."
    },
    {
        "fact": "cosmos",
        "title": "Azure Cosmos DB",
        "image": "image-17.png",
        "response": "According to Microsoft, Cosmos DB is a globally distributed, multi-model database service, designed for low latency and scalable applications anywhere in the world, with native support for NoSQL."
    },
    {
        "fact": "kubernetes",
        "title": "Azure Kubernetes Service (AKS)",
        "image": "image-18.png",
        "response": "According to Microsoft, Azure Kubernetes Service (AKS) is a fully managed Kubernetes container orchestration service, which simplifies Kubernetes management, deployment, and operations."
    }
];

const insertDocuments = function (db, callback) {
    // insert the entire array of fact documents with a single call
    db.collection(DB_COLLECTION).insertMany(azureFacts, function (err, result) {
        assert.equal(err, null);
        console.log(`Inserted documents into the ${DB_COLLECTION} collection.`);
        callback();
    });
};

const findDocuments = function (db, callback) {
    const cursor = db.collection(DB_COLLECTION).find();
    cursor.each(function (err, doc) {
        assert.equal(err, null);
        if (doc != null) {
            console.dir(doc);
        } else {
            callback();
        }
    });
};

const deleteDocuments = function (db, callback) {
    db.collection(DB_COLLECTION).deleteMany(
        {},
        function (err, results) {
            console.log(results);
            callback();
        }
    );
};

mongoClient.connect(COSMOS_DB_CONN_STR, function (err, client) {
    assert.equal(null, err);
    const db = client.db(DB_COLLECTION);

    deleteDocuments(db, function () {
        insertDocuments(db, function () {
            findDocuments(db, function () {
                client.close();
            });
        });
    });
});

Azure Blob Storage

When a user asks our informational chatbot for a fact about Azure, the bot will query Cosmos DB. One of the values returned is an image name. The image itself is stored on Azure Blob Storage.

chatbot-20-blob-storage

The image, actually an Azure icon available from Microsoft, is then displayed in the ThumbnailCard or HeroCard shown earlier.

chatbot-21-blob-storage

According to Microsoft, an Azure storage account provides a unique namespace in the cloud to store and access your data objects in Azure Storage. A storage account contains any blobs, files, queues, tables, and disks that you create under that account. A container organizes a set of blobs, similar to a folder in a file system. All blobs reside within a container. Similar to Cosmos DB, there are a few ways to create a new Azure Storage account and a blob storage container, which will hold our images. Once again, we will use the Azure CLI (gist).

#!/usr/bin/env sh
# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License

set -ex

## CONSTANTS ##
ACCOUNT_KEY="<value_goes_here>"
LOCATION="<value_goes_here>"
RESOURCE_GROUP="<value_goes_here>"
STORAGE_ACCOUNT="<value_goes_here>"
STORAGE_CONTAINER="<value_goes_here>"

az login

az storage account create \
  --name ${STORAGE_ACCOUNT} \
  --default-action Allow \
  --kind Storage \
  --location ${LOCATION} \
  --resource-group ${RESOURCE_GROUP} \
  --sku Standard_LRS

az storage container create \
  --name ${STORAGE_CONTAINER} \
  --fail-on-exist \
  --public-access blob \
  --account-name ${STORAGE_ACCOUNT} \
  --account-key ${ACCOUNT_KEY}

Once the storage account and container are created using the Azure CLI, we can upload the images, included with the GitHub project, using the Azure CLI's storage blob upload-batch command (gist).

#!/usr/bin/env sh
# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License

set -ex

## CONSTANTS ##
ACCOUNT_KEY="<value_goes_here>"
STORAGE_ACCOUNT="<value_goes_here>"
STORAGE_CONTAINER="<value_goes_here>"

az storage blob upload-batch \
  --account-name ${STORAGE_ACCOUNT} \
  --account-key ${ACCOUNT_KEY} \
  --destination ${STORAGE_CONTAINER} \
  --source ../azure-tech-facts-google/pics/ \
  --pattern image-*.png
# --dryrun

Web App Chatbot

To create the LUIS-enabled chatbot, we can use the Azure Bot Service, available on the Azure Portal. A Web App Bot is one of a variety of bots available from Azure’s Bot Service, which is part of Azure’s larger and quickly growing suite of AI and Machine Learning Cognitive Services. A Web App Bot is an Azure Bot Service Bot deployed to an Azure App Service Web App. An App Service Web App is a fully managed platform that lets you build, deploy, and scale enterprise-grade web apps.

chatbot-84-create-chatbot.png

To create a LUIS-enabled chatbot, choose the Language Understanding Bot template, from the Node.js SDK Language options. This will provide a complete project and boilerplate bot template, written in Node.js, for you to start developing with. I chose to use the SDK v3, as v4 is still in preview and subject to change.

chatbot-80-create-chatbot

Azure Resource Manager

A great DevOps feature of the Azure platform is its ability to generate Azure Resource Manager (ARM) templates and the associated automation scripts in PowerShell, .NET, Ruby, and the CLI. This allows engineers to programmatically build and provision services on the Azure platform, without having to write the code themselves.

chatbot-83-create-chatbot

To build our chatbot, you can continue from the Azure Portal as I did, or download the ARM template and scripts, and run them locally. Once you have created the chatbot, you will have the option to download the source code as a ZIP file from the Bot Management Build console. I prefer to use the JetBrains WebStorm IDE to develop my Node.js-based bots, and GitHub to store my source code.

chatbot-14-gitflow

Application Settings

As part of developing the chatbot, you will need to add two additional application settings to the Azure App Service the chatbot is running within. The Cosmos DB connection string (COSMOS_DB_CONN_STR) and the URL of your blob storage container (ICON_STORAGE_URL) will both be referenced from within our bot, as environment variables. You can manually add the key/value pairs from the Azure Portal (shown below), or programmatically, as sketched after the screengrab.

chatbot-12-app-settings
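
If you choose the programmatic route, a minimal sketch using the Azure CLI might look like the following; the app and resource group names are placeholders, and the setting values are illustrative only.

# add the two settings to the chatbot's App Service (names/values are placeholders)
az webapp config appsettings set \
  --name your_web_app_bot_name \
  --resource-group your_resource_group \
  --settings \
    COSMOS_DB_CONN_STR="<cosmos_db_connection_string>" \
    ICON_STORAGE_URL="https://<storage_account>.blob.core.windows.net/<container>"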

The chatbot’s code, in the app.js file, is divided into three sections: Constants and Global Variables, Intent Handlers, and Helper Functions. Let’s look at each section and its functionality.

Constants

Below are the constants used by the chatbot. I have preserved Azure's boilerplate template comments in the app.js file. The comments are helpful in understanding the detailed inner workings of the chatbot code (gist).

// author: Gary A. Stafford
// site: https://programmaticponderings.com
// license: MIT License
// description: Azure Tech Facts LUIS-enabled Chatbot

'use strict';

/*-----------------------------------------------------------------------------
A simple Language Understanding (LUIS) bot for the Microsoft Bot Framework.
-----------------------------------------------------------------------------*/

/* CONSTANTS AND GLOBAL VARIABLES */

const restify = require('restify');
const builder = require('botbuilder');
const botbuilder_azure = require("botbuilder-azure");
const mongoClient = require('mongodb').MongoClient;

const COSMOS_DB_CONN_STR = process.env.COSMOS_DB_CONN_STR;
const DB_COLLECTION = "azuretechfacts";
const ICON_STORAGE_URL = process.env.ICON_STORAGE_URL;

// Setup Restify Server
const server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978, function () {
    console.log('%s listening to %s', server.name, server.url);
});

// Create chat connector for communicating with the Bot Framework Service
const connector = new builder.ChatConnector({
    appId: process.env.MicrosoftAppId,
    appPassword: process.env.MicrosoftAppPassword,
    openIdMetadata: process.env.BotOpenIdMetadata
});

// Listen for messages from users
server.post('/api/messages', connector.listen());

/*----------------------------------------------------------------------------------------
* Bot Storage: This is a great spot to register the private state storage for your bot.
* We provide adapters for Azure Table, CosmosDb, SQL Azure, or you can implement your own!
* For samples and documentation, see: https://github.com/Microsoft/BotBuilder-Azure
* ---------------------------------------------------------------------------------------- */

const tableName = 'botdata';
const azureTableClient = new botbuilder_azure.AzureTableClient(tableName, process.env['AzureWebJobsStorage']);
const tableStorage = new botbuilder_azure.AzureBotStorage({gzipData: false}, azureTableClient);

// Create your bot with a function to receive messages from the user
// This default message handler is invoked if the user's utterance doesn't
// match any intents handled by other dialogs.
const bot = new builder.UniversalBot(connector, function (session, args) {
    const DEFAULT_RESPONSE = `Sorry, I didn't understand: _'${session.message.text}'_.`;
    session.send(DEFAULT_RESPONSE).endDialog();
});

bot.set('storage', tableStorage);

// Make sure you add code to validate these fields
const luisAppId = process.env.LuisAppId;
const luisAPIKey = process.env.LuisAPIKey;
const luisAPIHostName = process.env.LuisAPIHostName || 'westus.api.cognitive.microsoft.com';
const LuisModelUrl = 'https://' + luisAPIHostName + '/luis/v2.0/apps/' + luisAppId + '?subscription-key=' + luisAPIKey;

// Create a recognizer that gets intents from LUIS, and add it to the bot
const recognizer = new builder.LuisRecognizer(LuisModelUrl);
bot.recognizer(recognizer);

Notice that using the LUIS-enabled Language Understanding template, Azure has provisioned a LUIS.ai app and integrated it with our chatbot. More about LUIS, next.

Intent Handlers

The next part of our chatbot's code handles intents. Our chatbot's intents include Greeting, Help, Cancel, and AzureFacts. The Greeting intent handler defines how the bot greets a new user when they make an explicit invocation of the chatbot (without intent). The Help intent handler defines how the chatbot handles a request for help. The Cancel intent handler defines how the bot handles a user's desire to quit, or an unknown error with our bot. The AzureFacts intent handler handles implicit invocations of the chatbot (with intent), by returning the requested Azure fact. We will use LUIS to train the AzureFacts intent in the next part of this post.

Each intent handler can return a different type of response to the user. For this demo, we will have the Greeting, Help, and AzureFacts handlers return a ThumbnailCard, while the Cancel handler simply returns a text message (gist).

/* INTENT HANDLERS */

// Add a dialog for each intent that the LUIS app recognizes.
// See https://docs.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-recognize-intent-luis
bot.dialog('GreetingDialog',
    (session) => {
        const WELCOME_TEXT_LONG = `You can say things like: \n` +
            `_'Tell me about Azure certifications.'_ \n` +
            `_'When was Azure released?'_ \n` +
            `_'Give me a random fact.'_`;

        let botResponse = {
            title: 'What would you like to know about Microsoft Azure?',
            response: WELCOME_TEXT_LONG,
            image: 'image-16.png'
        };

        let card = createThumbnailCard(session, botResponse);
        let msg = new builder.Message(session).addAttachment(card);
        // let msg = botResponse.response;
        session.send(msg).endDialog();
    }
).triggerAction({
    matches: 'Greeting'
});

bot.dialog('HelpDialog',
    (session) => {
        const FACTS_LIST = "Certifications, Cognitive Services, Competition, Compliance, First Offering, Functions, " +
            "Geographies, Global Infrastructure, Platforms, Categories, Products, Regions, and Release Date";

        const botResponse = {
            title: 'Need a little help?',
            response: `Current facts include: ${FACTS_LIST}.`,
            image: 'image-15.png'
        };

        let card = createThumbnailCard(session, botResponse);
        let msg = new builder.Message(session).addAttachment(card);
        session.send(msg).endDialog();
    }
).triggerAction({
    matches: 'Help'
});

bot.dialog('CancelDialog',
    (session) => {
        const CANCEL_RESPONSE = 'Goodbye.';
        session.send(CANCEL_RESPONSE).endDialog();
    }
).triggerAction({
    matches: 'Cancel'
});

bot.dialog('AzureFactsDialog',
    (session, args) => {
        let query;
        let entity = args.intent.entities[0];
        let msg = new builder.Message(session);

        if (entity === undefined) { // unknown Facts entity was requested
            msg = 'Sorry, you requested an unknown fact.';
            console.log(msg);
            session.send(msg).endDialog();
        } else {
            query = entity.resolution.values[0];
            console.log(`Entity: ${JSON.stringify(entity)}`);

            buildFactResponse(query, function (document) {
                if (!document) {
                    msg = `Sorry, seems we are missing the fact, '${query}'.`;
                    console.log(msg);
                } else {
                    let card = createThumbnailCard(session, document);
                    msg = new builder.Message(session).addAttachment(card);
                }
                session.send(msg).endDialog();
            });
        }
    }
).triggerAction({
    matches: 'AzureFacts'
});

Helper Functions

The last part of our chatbot's code contains the helper functions called by the intent handlers. These include selectRandomFact(), which returns a random fact when the user requests one, and two functions that return a ThumbnailCard or a HeroCard, depending on the request: createThumbnailCard(session, botResponse) and createHeroCard(session, botResponse).

The buildFactResponse(factToQuery, callback) function is called by the AzureFacts intent handler. This function passes the fact from the user (i.e., certifications) and a callback to the findFact(factToQuery, callback) function. The findFact function queries Cosmos DB, using the MongoDB Node.js Driver's db.collection().findOne method, and returns the matching document via the callback (gist).

/* HELPER FUNCTIONS */

function selectRandomFact() {
    const FACTS_ARRAY = ['description', 'released', 'global', 'regions',
        'geographies', 'platforms', 'categories', 'products', 'cognitive',
        'compliance', 'first', 'certifications', 'competition', 'functions'];

    return FACTS_ARRAY[Math.floor(Math.random() * FACTS_ARRAY.length)];
}

function buildFactResponse(factToQuery, callback) {
    if (factToQuery.toString().trim() === 'random') {
        factToQuery = selectRandomFact();
    }

    mongoClient.connect(COSMOS_DB_CONN_STR, function (err, client) {
        const db = client.db(DB_COLLECTION);
        findFact(db, factToQuery, function (document) {
            client.close();
            if (!document) {
                console.log(`No document returned for value of ${factToQuery}?`);
            }
            return callback(document);
        });
    });
}

function createHeroCard(session, botResponse) {
    return new builder.HeroCard(session)
        .title('Azure Tech Facts')
        .subtitle(botResponse.title)
        .text(botResponse.response)
        .images([
            builder.CardImage.create(session, `${ICON_STORAGE_URL}/${botResponse.image}`)
        ])
        .buttons([
            builder.CardAction.openUrl(session, 'https://azure.microsoft.com', 'Learn more...')
        ]);
}

function createThumbnailCard(session, botResponse) {
    return new builder.ThumbnailCard(session)
        .title('Azure Tech Facts')
        .subtitle(botResponse.title)
        .text(botResponse.response)
        .images([
            builder.CardImage.create(session, `${ICON_STORAGE_URL}/${botResponse.image}`)
        ])
        .buttons([
            builder.CardAction.openUrl(session, 'https://azure.microsoft.com', 'Learn more...')
        ]);
}

function findFact(db, factToQuery, callback) {
    console.log(`fact to query: ${factToQuery}`);
    db.collection(DB_COLLECTION).findOne({"fact": factToQuery})
        .then(function (document) {
            if (!document) {
                console.log(`No document found for fact '${factToQuery}'`);
            }
            console.log(`Document found: ${JSON.stringify(document)}`);
            return callback(document);
        });
}

LUIS

We will use LUIS to add a perceived degree of intelligence to our chatbot, helping it understand our bot's domain-specific natural language model. If you have built Alexa Skills or Actions for Google Assistant, LUIS apps work almost identically. The concepts of Intents, Intent Handlers, Entities, and Utterances are universal to all three platforms.

Intents are how LUIS determines what a user wants to do. LUIS will parse user utterances, understand the user’s intent, and pass that intent onto our chatbot, to be handled by the proper intent handler. The bot will then respond accordingly to that intent — with a greeting, with the requested fact, or by providing help.

chatbot-01-intents

Entities

LUIS describes an entity as being like a variable, used to capture and pass important information. We will start by defining our AzureFacts Intent’s Facts Entities. The Facts entities represent each type of fact a user might request. The requested fact is extracted from the user’s utterances and passed to the chatbot. LUIS allows us to import entities as JSON. I have included a set of Facts entities to import, in the azure-facts-entities.json file, within the project on GitHub (gist).

[
    {
        "canonicalForm": "certifications",
        "list": [
            "certification",
            "certification exam",
            "certification exams"
        ]
    },
    {
        "canonicalForm": "cognitive",
        "list": [
            "cognitive services",
            "cognitive service"
        ]
    },
    {
        "canonicalForm": "competition",
        "list": [
            "competitors",
            "competitor"
        ]
    },
    {
        "canonicalForm": "cosmos",
        "list": [
            "cosmosdb",
            "cosmos db",
            "nosql",
            "mongodb"
        ]
    },
    {
        "canonicalForm": "kubernetes",
        "list": [
            "aks",
            "kubernetes service",
            "docker",
            "containers"
        ]
    }
]

Each entity includes a primary canonical form, as well as possible synonyms the user may utter in their invocation. If the user utters a synonym, LUIS will understand the intent and pass the canonical form of the entity to the chatbot. For example, if we said ‘tell me about Azure AKS,’ LUIS understands the phrasing, identifies the correct intent, AzureFacts intent, substitutes the synonym, ‘AKS’, with the canonical form of the Facts entity, ‘kubernetes’, and passes the top scoring intent to be handled and the value ‘kubernetes’ to our bot. The bot would then query for the document associated with ‘kubernetes’ in Cosmos DB, and return a response.

chatbot-04-entities

Utterances

Once we have created and associated our Facts entities with our AzureFacts intent, we need to input a few possible phrases a user may utter to invoke our chatbot. Microsoft actually recommends against providing too many utterance examples, as part of their best practices. Below you see an elementary list of possible phrases associated with the AzureFacts intent. You also see the blue highlighted word, 'Facts', in each utterance, a reference to the Facts entities. LUIS understands that this position in the phrasing represents a Facts entity value.

chatbot-02-azure-intent

Patterns

Patterns, according to Microsoft, are designed to improve the accuracy of LUIS, when several utterances are very similar. By providing a pattern for the utterances, LUIS can have higher confidence in the predictions.

chatbot-03-patterns

The topic of training your LUIS app is well beyond the scope of this post. Microsoft has an excellent series of articles that I strongly suggest reading. They will greatly assist in improving the accuracy of your LUIS chatbot.

chatbot-05-training

Once you have completed building and training your intents, entities, phrases, and other items, you must publish your LUIS app for it to be accessed from your chatbot. Publishing allows LUIS to be queried from an HTTP endpoint. The LUIS interface will enable you to publish both a Staging and a Production copy of the app. For brevity, I published directly to Production. If you recall our chatbot's application settings, earlier, the settings include a luisAppId, a luisAPIKey, and a luisAPIHostName. Together these form the HTTP endpoint, LuisModelUrl, through which the chatbot will access the LUIS app.

chatbot-06-publish

Using the LUIS API endpoint, we can test our LUIS app, independent of our chatbot. This can be very useful for troubleshooting bot issues. Below, we see an example utterance of ‘tell me about Azure Functions.’ LUIS has correctly deduced the user intent, assigning the AzureFacts intent with the highest prediction score. LUIS also identified the correct Entity, ‘functions,’ which it would typically return to the chatbot.

chatbot-07-luis-endpoint.png
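
To replicate this test from the command line, a quick sketch using curl against the published endpoint follows. The URL format mirrors the LuisModelUrl assembled in the constants section; the app ID, key, and region values are placeholders.

# query the published LUIS app directly (placeholders for app ID and key)
LUIS_APP_ID="<your_luis_app_id>"
LUIS_API_KEY="<your_luis_api_key>"

curl "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/${LUIS_APP_ID}?subscription-key=${LUIS_API_KEY}&q=tell%20me%20about%20azure%20functions"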

Deployment

With our chatbot developed and our LUIS app built, trained, and published, we can deploy our bot to the Azure platform. There are a few ways to deploy our chatbot, depending on your platform and language choice. I chose to use the DevOps practice of Continuous Deployment, offered in the Azure Bot Management console.

chatbot-14-gitflow

With Continuous Deployment, every time we commit a change to GitHub, a webhook fires, and my chatbot is deployed to the Azure platform. If you have high confidence in your changes through testing, you could choose to commit and deploy directly.

chatbot-08-publish.png

Alternately, you might choose a safer approach, using feature branches or pull requests. In that case, your chatbot will be deployed upon a successful merge of the feature branch or pull request to master.

Manual Testing

Azure provides the ability to test your bot from the Azure Portal. Using the Bot Management Test in Web Chat console, you can test your bot using the Web Chat channel. We will talk about different kinds of channels later in the post. This is an easy and quick way to manually test your chatbot.

chatbot-17-testing

For more sophisticated, automated testing of your chatbot, there are a handful of frameworks available, including bot-tester, which integrates with mocha and chai. Stuart Harrison published an excellent article on testing with the bot-tester framework, Building a test-driven chatbot for the Microsoft Bot Framework in Node.js.

Log Streaming

As part of testing and troubleshooting our chatbot in Production, we will need to review application logs occasionally. Azure offers its Log streaming feature. To access log streams, you must turn on application logging, which is off by default, and choose a log level, in the Azure Monitoring Diagnostic logs console.

chatbot-18-log-stream.png

Once Application logging is active, you may review logs in the Monitoring Log stream console. Log streaming is automatically turned off after 12 hours and times out after 30 minutes of inactivity. I personally find application logging and access to logs more difficult and far less intuitive on the Azure platform than on AWS or Google Cloud.

chatbot-13-log-stream-debugging
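
If you prefer the CLI to the Portal, a rough equivalent is sketched below; the exact flag values vary slightly between Azure CLI versions, so treat this as a starting point rather than the definitive commands.

# enable filesystem application logging for the bot's App Service
az webapp log config \
  --name your_web_app_bot_name \
  --resource-group your_resource_group \
  --application-logging filesystem \
  --level information

# stream the logs to your terminal
az webapp log tail \
  --name your_web_app_bot_name \
  --resource-group your_resource_group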

Metrics

As part of testing and eventually monitoring our chatbot in Production, the Azure App Service Overview console provides basic telemetry about the App Service, on which the bot is running. With Azure Application Insights, we can drill down into finer-grain application telemetry.

chatbot-11-app-service-overview

Web Integration

With our chatbot built, trained, deployed, and tested, we can integrate it with multiple channels. Channels represent all the ways our users might interact with our chatbot: Web Chat, Slack, Skype, Facebook Messenger, and so forth. Microsoft describes channels as a connection between the Bot Framework and communication apps. For this post, we will look at two channels, Web Chat and Slack.

chatbot-40-channels

Enabling Web Chat is probably the easiest of the channels. When you create a bot with Bot Service, the Web Chat channel is automatically configured for you. You used it to test your bot, earlier in the post. Displaying your chatbot through the Web Chat channel, embedded in your website, requires a secret, for which you have two options. Option one is to keep your secret hidden, exchange the secret for a token, and use the token to generate the embed code. Option two, according to Microsoft, is to embed the web chat control in your website using the secret directly. This method will allow other developers to easily embed your bot into their websites.

chatbot-61-web.png

Embedding Web Chat in your website allows your website users to interact directly with your chatbot. Shown below, I have embedded our chatbot’s Web Chat channel in my website. It will enable a user to interact with the bot, independent of the website’s primary content. Here, a user could ask for more information on a particular topic they found of interest in the article, such as Azure Kubernetes Service (AKS). The Web Chat window is collapsible when not in use.

chatbot-60-web

The Web Chat is embedded using an HTML iframe tag. The HTML code to embed the Web Chat channel is included in the Web Chat Channel Configuration console, shown above. I found an effective piece of JavaScript code by Anthony Cu, in his post, Embed the Bot Framework Web Chat as a Collapsible Window. I modified Anthony's code to fit my own website, moving the collapsible Web Chat iframe to the bottom right corner and adjusting the dimensions of the frame, so as not to intrude on the site's central content area. I've included the code and a simulated HTML page in the GitHub project.

Slack Integration

To integrate your chatbot with the Slack channel, I will assume you have an existing Slack Workspace with sufficient admin rights to create and deploy Slack Apps to that Workspace.

To integrate our chatbot with Slack, we need to create a new Bot App for Slack. The Slack App will be configured to interact with our deployed bot on Azure securely. Microsoft has an excellent set of easy-to-follow documentation on connecting a bot to Slack. I am only going to summarize the steps here.

chatbot-53-slack-bot.png

Once created, the Slack App contains a set of credentials.

chatbot-49-slack-app.png

Those Slack App credentials are then shared with the chatbot, in the Azure Slack Channel integration console. This ensures secure communication between your chatbot and your Slack App.

chatbot-48-slack-app

Part of creating your Slack App is configuring Event Subscriptions. The Microsoft documentation outlines six bot events that need to be configured. By subscribing to bot events, your app will be notified of user activities at the URL you specify.

chatbot-46-slack-app

You also configure a Bot User. Adding a Bot User allows you to assign a username for your bot and choose whether it is always shown as online. This is the username you will see within your Slack app.

chatbot-43-slack-app

Once the Slack App is built, credentials are exchanged, events are configured, and the Bot User is created, you finish by authorizing your Slack App to interact with users within your Slack Workspace. Below I am authorizing the Azure Tech Facts Bot Slack App to interact with users in my Programmatic Ponderings Slack Workspace.

chatbot-45-slack-app.png

Below we see the Azure Tech Facts Bot Slack App deployed to my Programmatic Ponderings Slack Workspace. The Slack App is shown in the Slack for Mac desktop app.

chatbot-50-slack-bot

Similarly, we see the same Azure Tech Facts Bot Slack App being used in the Slack iOS App.

Mobile Skype3.png

Conclusion

In this brief post, we saw how to develop a machine learning-based, LUIS-enabled chatbot using the Azure Bot Service and the BotBuilder SDK. We enhanced the bot's functionality with two of Azure's cloud services, Cosmos DB and Blob Storage. Once built, we integrated our chatbot with the Web Chat and Slack channels.

This was a very simple demonstration of a LUIS chatbot. The true power of intelligent bots comes from further integrating bots with Azure AI and Machine Learning Services, such as Azure's Cognitive Services. Additionally, the Azure cloud platform offers other more traditional cloud services, beyond Cosmos DB and Blob Storage, to extend the features and functionality of bots, such as messaging, IoT, Azure serverless Functions, and AKS-based microservices.

Azure is a trademark of Microsoft
The image in the background of Azure icon copyright: kran77 / 123RF Stock Photo

All opinions expressed in this post are my own and not necessarily the views of my current or past employers, their clients, or Microsoft.



Architecting Cloud-Optimized Apps with AKS (Azure’s Managed Kubernetes), Azure Service Bus, and Cosmos DB

An earlier post, Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ, demonstrated the use of a message-based, event-driven, decoupled architectural approach for communications between microservices, using Spring AMQP and RabbitMQ. This distributed computing model is known as eventual consistency. To paraphrase microservices.io, 'using an event-driven, eventually consistent approach, each service publishes an event whenever it updates its data. Other services subscribe to events. When an event is received, a service (subscriber) updates its data.'

That earlier post illustrated a fairly simple example application, the Voter API, consisting of a set of three Spring Boot microservices backed by MongoDB and RabbitMQ, and fronted by an API Gateway built with HAProxy. All API components were containerized using Docker and designed for use with Docker CE for AWS as the Container-as-a-Service (CaaS) platform.

Optimizing for Kubernetes on Azure

This post will demonstrate how a modern application, such as the Voter API, is optimized for Kubernetes in the Cloud (Kubernetes-as-a-Service), in this case, AKS, Azure’s new public preview of Managed Kubernetes for Azure Container Service. According to Microsoft, the goal of AKS is to simplify the deployment, management, and operations of Kubernetes. I wrote about AKS in detail, in my last post, First Impressions of AKS, Azure’s New Managed Kubernetes Container Service.

In addition to migrating to AKS, the Voter API will take advantage of additional enterprise-grade Azure’s resources, including Azure’s Service Bus and Cosmos DB, replacements for the Voter API’s RabbitMQ and MongoDB. There are several architectural options for the Voter API’s messaging and NoSQL data source requirements when moving to Azure.

  1. Keep Dockerized RabbitMQ and MongoDB – Easy to deploy to Kubernetes, but not easily scalable, highly-available, or manageable. Would require storage optimized Azure VMs for nodes, node affinity, and persistent storage for data.
  2. Replace with Cloud-based Non-Azure Equivalents – Use SaaS-based equivalents, such as CloudAMQP (RabbitMQ-as-a-Service) and MongoDB Atlas, which will provide scalability, high-availability, and manageability.
  3. Replace with Azure Service Bus and Cosmos DB – Provides all the advantages of SaaS-based equivalents and, additionally, as Azure resources, benefits from being in the Azure Cloud alongside AKS.

Source Code

The Kubernetes resource files and deployment scripts used in this post are all available on GitHub. This is the only project you need to clone to reproduce the AKS example in this post.

git clone \
  --branch master --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/azure-aks-sb-cosmosdb-demo.git

The Docker images for the three Spring microservices deployed to AKS, Voter, Candidate, and Election, are available on Docker Hub. Optionally, the source code, including Dockerfiles, for the Voter, Candidate, and Election microservices, as well as the Voter Client, is available on GitHub, in the kub-aks branch.

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/candidate-service.git

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/election-service.git

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/voter-service.git

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/voter-client.git

Azure Service Bus

To demonstrate the capabilities of Azure's Service Bus, the Voter API's Spring microservice source code has been re-written to work with Azure Service Bus instead of RabbitMQ. A future post will explore the microservices' messaging code. Note that a large application written for an easily portable technology, such as RabbitMQ or MongoDB, would likely remain on that technology, even if the application were lifted and shifted to the Cloud or moved between Cloud Service Providers (CSPs). Portability is something important to keep in mind when choosing modern technologies.

Service Bus is Azure's reliable cloud Messaging-as-a-Service (MaaS). Service Bus is an original Azure resource offering, available for several years. The core components of the Service Bus messaging infrastructure are queues, topics, and subscriptions. According to Microsoft, 'the primary difference is that topics support publish/subscribe capabilities that can be used for sophisticated content-based routing and delivery logic, including sending to multiple recipients.'

Since the three Voter API’s microservices are not required to produce messages for more than one other service consumer, Service Bus queues are sufficient, as opposed to a pub/sub model using Service Bus topics.

Cosmos DB

Cosmos DB, Microsoft’s globally distributed, multi-model database, offers throughput, latency, availability, and consistency guarantees with comprehensive service level agreements (SLAs). Ideal for the Voter API, Cosmos DB supports MongoDB’s data models through the MongoDB API, a MongoDB database service built on top of Cosmos DB. The MongoDB API is compatible with existing MongoDB libraries, drivers, tools, and applications. Therefore, there are no code changes required to convert the Voter API from MongoDB to Cosmos DB. I simply had to change the database connection string.

NGINX Ingress Controller

Although the Voter API’s HAProxy-based API Gateway could be deployed to AKS, it is not optimal for Kubernetes. Instead, the Voter API will use an NGINX-based Ingress Controller. NGINX will serve as an API Gateway, as HAProxy did, previously.

According to NGINX, 'an Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for your Kubernetes services. Such a load balancer usually exposes your services to clients outside of your Kubernetes cluster.'

An Ingress resource requires an Ingress Controller to function. Continuing from NGINX, 'an Ingress Controller is an application that monitors Ingress resources via the Kubernetes API and updates the configuration of a load balancer in case of any changes. Different load balancers require different Ingress controller implementations. In the case of software load balancers, such as NGINX, an Ingress controller is deployed in a pod along with a load balancer.'

There are currently two NGINX-based Ingress Controllers available, one from Kubernetes and one directly from NGINX. The two being roughly equal, for this post I chose the Kubernetes version, without RBAC (Kubernetes offers versions both with and without RBAC). RBAC should always be used for actual cluster security. There are several advantages to using either version of the NGINX Ingress Controller for Kubernetes, including Layer 4 TCP and UDP and Layer 7 HTTP load balancing, reverse proxying, ease of SSL termination, dynamically-configurable path-based rules, and support for multiple hostnames.

Azure Web App

Lastly, the Voter Client application, not really part of the Voter API, but useful for demonstration purposes, will be converted from a containerized application to an Azure Web App. Since it is not part of the Voter API, separating the Client application from AKS makes better architectural sense. Web Apps are a powerful, richly-featured, yet incredibly simple way to host applications and services on Azure. For more information on using Azure Web Apps, read my recent post, Developing Applications for the Cloud with Azure App Services and MongoDB Atlas.

Revised Component Architecture

Below is a simplified component diagram of the new architecture, including Azure Service Bus, Cosmos DB, and the NGINX Ingress Controller. The new architecture looks similar to the previous architecture, but as you will see, it is actually very different.

AKS-r8.png

Process Flow

To understand the role of each API component, let’s look at one of the event-driven, decoupled process flows, the creation of a new election candidate. In the simplified flow diagram below, an API consumer executes an HTTP POST request containing the new candidate object as JSON. The Candidate microservice receives the HTTP request and creates a new document in the Cosmos DB Voter database. A Spring RepositoryEventHandler within the Candidate microservice responds to the document creation and publishes a Create Candidate event message, containing the new candidate object as JSON, to the Azure Service Bus Candidate Queue.

Candidate_Producer

Independently, the Voter microservice is listening to the Candidate Queue. Whenever a new message is produced by the Candidate microservice, the Voter microservice retrieves the message off the queue. The Voter microservice then transforms the new candidate object contained in the incoming message to its own candidate data model and creates a new document in its own Voter database.

Voter_Consumer

The same process flows exist between the Election and the Candidate microservices. The Candidate microservice maintains current elections in its database, which are retrieved from the Election queue.
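
To make the candidate-creation flow concrete, a hypothetical version of the initial HTTP POST is sketched below; the endpoint path and JSON fields are assumptions for illustration, not the services' documented contract.

# create a new candidate via the API Gateway (path and fields are illustrative)
curl -X POST https://api.voter-demo.com/candidate/candidates \
  -H "Content-Type: application/json" \
  -d '{"firstName": "Jane", "lastName": "Doe", "politicalParty": "Independent", "election": "2016 Presidential Election"}'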

Data Models

It is useful to understand that the Candidate microservice's candidate domain model is not necessarily identical to the Voter microservice's candidate domain model. Each microservice may choose to maintain its own representation of a vote, a candidate, and an election. The Voter service transforms the new candidate object in the incoming message based on its own needs. In this case, the Voter microservice is only interested in a subset of the total fields in the Candidate microservice's model. This is the beauty of decoupling microservices, their domain models, and their datastores.

Other Events

The versions of the Voter API microservices used for this post only support Election Created events and Candidate Created events. They do not handle Delete or Update events, which would be necessary to be fully functional. For example, if a candidate withdraws from an election, the Voter service would need to be notified so no one places votes for that candidate. This would normally happen through a Candidate Delete or Candidate Update event.

Provisioning Azure Service Bus

First, the Azure Service Bus is provisioned. Provisioning the Service Bus may be accomplished using several different methods, including manually using the Azure Portal or programmatically using Azure Resource Manager (ARM) with PowerShell or Terraform. I chose to provision the Azure Service Bus and the two queues using the Azure Portal for expediency. I chose the Basic Service Bus Tier of service, of which there are three tiers, Basic, Standard, and Premium.

Azure_006_Full_ServiceBus

The application requires two queues: the candidate.queue and the election.queue.

Azure_008_ServiceBus
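
For repeatability, the same provisioning could be scripted with the Azure CLI; a sketch with placeholder names is shown below (the servicebus commands may require a recent CLI version).

# create a Basic tier Service Bus namespace
az servicebus namespace create \
  --name namespace_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --sku Basic

# create the two queues used by the microservices
az servicebus queue create \
  --name candidate.queue \
  --namespace-name namespace_name_goes_here \
  --resource-group resource_group_name_goes_here

az servicebus queue create \
  --name election.queue \
  --namespace-name namespace_name_goes_here \
  --resource-group resource_group_name_goes_here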

Provisioning Cosmos DB

Next, Cosmos DB is provisioned. Like Azure Service Bus, Cosmos DB may be provisioned using several methods, including manually using the Azure Portal, programmatically using Azure Resource Manager (ARM) with PowerShell or Terraform, or using the Azure CLI, which was my choice.

az cosmosdb create \
  --name cosmosdb_instance_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --locations "East US"=0 \
  --kind MongoDB

The post’s Cosmos DB instance exists within the single East US Region, with no failover. In a real Production environment, you would configure Cosmos DB with multi-region failover. I chose MongoDB as the type of Cosmos DB database account to create. The allowed values are GlobalDocumentDB, MongoDB, Parse. All other settings were left to the default values.

The three Spring microservices each have their own database. You do not have to create the databases in advance of consuming the Voter API. The databases and the database collections will be automatically created when new documents are first inserted by the microservices. Below, the three databases and their collections have been created and populated with documents.

Azure_001_CosmosDB

The GitHub project repository also contains three shell scripts to generate sample vote, candidate, and election documents. The scripts will delete any previous documents from the database collections and generate new sets of sample documents. To use, you will have to update the scripts with your own Voter API URL.

AKS_012_AddVotes2

MongoDB Aggregation Pipeline

Each of the three Spring microservices uses Spring Data MongoDB, which takes advantage of MongoDB's Aggregation Framework. According to MongoDB, 'the aggregation framework is modeled on the concept of data processing pipelines. Documents enter a multi-stage pipeline that transforms the documents into an aggregated result.' Below is an example of aggregation from the Candidate microservice's VoterController class.

Aggregation aggregation = Aggregation.newAggregation(
    match(Criteria.where("election").is(election)),
    group("candidate").count().as("votes"),
    project("votes").and("candidate").previousOperation(),
    sort(Sort.Direction.DESC, "votes")
);

To use MongoDB’s aggregation framework with Cosmos DB, it is currently necessary to activate the MongoDB Aggregation Pipeline Preview Feature of Cosmos DB. The feature can be activated from the Azure Portal, as shown below.

Azure_002_CosmosDB

Cosmos DB Emulator

Be warned, Cosmos DB can be very expensive, even without database traffic or any Production-grade bells and whistles. Be careful when spinning up instances on Azure for learning purposes, the cost adds up quickly! In less than ten days, while writing this post, my cost was almost US$100 for the Voter API’s Cosmos DB instance.

I strongly recommend downloading the free Azure Cosmos DB Emulator to develop and test applications from your local machine. Although certainly not as convenient, it will save you the cost of developing for Cosmos DB directly on Azure.

With Cosmos DB, you pay for reserved throughput provisioned and data stored in containers (a collection of documents or a table or a graph). Yes, that’s right, Azure charges you per MongoDB collection, not even per database. Azure Cosmos DB’s current pricing model seems less than ideal for microservice architectures, each with their own database instance.

By default, the reserved throughput, billed as Request Units (RU) per second, or RU/s, is set to 1,000 RU/s per collection. For development and testing, you can reduce each collection to a minimum of 400 RU/s. The Voter API creates five collections at 1,000 RU/s, or 5,000 RU/s total. Reducing this to a total of 2,000 RU/s makes Cosmos DB marginally more affordable to explore.

Azure_003_CosmosDB.PNG

Building the AKS Cluster

An existing Azure Resource Group is required for AKS. I chose to use the latest available version of Kubernetes, 1.8.2.

# login to azure
az login \
  --username your_username  \
  --password your_password

# create resource group
az group create \
  --resource-group resource_group_name_goes_here \
  --location eastus

# create aks cluster
az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --ssh-key-value path_to_your_public_key \
  --kubernetes-version 1.8.2

# get credentials to access aks cluster
az aks get-credentials \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here

# display cluster's worker nodes
kubectl get nodes --output=wide

By default, AKS will provision a three-node Kubernetes cluster using Azure's Standard D1 v2 Virtual Machines. According to Microsoft, 'D series VMs are general purpose VM sizes that provide a balanced CPU-to-memory ratio. They are ideal for testing and development, small to medium databases, and low to medium traffic web servers.' Azure D1 v2 VMs are based on Linux OS images, currently Debian 8 (Jessie), with 1 vCPU and 3.5 GB of memory. By default with AKS, each VM receives 30 GiB of Standard HDD attached storage.

AKS_002_CreateCluster
You should always select the type and quantity of the cluster's VMs and their attached storage, optimized for estimated traffic volumes and the specific workloads you are running. This can be done using the --node-count, --node-vm-size, and --node-osdisk-size arguments with the az aks create command, as sketched below.
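
For example, a hypothetical cluster sized for heavier workloads might be created as follows; the VM size and disk values are illustrative only.

az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --ssh-key-value path_to_your_public_key \
  --kubernetes-version 1.8.2 \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2 \
  --node-osdisk-size 64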

Deployment

The Voter API resources are deployed to its own Kubernetes Namespace, voter-api. The NGINX Ingress Controller resources are deployed to a different namespace, ingress-nginx. Separate namespaces help organize individual Kubernetes resources and separate different concerns.

Voter API

First, the voter-api namespace is created. Then, five required Kubernetes Secrets are created within the namespace. These secrets all contain sensitive information, such as passwords, that should not be shared. There is one secret for each of the three Cosmos DB database connection strings, one secret for the Azure Service Bus connection string, and one secret for the Let’s Encrypt SSL/TLS certificate and private key, used for secure HTTPS access to the Voter API.

AKS_004_CreateVoterAPI
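
A sketch of those first steps with kubectl is shown below; the secret and key names mirror the ones referenced by the pod resource files in the next section, and the connection string values are placeholders. The remaining Cosmos DB secrets follow the same pattern.

kubectl create namespace voter-api

# one secret per connection string
kubectl create secret generic azure-service-bus \
  --namespace voter-api \
  --from-literal=connection-string='<service_bus_connection_string>'

kubectl create secret generic azure-cosmosdb-voter \
  --namespace voter-api \
  --from-literal=connection-string='<cosmos_db_voter_connection_string>'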

Secrets

The Voter API’s secrets are used to populate environment variables within the pod’s containers. The environment variables are then available for use within the containers. Below is a snippet of the Voter pods resource file showing how the Cosmos DB and Service Bus connection strings secrets are used to populate environment variables.

env:
  - name: AZURE_SERVICE_BUS_CONNECTION_STRING
    valueFrom:
      secretKeyRef:
        name: azure-service-bus
        key: connection-string
  - name: SPRING_DATA_MONGODB_URI
    valueFrom:
      secretKeyRef:
        name: azure-cosmosdb-voter
        key: connection-string

Shown below, the Cosmos DB and Service Bus connection strings secrets have been injected into the Voter container and made available as environment variables to the microservice’s executable JAR file on start-up. As environment variables, the secrets are visible in plain text. Access to containers should be tightly controlled through Kubernetes RBAC and Azure AD, to ensure sensitive information, such as secrets, remain confidential.

AKS_014_EnvVars

Next, the three Kubernetes ReplicaSet resources, corresponding to the three Spring microservices, are created using Deployment controllers. According to Kubernetes, a Deployment that configures a ReplicaSet is now the recommended way to set up replication. The Deployments specify three replicas of each of the three Spring Services, resulting in a total of nine Kubernetes Pods.

Each pod, by default, will be scheduled on a different node if possible. According to Kubernetes, ‘the scheduler will automatically do a reasonable placement (e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.).’ Note below how each of the three microservice’s three replicas has been scheduled on a different node in the three-node AKS cluster.

AKS_005B_CreateVoterPods
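
You can verify the pod-to-node distribution yourself with a single kubectl command; the NODE column of the output shows where each replica was scheduled.

kubectl get pods --namespace voter-api --output wide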

Next, the three corresponding Kubernetes ClusterIP-type Services are created. And lastly, the Kubernetes Ingress is created. According to Kubernetes, the Ingress resource is an API object that manages external access to the services in a cluster, typically HTTP. Ingress provides load balancing, SSL termination, and name-based virtual hosting.

The Ingress configuration contains the routing rules used with the NGINX Ingress Controller. Shown below are the routing rules for each of the three microservices within the Voter API. Incoming API requests are routed to the appropriate pod and service port by NGINX.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: voter-ingress
  namespace: voter-api
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - api.voter-demo.com
    secretName: api-voter-demo-secret
  rules:
  - http:
      paths:
      - path: /candidate
        backend:
          serviceName: candidate
          servicePort: 8080
      - path: /election
        backend:
          serviceName: election
          servicePort: 8080
      - path: /voter
        backend:
          serviceName: voter
          servicePort: 8080

The screengrab below shows all of the Voter API resources created on AKS.

AKS_005B_CreateVoterAPI
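
The same inventory is available from the command line, for example:

kubectl get deployments,pods,services,ingress --namespace voter-api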

NGINX Ingress Controller

After completing the deployment of the Voter API, the NGINX Ingress Controller is created. It starts with creating the ingress-nginx namespace. Next, the NGINX Ingress Controller is created, consisting of the NGINX Ingress Controller, three Kubernetes ConfigMap resources, and a default back-end application. The Controller and backend each have their own Service resources. Like the Voter API, each has three replicas, for a total of six pods. Together, the Ingress resource and NGINX Ingress Controller manage traffic to the Spring microservices.

The screengrab below shows all of the NGINX Ingress Controller resources created on AKS.

AKS_015_NGINX_Resources.PNG

The NGINX Ingress Controller Service, shown above, has an external public IP address associated with it. This is because that Service is of type LoadBalancer. External requests to the Voter API will be routed through the NGINX Ingress Controller, on this IP address.

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https

If you are only using HTTPS, not HTTP, then the references to HTTP and port 80 in the Service configuration above are unnecessary. The NGINX Ingress Controller's resources are explained in detail in the GitHub documentation, along with further configuration instructions.

DNS

To provide convenient access to the Voter API and the Voter Client, my domain, voter-demo.com, is associated with the public IP address associated with the Voter API Ingress Controller and with the public IP address associated with the Voter Client Azure Web App. DNS configuration is done through Azure’s DNS Zone resource.

AKS_008_FinalDNS

The two TXT type records might not look as familiar as the SOA, NS, and A type records. The TXT records are required to associate the domain entries with the Voter Client Azure Web App. Browsing to http://www.voter-demo.com or simply http://voter-demo.com brings up the Voter Client.

VoterClientAngular5_small.png

The Client sends and receives data via the Voter API, available securely at https://api.voter-demo.com.
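
Although I configured DNS through the Portal, the A record could equally be scripted with the Azure CLI; a sketch with placeholder values, assuming the DNS Zone already exists, is shown below.

# point api.voter-demo.com at the NGINX Ingress Controller's public IP
az network dns record-set a add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name api \
  --ipv4-address nginx_ingress_public_ip_goes_here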

Routing API Requests

With the Pods, Services, Ingress, and NGINX Ingress Controller created and configured, as well as the Azure Layer 4 Load Balancer and DNS Zone, HTTP requests from API consumers are properly and securely routed to the appropriate microservices. In the example below, three back-to-back requests are made to the voter/info API endpoint. HTTP requests are properly routed to one of the three Voter pod replicas using the default round-robin algorithm, as proven by observing the different hostnames (pod names) and IP addresses (private pod IPs) in each HTTP response.

AKS_013B_LoadBalancingReplicaSets.PNG
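
To reproduce the test, a simple loop like the one below works; each response should report a different pod hostname and private pod IP.

# three back-to-back requests to the voter/info endpoint
for i in 1 2 3; do
  curl -s https://api.voter-demo.com/voter/info
  echo
done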

Final Architecture

Shown below is the final Voter API Azure architecture. To simplify the diagram, I have deliberately left out the three microservice’s ClusterIP-type Services, the three default back-end application pods, and the default back-end application’s ClusterIP-type Service. All resources shown below are within the single East US Azure region, except DNS, which is a global resource.

kub-aks-v10-small.png

Shown below is the new Azure Resource Group created by Azure during the AKS provisioning process. The Resource Group contains the various resources required to support the AKS cluster, NGINX Ingress Controller, and the Voter API. Necessary Azure resources were automatically provisioned when I provisioned AKS and when I created the new Voter API and NGINX resources.

AKS_006_FinalRG

In addition to the Resource Group above, the original Resource Group contains the AKS Container Service resource itself, Service Bus, Cosmos DB, and the DNS Zone resource.

AKS_007_FinalRG

The Voter Client Web App, consisting of the Azure App Service and App Service plan resource, is located in a third, separate Resource Group, not shown here.

Cleaning Up AKS

A nice feature of AKS is that running an az aks delete command will delete all the Azure resources created as part of provisioning AKS, the API, and the Ingress Controller. You will have to delete the Cosmos DB, Service Bus, and DNS Zone resources separately.

az aks delete \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here

Conclusion

Taking advantage of Kubernetes with AKS, and the array of Azure’s enterprise-grade resources, the Voter API was shifted from a simple Docker architecture to a production-ready solution. The Voter API is now easier to manage, horizontally scalable, fault-tolerant, and marginally more secure. It is capable of reliably supporting dozens more microservices, with multiple replicas. The Voter API will handle a high volume of data transactions and event messages.

There is much more that needs to be done to productionalize the Voter API on AKS, including:

  • Add multi-region failover of Cosmos DB
  • Upgrade to Service Bus Standard or Premium Tier
  • Optimize Azure VMs and storage for anticipated traffic volumes and application-specific workloads
  • Implement Kubernetes RBAC
  • Add Monitoring, logging, and alerting with Envoy or similar
  • Secure end-to-end TLS communications with Istio or similar
  • Secure the API with OAuth and Azure AD
  • Automate everything with DevOps – AKS provisioning, testing code, creating resources, updating microservices, and managing data

All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.



First Impressions of AKS, Azure’s New Managed Kubernetes Container Service

Kubernetes as a Service

On October 24, 2017, less than a month prior to writing this post, Microsoft released the public preview of Managed Kubernetes for Azure Container Service (AKS). According to Microsoft, the goal of AKS is to simplify the deployment, management, and operations of Kubernetes. According to Gabe Monroy, PM Lead, Containers @ Microsoft Azure, in a blog post, AKS 'features an Azure-hosted control plane, automated upgrades, self-healing, easy scaling.' Monroy goes on to say, 'with AKS, customers get the benefit of open source Kubernetes without complexity and operational overhead.'

Unquestionably, Kubernetes has become the leading Container-as-a-Service (CaaS) choice, at least for now. Along with the release of AKS by Microsoft, there have been other recent announcements which reinforce Kubernetes' dominance. In late September, Rancher Labs announced the release of Rancher 2.0. According to Rancher, Rancher 2.0 would be based on Kubernetes. In mid-October, at DockerCon Europe 2017, Docker announced they were integrating Kubernetes into the Docker platform. Even AWS seems to be warming up to Kubernetes, despite their own ECS; according to sources, there are rumors AWS will announce a Kubernetes offering at AWS re:Invent 2017, starting a week from now.

Previewing AKS

Being a big fan of both Azure and Kubernetes, I decided to give AKS a try. I chose to deploy an existing, simple, multi-tier web application, which I had used in several previous posts, including Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ. All the code used for this post is available on GitHub.

Sample Application

The application, the Voter application, is composed of an AngularJS frontend client-side UI, and two Java Spring Boot microservices, both backed by individual MongoDB databases, and fronted with an HAProxy-based API Gateway. The AngularJS UI calls the API Gateway, which in turn calls the Spring services. The two microservices communicate with each other using HTTP-based inter-process communication (IPC). Although I would prefer event-based service-to-service IPC, HTTP-based IPC was simpler to implement, for this post.

AKS v4

Interestingly, the Voter application was designed using Docker Community Edition for Mac and deployed to AWS using Docker Community Edition for AWS. Not only would this be my chance to preview AKS, but also an opportunity to compare the ease of developing for Docker CE on AWS using a Mac, to developing for Kubernetes with AKS using Docker Community Edition for Windows.

Required Software

In order to develop for AKS on my Windows 10 Enterprise workstation, I first made sure I had the latest copies of the following software:

If you are following along with the post, make sure you have the latest version of the Azure CLI, minimally 2.0.21, according to the Azure CLI release notes. Also, I happen to be running the latest version of Docker CE from the Edge Channel. However, either channel’s latest release of Docker CE for Windows should work for this post. Using PowerShell is optional. I prefer PowerShell over working from the Windows Command Prompt, if for nothing else than to preserve my command history, by default.

AKS_V2_04

Kubernetes Resources with Kompose

Originally developed for Docker CE, the Voter application stack was defined in a single Docker Compose file.


version: '3'
services:
  mongodb:
    image: mongo:latest
    command:
      - --smallfiles
    hostname: mongodb
    ports:
      - 27017:27017/tcp
    networks:
      - voter_overlay_net
    volumes:
      - voter_data_vol:/data/db
  candidate:
    image: garystafford/candidate-service:0.2.28
    depends_on:
      - mongodb
    hostname: candidate
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
  voter:
    image: garystafford/voter-service:0.2.104
    depends_on:
      - mongodb
      - candidate
    hostname: voter
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
  client:
    image: garystafford/voter-client:0.2.44
    depends_on:
      - voter
    hostname: client
    ports:
      - 80:8080/tcp
    networks:
      - voter_overlay_net
  gateway:
    image: garystafford/voter-api-gateway:0.2.24
    depends_on:
      - voter
    hostname: gateway
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
networks:
  voter_overlay_net:
    driver: overlay
volumes:
  voter_data_vol:

To work on AKS, the application stack’s configuration needs to be reproduced as Kubernetes configuration files. Instead of writing the configuration files manually, I chose to use kompose. Kompose is described on its website as ‘a conversion tool for Docker Compose to container orchestrators such as Kubernetes.’ Using kompose, I was able to automatically convert the Docker Compose file into analogous Kubernetes resource configuration files.

kompose convert -f docker-compose.yml

Each Docker service in the Docker Compose file was translated into a separate Kubernetes Deployment resource configuration file, as well as a corresponding Service resource configuration file.

AKS_Demo_08

For the AngularJS Client Service and the HAProxy API Gateway Service, I had to modify the Service configuration files to switch the Service type to a Load Balancer (type: LoadBalancer). Being a Load Balancer, Kubernetes will assign a publicly accessible IP address to each Service; the reasons for which are explained later in the post.

The MongoDB service requires a persistent storage volume. To accomplish this with Kubernetes, kompose created a PersistentVolumeClaims resource configuration file. I did have to create a corresponding PersistentVolume resource configuration file. It was also necessary to modify the PersistentVolumeClaims resource configuration file, specifying the Storage Class Name as manual, to correspond to the AKS Storage Class configuration (storageClassName: manual).


kind: PersistentVolume
apiVersion: v1
metadata:
  name: voter-data-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
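
For completeness, the modified PersistentVolumeClaim might look similar to the following sketch; the claim name and storage request are assumptions based on the PersistentVolume above.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: voter-data-vol
spec:
  storageClassName: manual # matches the PersistentVolume's Storage Class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi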

From the original Docker Compose file, containing five Docker services, I ended up with a dozen individual Kubernetes resource configuration files. Individual configuration files are optimal for fine-grain management of Kubernetes resources. The Docker Compose file and the Kubernetes resource configuration files are included in the GitHub project.

git clone \
  --branch master --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/azure-aks-demo.git

Creating AKS Resources

New AKS Feature Flag

According to Microsoft, while AKS is still in preview, creating new clusters requires a feature flag on your subscription, enabled by registering the Microsoft.ContainerService resource provider.

az provider register -n Microsoft.ContainerService

Using a brand new Azure account for this demo, I also needed to activate two additional feature flags.

az provider register -n Microsoft.Network
az provider register -n Microsoft.Compute

If you are missing required feature flags, you will see errors similar to the one below.

Operation failed with status: ’Bad Request’. Details: Required resource provider registrations Microsoft.Compute,Microsoft.Network are missing.
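
Provider registration can take several minutes. Although not strictly required, you can confirm a provider’s registration state before proceeding, for example:

az provider show \
  --namespace Microsoft.ContainerService \
  --query "registrationState"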

Resource Group

AKS requires an Azure Resource Group. I chose to create a new Resource Group using the Azure CLI.

az group create \
  --resource-group resource_group_name_goes_here \
  --location eastus

AKS_Demo_01

New Kubernetes Cluster

Using the aks feature of Azure CLI version 2.0.21 or later, I provisioned a new Kubernetes cluster. By default, Azure creates a 3-node cluster; you can override the default using the --node-count parameter. I chose one node. The Kubernetes version is also configurable, using the --kubernetes-version parameter. I selected the latest Kubernetes version available with AKS, 1.8.2.

az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --node-count 1 \
  --generate-ssh-keys \
  --kubernetes-version 1.8.2

AKS_Demo_03
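
One step worth noting: before kubectl can communicate with the new cluster, the cluster’s credentials must be merged into your local kubeconfig. Assuming the same placeholder names as above:

az aks get-credentials \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here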

The newly created Azure Resource Group and AKS Kubernetes Cluster were both then visible on the Azure Portal.

AKS_Demo_04b

In addition to the new Resource Group I created, Azure also created a second Resource Group containing seven Azure resources. These Azure resources include a Virtual Machine (the single Kubernetes node), a Network Security Group, a Network Interface, a Virtual Network, a Route Table, a Disk, and an Availability Set.

AKS_Demo_05b

With AKS up and running, I used another Azure CLI command to create a proxy connection to the Kubernetes Dashboard, which was deployed automatically and was running within the new AKS Cluster. The Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters.

az aks browse \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here

AKS_Demo_23

Although no applications had been deployed to AKS yet, several Kubernetes components were already running within the AKS Cluster. Components running within the kube-system Namespace included heapster, kube-dns, kubernetes-dashboard, kube-proxy, kube-svc-redirect, and tunnelfront.

AKS_Demo_06B

Deploying the Application

MongoDB should be deployed first, since both the Voter and Candidate microservices depend on it. MongoDB is composed of four Kubernetes resources: a Deployment, a Service, a PersistentVolumeClaim, and a PersistentVolume. I used kubectl, the command-line interface for running commands against Kubernetes clusters, to create the four MongoDB resources from the configuration files.

kubectl create \
  -f voter-data-vol-persistentvolume.yaml \
  -f voter-data-vol-persistentvolumeclaim.yaml \
  -f mongodb-deployment.yaml \
  -f mongodb-service.yaml

AKS_Demo_09

After MongoDB was deployed and running, I created the four remaining Deployment resources, Client, Gateway, Voter, and Candidate, from the Deployment resource configuration files. According to Kubernetes, ‘a Deployment controller provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate.’
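
Assuming kompose’s generated file names, which follow the same pattern as the MongoDB files above, the command would look similar to the following:

kubectl create \
  -f client-deployment.yaml \
  -f gateway-deployment.yaml \
  -f voter-deployment.yaml \
  -f candidate-deployment.yaml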

AKS_Demo_10

Lastly, I created the remaining Service resources from the Service resource configuration files. According to Kubernetes, ‘a Service is an abstraction which defines a logical set of Pods and a policy by which to access them.’
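
Again assuming kompose’s generated file names, the command would be similar to:

kubectl create \
  -f client-service.yaml \
  -f gateway-service.yaml \
  -f voter-service.yaml \
  -f candidate-service.yaml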

AKS_Demo_11

Switching back to the Kubernetes Dashboard, the Voter application components were now visible.

AKS_Demo_13

There were five Kubernetes Pods, one for each application component. Since there was only one Node in the Kubernetes Cluster, all five Pods were deployed to the same Node. There were also five corresponding Kubernetes Deployments.
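
The same information shown in the Dashboard is also available from the command line, for example:

kubectl get pods,deployments,services -o wide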

AKS_Demo_14

Similarly, there were five corresponding Kubernetes ReplicaSets, the next-generation Replication Controller, and five corresponding Kubernetes Services. Note the Gateway and Client Services each have an External Endpoint (External IP). These IPs were created as a result of switching to the LoadBalancer Service type in their Service resource configuration files, as mentioned earlier.

AKS_Demo_15

Lastly, note the Persistent Volume Claim for MongoDB, which had been successfully bound.

AKS_Demo_16

Switching back to the Azure Portal, following the application deployment, there were now three additional resources in the AKS Resource Group: a new Azure Load Balancer and two new Public IP Addresses. The Load Balancer balances traffic to the Client and Gateway Services, which both have public IP addresses.

AKS_Demo_21b

To confirm the Gateway, Voter, and Candidate Services were reachable, using the public IP address of the Gateway Service, I browsed to HAProxy’s Statistics web page. Note the two backends, candidate and voter. The green color means HAProxy was able to successfully connect to both of these Services.

AKS_Demo_12

Accessing the Application

The Voter application’s AngularJS UI frontend can be accessed using the Client Service’s public IP address. However, this would not be very user-friendly. Even if I brought up the UI using the public IP, the UI would be unable to connect to the HAProxy API Gateway and, subsequently, to the Voter or Candidate Services. Based on its current configuration, the Client expects to find the Gateway at api.voter-demo.com:8080.

To make accessing the Client more user-friendly, and to ensure the Client has access to the Gateway, I provisioned an Azure DNS Zone resource for my domain, voter-demo.com. I assigned the DNS Zone to the AKS Resource Group.

AKS_Demo_20b

Within the new DNS Zone, I created three DNS records. The first, an A record, associated voter-demo.com with the public IP address of the Client Service. I added a second A record for the www subdomain, also associating it with the public IP address of the Client Service. The third A record associated the api subdomain with the public IP address of the Gateway Service.
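
The same three records could also be created with the Azure CLI; the IP address values below are placeholders for your own Client and Gateway Service IPs.

az network dns record-set a add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name "@" \
  --ipv4-address client_service_ip_goes_here

az network dns record-set a add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name www \
  --ipv4-address client_service_ip_goes_here

az network dns record-set a add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name api \
  --ipv4-address gateway_service_ip_goes_here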

AKS_Demo_18b

At a high level, the Voter application’s routing architecture looks as follows. The client’s requests to the primary domain or to the api subdomain are resolved to one of the two public IP addresses configured in the load balancer’s frontend. The requests are passed to the load balancer’s backend pool, containing the single Azure VM (the single Kubernetes node), and on to the client or gateway Kubernetes Service. From there, requests are routed to the appropriate Kubernetes Pods, containing the containerized application components, client or gateway.

kub-aks

Browsing to either http://voter-demo.com or http://www.voter-demo.com should bring up the Voter app UI (oh look, Hillary won this time…).

Mobile_App_View

Using Chrome’s Developer Tools, observe that when a new vote is placed, an HTTP POST is made to the Gateway, on the /voter/votes endpoint, http://api.voter-demo.com:8080/voter/votes. The Gateway then proxies this request to the Voter Service, at http://voter:8080/voter/votes. Since the Gateway and Voter Services both run within the same Cluster, the Gateway is able to address the Voter Service by name, using Kubernetes’ kube-dns.
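
You can verify this in-cluster name resolution yourself with a command similar to the following; the Pod name is a placeholder, and this assumes the container image includes nslookup.

kubectl exec -it gateway_pod_name_goes_here -- nslookup voter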

AKS_Demo_19b

Conclusion

In the past, I have developed, deployed, and managed containerized applications using Rancher, AWS, AWS ECS, native Kubernetes, Red Hat OpenShift, Docker Enterprise Edition, and Docker Community Edition. Based on my experience, and given my limited testing with Azure’s public preview of AKS, I am very impressed. Creating the Kubernetes Cluster could not have been easier. Scaling the Cluster, adding Persistent Volumes, and upgrading Kubernetes are equally easy. Conveniently, AKS integrates with other Kubernetes tools, like kubectl, kompose, and Helm.

I did run into some minor issues with the AKS Preview, such as being unable to connect to an earlier Cluster after upgrading from the default Kubernetes version to 1.8.2. I also experienced frequent disconnects when proxying to the Kubernetes Dashboard. I am sure the AKS Preview bugs will be worked out by the time AKS is officially released.

In addition to the many advantages of Kubernetes as a CaaS, a huge advantage of using AKS is the ability to easily integrate Azure’s many enterprise-grade compute, networking, database, caching, storage, and messaging resources. It would require minimal effort to swap out the Voter application’s single containerized version of MongoDB for a highly performant and available instance of Azure Cosmos DB. Similarly, it would be relatively easy to swap out the single containerized version of HAProxy for a fully-featured and secure instance of Azure API Management. The current version of the Voter application relies on RabbitMQ for service-to-service IPC, versus this earlier version’s reliance on HTTP-based IPC; it would be fairly simple to swap RabbitMQ for Azure Service Bus.

Lastly, AKS easily integrates with leading Development and DevOps tooling and processes. Building, managing, and deploying applications to AKS is possible with Visual Studio, VSTS, Jenkins, Terraform, and Chef, according to Microsoft.

References

A few good references to get started with AKS:

All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.


Developing Applications for the Cloud with Azure App Services and MongoDB Atlas

Shift Left Cloud

The continued growth of compute services from leading Cloud Service Providers (CSPs) like Microsoft, Amazon, and Google is transforming the architecture of modern software applications, as well as the software development lifecycle (SDLC). Self-service access to fully-managed, reasonably-priced, secure compute has significantly increased developer productivity. At the same time, cloud-based access to cutting-edge technologies, like Artificial Intelligence (AI), Internet of Things (IoT), Machine Learning, and Data Analytics, has accelerated the capabilities of modern applications. Finally, as CSPs become increasingly platform-agnostic, Developers are no longer limited to a single technology stack or operating system. Today, Developers are solving complex problems with multi-cloud, multi-OS polyglot solutions.

Developers now leverage the Cloud from the very start of the software development process; shift left Cloud, if you will. Developers are no longer limited to building and testing software on local workstations or on-premise servers, then throwing it over the wall to Operations for deployment to the Cloud. Developers using Azure, AWS, and GCP develop, build, test, and deploy their code directly to the Cloud. Existing organizations are rapidly moving development environments from on-premise to the Cloud. New organizations are born in the Cloud, without the burden of legacy on-premise data centers and servers under desks to manage.

Example Application

To demonstrate the ease of developing a modern application for the Cloud, let’s explore a simple API-based, NoSQL-backed web application. The application, The .NET Diner, simulates a rudimentary restaurant menu ordering interface. It consists of a single-page application (SPA) and two microservices backed by MongoDB. For simplicity, there is no API Gateway between the UI and the two services, as there normally would be. An earlier version of this application was used in two previous posts, including Cloud-based Continuous Integration and Deployment for .NET Development.

The original restaurant order application was written with jQuery and RESTful .NET WCF Services. The new application, used in this post, has been completely re-written and modernized. The web-based user interface (UI) is written with Google’s Angular 4 framework using TypeScript. The UI relies on a microservices-based API, built with C# using Microsoft’s Web API 2 and .NET 4.7. The services rely on MongoDB for data persistence.

RestaurantDemoAPI

All code for this project is available on GitHub within two projects, one for the Angular UI and another for the C# services. The entire application can easily be built and run locally on Windows using MongoDB Community Edition. Alternatively, to run the application in the Cloud, you will require an Azure and a MongoDB Atlas account.

This post is primarily about the development experience. For brevity, the post will not delve into security, DevOps practices for CI/CD, and the complexities of staging and releasing code to Production.

Cross-Platform Development

The API, consisting of a set of C# microservices, was developed with Microsoft Visual Studio Community 2017 on Windows 10. Visual Studio touts itself as a full-featured Integrated Development Environment (IDE) for Android, iOS, Windows, web, and cloud. Visual Studio is easily integrated with Azure, AWS, and Google, through the use of Extensions. Visual Studio is an ideal IDE for cloud-centric application development.

Capture_VS2017_1

To simulate a typical mixed-platform development environment, the Angular UI front-end was developed with JetBrains WebStorm 2017 on Mac OS X. WebStorm touts itself as a powerful IDE for modern JavaScript development.

Webstorm

Other tools used to develop the application include Git and GitHub for source code, MongoDB Community Edition for local database development, and Postman for API development and testing, both locally and on Azure. All the development tools used in the post are cross-platform. Versions of WebStorm, Visual Studio, MongoDB, Postman, Git, Node.js, npm, and Bash are all available for Mac, Windows, and Linux. Cross-platform flexibility is key when developing modern multi-OS polyglot applications.

Postman

Postman was used to build, test, and document the application’s API. Postman is an excellent choice for developing RESTful APIs. With Postman, you define Collections of HTTP requests for each of your APIs. You then define Environments, such as Development, Test, and Production, against which you will execute the Collections of HTTP requests. Each environment consists of environment-specific variables. Those variables can be used to define API URLs and as request parameters.

Not only can you define static variables; Postman’s runtime, built on Node.js, is also scriptable using JavaScript, allowing you to programmatically set dynamic variables based on the results of HTTP requests and responses, as demonstrated below.

Postman2

Postman also allows you to write and run automated API integration tests, as well as perform load testing, as shown below.

Postman3

Azure App Services

The Angular browser-based UI and the C# microservices will be deployed to Azure using the Azure App Service. Azure App Service is nearly identical to AWS Elastic Beanstalk and Google App Engine. According to Microsoft, Azure App Service allows Developers to quickly build, deploy, and scale enterprise-grade web, mobile, and API apps, running on Windows or Linux, using .NET, .NET Core, Java, Ruby, Node.js, PHP, and Python.

App Service is a fully-managed, turn-key platform. Azure takes care of infrastructure maintenance and load balancing. App Service easily integrates with other Azure services, such as API Management, Queue Storage, Azure Active Directory (AD), Cosmos DB, and Application Insights. Microsoft suggests evaluating the following four criteria when considering Azure App Services:

  • You want to deploy a web application that’s accessible through the Internet.
  • You want to automatically scale your web application according to demand without needing to redeploy.
  • You don’t want to maintain server infrastructure (including software updates).
  • You don’t need any machine-level customizations on the servers that host your web application.

There are currently four types of Azure App Services: Web Apps, Web Apps for Containers, Mobile Apps, and API Apps. The application in this post uses Azure Web Apps for the Angular browser-based UI and Azure API Apps for the C# microservices.

MongoDB Atlas

Each of the C# microservices has a separate MongoDB database. In the Cloud, the services use MongoDB Atlas, a secure, highly-available, and scalable cloud-hosted MongoDB service. Cloud-based databases, like Atlas, are often referred to as Database as a Service (DBaaS). Atlas is a Cloud-based alternative to traditional on-premise databases, as well as to equivalent CSP-based solutions, such as Amazon DynamoDB, GCP Cloud Bigtable, and Azure Cosmos DB.

Atlas is an example of a SaaS provider that offers a single service, or a small set of closely related services, as an alternative to the big CSPs’ equivalent services. Similar providers in this category include CloudAMQP (RabbitMQ as a Service), ClearDB (MySQL DBaaS), Akamai (Content Delivery Network), and Oracle Database Cloud Service (Oracle Database, RAC, and Exadata as a Service). Many of these providers, such as Atlas, are themselves hosted on AWS or other CSPs.

There are three pricing levels for MongoDB Atlas: Free, Essential, and Professional. To follow along with this post, the Free level is sufficient. According to MongoDB, with the Free account level, you get 512 MB of storage with shared RAM, a highly-available 3-node replica set, end-to-end encryption, secure authentication, fully managed upgrades, monitoring and alerts, and a management API. Atlas provides the ability to upgrade your account level and change CSP specifics at any time.

Once you register for an Atlas account, you will be able to log into Atlas, set up your users, whitelist your IP addresses for security, and obtain necessary connection information. You will need this connection information in the next section to configure the Azure API Apps.

Altas01

With the Free Atlas tier, you can view detailed Metrics about database cluster activity. However, with the free tier, you do not get access to Real-Time data insights or the ability to use the Data Explorer to view your data through the Atlas UI.

Altas03

Azure API Apps

The example application’s API consists of two RESTful microservices built with C#, the RestaurantMenu service and RestaurantOrder service. Both services are deployed as Azure API Apps. API Apps is a fully-managed platform. Azure performs OS patching, capacity provisioning, server management, and load balancing.

Microsoft Visual Studio has done an excellent job providing Extensions to make cloud integration a breeze. I will be using Visual Studio Tools for Azure in this post. Similar to how you create a Publish Profile for deploying applications to Internet Information Services (IIS), you create a Publish Profile for Azure App Services. Using the step-by-step user interface, you create a Microsoft Azure App Service Web Deploy Publish Profile for each service. To create a new Profile, choose the Microsoft Azure App Service Target.

Capture_VS_Profile_1

You must be connected to your Azure account to create the Publish Profile. Give the service an App Name, choose your Subscription, and select or create a Resource Group and an App Service Plan.

Capture21

The App Service Plan defines the Location and Size for your API App container; these will determine the cost of the compute. I suggest putting the two API Apps and the Web App in the same location, in this case, East US.

Capture20

The Publish Profile is now available for deploying the services to Azure. No command line interaction is required. The services can be built and published to Azure with a single click from within Visual Studio.

Capture23

Configuration Management

Azure App Services is highly configurable. For example, each API App requires a different configuration, in each environment, to connect to different instances of MongoDB Atlas databases. For security, sensitive Atlas credentials are not stored in the source code. The Atlas URL and sensitive credentials are stored in App Settings on Azure. For this post, the settings were input directly into the Azure UI, as shown below. You will need to input your own Atlas URL and credentials.

The compiled C# services expect certain environment variables to be present at runtime in order to connect to MongoDB Atlas. These are provided through Azure’s App Settings. Access to the App Settings should be tightly controlled through Azure AD and fine-grained Azure Role-Based Access Control (RBAC).
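
The App Settings can also be scripted with the Azure CLI instead of the Azure UI. A sketch follows, in which the setting names are purely illustrative, not the actual variable names the services expect:

az webapp config appsettings set \
  --name api_app_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --settings MONGODB_URI="your_atlas_connection_string" MONGODB_USER="your_atlas_user"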

Capture70

CORS

If you want to deploy the application from this post to Azure, there is one code change you will need to make to each service, which deals with Cross-Origin Resource Sharing (CORS). The services are currently configured to only accept traffic from my temporary Angular UI App Service’s URL. You will need to adjust the CORS configuration in the \App_Start\WebApiConfig.cs file in each service, to match your own App Service’s new URL.

CaptureCORS

Angular UI Web App

The Angular UI application will be deployed as an Azure Web App, one of four types of Azure App Services, mentioned previously. According to Microsoft, Web Apps allow Developers to program in their favorite languages, including .NET, Java, Node.js, PHP, and Python on Windows or .NET Core, Node.js, PHP or Ruby on Linux. Web Apps is a fully-managed platform. Azure performs OS patching, capacity provisioning, server management, and load balancing.

Using the Azure Portal, setting up a new Web App for the Angular UI is simple.

Capture31

Provide an App Name, Subscription, Resource Group, OS Type, and select whether or not you want Application Insights enabled for the Web App.

Capture33

Although an amazing IDE for web development, WebStorm lacks some of the direct integrations with Azure, AWS, and Google available in other IDEs, like Visual Studio. Since the Angular application was developed in WebStorm on Mac, we will take advantage of Azure App Service’s Continuous Deployment feature.

Azure Web Apps can be deployed automatically from most common source code management platforms, including Visual Studio Team Services (VSTS), GitHub, Bitbucket, OneDrive, and local Git repositories.

Capture34

For this post, I chose GitHub. To configure deployment from GitHub, select the GitHub Account, Organization, Project, and Branch from which Azure will deploy the Angular Web App.

Capture35

When you configure GitHub in the Azure Portal, Azure becomes an Authorized OAuth App on the GitHub account. Azure creates a Webhook, which fires each time files are pushed (git push) to the dist branch of the GitHub project’s repository.

Capture_GitHub_WebHooks

Using the ng build --dev --env=prod command, the Angular UI application must first be transpiled from TypeScript to JavaScript and bundled for deployment. The ng build command can be run from within WebStorm or from the command line.

Capture_NG_Build

The --env=prod flag ensures that the Production environment configuration, containing the correct Azure API endpoints, is transpiled into the build. This configuration is stored in the \src\environments\environment.prod.ts file, shown below. You will need to update these two endpoints to your own endpoints from the two API Apps you previously deployed to Azure.

Capture_Environment_Endpoints.PNG

Optionally, the code should be optimized for Production by replacing the --dev flag with the --prod flag. Amongst other optimizations, the Production version of the code is uglified using UglifyJS. Note the difference in the build files shown below for Production, as compared to the files above for Development.
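
In other words, the full Production build command becomes:

ng build --prod --env=prod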

Capture_NG_Build2

Since I chose GitHub for deployment to Azure, I used Git to manually push the local build files to the dist branch on GitHub.
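
A sketch of that manual workflow, assuming a local dist branch holding the transpiled build output:

git checkout dist
git add .
git commit -m "new production build"
git push origin dist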

Capture_GitHub_WebHooks2

Every time the webhook fires, Azure pulls and deploys the new build, overwriting the previously deployed version, as shown below.

Capture_Azure_CD

To stage new code and not immediately overwrite running code, Azure has a feature called Deployment slots. According to Microsoft, Deployment slots allow Developers to deploy different versions of Web Apps to different URLs. You can test a certain version and then swap content and configuration between slots. This is likely how you would eventually deploy your code to Production.

Up and Running

Below, the three Azure App Services are shown in the Azure Portal, successfully deployed and running. Note their Status, App Type, Location, and Subscription.

Capture60

Before exploring the deployed UI, the two Azure API Apps should be tested using Postman. Once the API is confirmed to be working properly, the menu is populated by making an HTTP POST request to the menuitems API, the RestaurantOrderService Azure API Service. When the HTTP POST request is made, the RestaurantOrderService stores a set of menu items in the RestaurantMenu Atlas MongoDB database, in the menus collection.

Capture_PopulateMenu

The Angular UI, the RestaurantWeb Azure Web App, is viewed by using the URL provided in the Web App’s Overview tab. The menu items displayed in the drop-down are supplied by an HTTP GET request to the menuitems API, provided by the RestaurantMenuService Azure API Service.

Capture_Final_UI_1

Your order is placed through an HTTP POST request to the orders API, the RestaurantOrderService Azure API Service. The RestaurantOrderService stores the order in the RestaurantOrder Atlas MongoDB database, in the orders collection. The order details are returned in the response body and displayed in the UI.

Capture_Final_UI_2

Once you have the development version of the application successfully up and running on Atlas and Azure, you can start to configure, test, and deploy additional application versions, as App Services, into higher Azure environments, such as Test, Performance, and eventually, Production.

Monitoring

Azure provides in-depth monitoring and performance analytics capabilities for your deployed applications with services like Application Insights. With Azure’s monitoring resources, you can monitor the live performance of your application and set up alerts for application errors and performance issues. Real-time monitoring is useful when conducting performance tests. Developers can analyze the response time of each API method and optimize the application, the Azure configuration, and the MongoDB databases before deploying to Production.

Capture_Insights

Conclusion

This post demonstrated how the Cloud has shifted application development to a Cloud-first model. Future posts will demonstrate how an application, such as the app featured in this post, is secured, and how it is continuously built, tested, and deployed, using DevOps practices.

All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.
