First Impressions of AKS, Azure’s New Managed Kubernetes Container Service
Posted by Gary A. Stafford in Azure, Software Development on November 20, 2017
Kubernetes as a Service
On October 24, 2017, less than a month prior to writing this post, Microsoft released the public preview of Managed Kubernetes for Azure Container Service (AKS). According to Microsoft, the goal of AKS is to simplify the deployment, management, and operations of Kubernetes. According to Gabe Monroy, PM Lead, Containers @ Microsoft Azure, in a blog post, AKS ‘features an Azure-hosted control plane, automated upgrades, self-healing, easy scaling.’ Monroy goes on to say, ‘with AKS, customers get the benefit of open source Kubernetes without complexity and operational overhead.’
Unquestionably, Kubernetes has become the leading Container-as-a-Service (CaaS) choice, at least for now. Along with Microsoft's release of AKS, there have been other recent announcements that reinforce Kubernetes' dominance. In late September, Rancher Labs announced the release of Rancher 2.0, which, according to Rancher, would be based on Kubernetes. In mid-October, at DockerCon Europe 2017, Docker announced they were integrating Kubernetes into the Docker platform. Even AWS seems to be warming up to Kubernetes, despite their own ECS; there are rumors AWS will announce a Kubernetes offering at AWS re:Invent 2017, starting a week from now.
Previewing AKS
Being a big fan of both Azure and Kubernetes, I decided to give AKS a try. I chose to deploy an existing, simple, multi-tier web application, which I had used in several previous posts, including Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ. All the code used for this post is available on GitHub.
Sample Application
The Voter application is composed of an AngularJS client-side UI and two Java Spring Boot microservices, each backed by its own MongoDB database and fronted by an HAProxy-based API Gateway. The AngularJS UI calls the API Gateway, which in turn calls the Spring services. The two microservices communicate with each other using HTTP-based inter-process communication (IPC). Although I would prefer event-based service-to-service IPC, HTTP-based IPC was simpler to implement for this post.
Interestingly, the Voter application was designed using Docker Community Edition for Mac and deployed to AWS using Docker Community Edition for AWS. Not only would this be my chance to preview AKS, but also an opportunity to compare the ease of developing for Docker CE on AWS using a Mac, to developing for Kubernetes with AKS using Docker Community Edition for Windows.
Required Software
In order to develop for AKS on my Windows 10 Enterprise workstation, I first made sure I had the latest copies of the following software:
- Docker CE for Windows (v17.11.0)
- Azure CLI (v2.0.21)
- kubectl (v1.8.3)
- kompose (v1.4.0)
- Windows PowerShell (v5.1)
If you are following along with the post, make sure you have the latest version of the Azure CLI, at minimum 2.0.21, according to the Azure CLI release notes. Also, I happen to be running the latest version of Docker CE from the Edge Channel; however, either channel's latest release of Docker CE for Windows should work for this post. Using PowerShell is optional. I prefer PowerShell over the Windows Command Prompt, if for nothing else than because it preserves my command history by default.
Kubernetes Resources with Kompose
Originally developed for Docker CE, the Voter application stack was defined in a single Docker Compose file.
version: '3'
services:
  mongodb:
    image: mongo:latest
    command:
      - --smallfiles
    hostname: mongodb
    ports:
      - 27017:27017/tcp
    networks:
      - voter_overlay_net
    volumes:
      - voter_data_vol:/data/db
  candidate:
    image: garystafford/candidate-service:0.2.28
    depends_on:
      - mongodb
    hostname: candidate
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
  voter:
    image: garystafford/voter-service:0.2.104
    depends_on:
      - mongodb
      - candidate
    hostname: voter
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
  client:
    image: garystafford/voter-client:0.2.44
    depends_on:
      - voter
    hostname: client
    ports:
      - 80:8080/tcp
    networks:
      - voter_overlay_net
  gateway:
    image: garystafford/voter-api-gateway:0.2.24
    depends_on:
      - voter
    hostname: gateway
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
networks:
  voter_overlay_net:
    driver: overlay
volumes:
  voter_data_vol:
To work on AKS, the application stack’s configuration needs to be reproduced as Kubernetes configuration files. Instead of writing the configuration files manually, I chose to use kompose. Kompose is described on its website as ‘a conversion tool for Docker Compose to container orchestrators such as Kubernetes.’ Using kompose, I was able to automatically convert the Docker Compose file into analogous Kubernetes resource configuration files.
kompose convert -f docker-compose.yml
Each Docker service in the Docker Compose file was translated into a separate Kubernetes Deployment resource configuration file, as well as a corresponding Service resource configuration file.
For the AngularJS Client Service and the HAProxy API Gateway Service, I had to modify the Service configuration files to switch the Service type to a LoadBalancer (type: LoadBalancer). With the LoadBalancer type, Kubernetes assigns a publicly accessible IP address to each Service; the reasons for this are explained later in the post.
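As a rough illustration, the modified Client Service configuration looked something like the sketch below. The label and selector keys reflect kompose's typical defaults and are assumptions here; the actual files are in the GitHub project.

# Sketch of the modified Client Service (kompose output plus type: LoadBalancer)
apiVersion: v1
kind: Service
metadata:
  name: client
  labels:
    io.kompose.service: client
spec:
  type: LoadBalancer
  ports:
    - name: "80"
      port: 80
      targetPort: 8080
  selector:
    io.kompose.service: client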
The MongoDB service requires a persistent storage volume. To accomplish this with Kubernetes, kompose created a PersistentVolumeClaim resource configuration file. I did have to create a corresponding PersistentVolume resource configuration file myself. It was also necessary to modify the PersistentVolumeClaim resource configuration file, specifying the Storage Class Name as manual (storageClassName: manual), to correspond to the AKS Storage Class configuration.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: voter-data-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
From the original Docker Compose file, containing five Docker services, I ended up with a dozen individual Kubernetes resource configuration files. Individual configuration files are optimal for fine-grained management of Kubernetes resources. The Docker Compose file and the Kubernetes resource configuration files are included in the GitHub project.
git clone \
  --branch master --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/azure-aks-demo.git
Creating AKS Resources
New AKS Feature Flag
According to Microsoft, while AKS is still in preview, creating new clusters requires registering a feature flag on your subscription.
az provider register -n Microsoft.ContainerService
Using a brand new Azure account for this demo, I also needed to activate two additional feature flags.
az provider register -n Microsoft.Network
az provider register -n Microsoft.Compute
If you are missing the required feature flags, you will see errors similar to the one below.
Operation failed with status: ’Bad Request’. Details: Required resource provider registrations Microsoft.Compute,Microsoft.Network are missing.
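Provider registration is not instantaneous; if needed, the current registration state of a provider can be checked with something like the following.

az provider show -n Microsoft.ContainerService --query "registrationState"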
Resource Group
AKS requires an Azure Resource Group. I chose to create a new Resource Group for this demo, using the Azure CLI.
az group create \
  --resource-group resource_group_name_goes_here \
  --location eastus
New Kubernetes Cluster
Using the aks feature of the Azure CLI, version 2.0.21 or later, I provisioned a new Kubernetes cluster. By default, Azure will create a 3-node cluster. You can override the default number of nodes using the --node-count parameter; I chose one node. The Kubernetes version is also configurable, using the --kubernetes-version parameter; I selected the latest Kubernetes version available with AKS, 1.8.2.
az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --node-count 1 \
  --generate-ssh-keys \
  --kubernetes-version 1.8.2
The newly created Azure Resource Group and AKS Kubernetes Cluster were both then visible on the Azure Portal.
In addition to the new Resource Group I created, Azure also created a second Resource Group containing seven Azure resources. These Azure resources include a Virtual Machine (the single Kubernetes node), a Network Security Group, a Network Interface, a Virtual Network, a Route Table, a Disk, and an Availability Set.
With AKS up and running, I used another Azure CLI command to create a proxy connection to the Kubernetes Dashboard, which was deployed automatically and was running within the new AKS Cluster. The Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters.
az aks browse \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here
Although no applications had been deployed to AKS yet, there were several Kubernetes components running within the AKS Cluster. The components running within the kube-system Namespace included heapster, kube-dns, kubernetes-dashboard, kube-proxy, kube-svc-redirect, and tunnelfront.
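The same components can also be listed from the command line, assuming kubectl has been pointed at the new cluster; a minimal sketch:

# Fetch credentials for the new cluster (merged into the local kubeconfig)
az aks get-credentials \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here

# List the system components in the kube-system Namespace
kubectl get pods --namespace kube-system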
Deploying the Application
MongoDB should be deployed first, since both the Voter and Candidate microservices depend on it. MongoDB is composed of four Kubernetes resources: a Deployment, a Service, a PersistentVolumeClaim, and a PersistentVolume. I used kubectl, the command-line interface for running commands against Kubernetes clusters, to create the four MongoDB resources from the configuration files.
kubectl create \
  -f voter-data-vol-persistentvolume.yaml \
  -f voter-data-vol-persistentvolumeclaim.yaml \
  -f mongodb-deployment.yaml \
  -f mongodb-service.yaml
After MongoDB was deployed and running, I created the four remaining Deployment resources, Client, Gateway, Voter, and Candidate, from the Deployment resource configuration files. According to Kubernetes, ‘a Deployment controller provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate.’
Lastly, I created the remaining Service resources from the Service resource configuration files. According to Kubernetes, ‘a Service is an abstraction which defines a logical set of Pods and a policy by which to access them.’
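The remaining resources were created the same way. The file names below assume kompose's default <service>-deployment.yaml and <service>-service.yaml naming convention; the actual files are in the GitHub project.

# Remaining Deployment resources
kubectl create \
  -f candidate-deployment.yaml \
  -f voter-deployment.yaml \
  -f client-deployment.yaml \
  -f gateway-deployment.yaml

# Remaining Service resources
kubectl create \
  -f candidate-service.yaml \
  -f voter-service.yaml \
  -f client-service.yaml \
  -f gateway-service.yaml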
Switching back to the Kubernetes Dashboard, the Voter application components were now visible.
There were five Kubernetes Pods, one for each application component. Since there is only one Node in the Kubernetes Cluster, all five Pods were deployed to the same Node. There were also five corresponding Kubernetes Deployments.
Similarly, there were five corresponding Kubernetes ReplicaSets, the next-generation Replication Controller. There were also five corresponding Kubernetes Services. Note the Gateway and Client Services have an External Endpoint (External IP) associated with them. The IPs were created as a result of adding the Load Balancer Service type to their Service resource configuration files, mentioned earlier.
Lastly, note the Persistent Volume Claim for MongoDB, which had been successfully bound.
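The same information is also available from kubectl; a quick check, assuming the resources were created in the default Namespace:

# Pods, Deployments, ReplicaSets, Services, and the bound PersistentVolumeClaim
kubectl get pods,deployments,replicasets,services,pvc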
Switching back to the Azure Portal, following the application deployment, there were now three additional resources in the AKS Resource Group, a new Azure Load Balancer and two new Public IP Addresses. The Load Balancer is used to balance the Client and Gateway Services, which both have public IP addresses.
To confirm the Gateway, Voter, and Candidate Services were reachable, using the public IP address of the Gateway Service, I browsed to HAProxy’s Statistics web page. Note the two backends, candidate and voter. The green color means HAProxy was able to successfully connect to both of these Services.
Accessing the Application
The Voter application's AngularJS UI frontend can be accessed using the Client Service's public IP address. However, this would not be very user-friendly. Even if I brought up the UI, using the public IP, the UI would be unable to connect to the HAProxy API Gateway, and subsequently, the Voter or Candidate Services. Based on its current configuration, the Client is expecting to find the Gateway at api.voter-demo.com:8080.
To make accessing the Client more user-friendly, and to ensure the Client has access to the Gateway, I provisioned an Azure DNS Zone resource for my domain, voter-demo.com. I assigned the DNS Zone to the AKS Resource Group.
Within the new DNS Zone, I created three DNS records. The first, an A (address) record, associated voter-demo.com with the public IP address of the Client Service. I added a second A record for the www subdomain, also associating it with the public IP address of the Client Service. The third A record associated the api subdomain with the public IP address of the Gateway Service.
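Creating the zone and records can also be scripted with the Azure CLI; the commands below are a sketch, with placeholder IP addresses standing in for the Services' actual public IPs.

# Create the DNS Zone in the AKS Resource Group
az network dns zone create \
  --resource-group resource_group_name_goes_here \
  --name voter-demo.com

# A record for the apex domain (Client Service)
az network dns record-set a add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name "@" \
  --ipv4-address client_public_ip_goes_here

# A record for the www subdomain (Client Service)
az network dns record-set a add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name www \
  --ipv4-address client_public_ip_goes_here

# A record for the api subdomain (Gateway Service)
az network dns record-set a add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name api \
  --ipv4-address gateway_public_ip_goes_here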
At a high level, the Voter application's routing architecture looks as follows. The client's requests to the primary domain or to the api subdomain are resolved to one of the two public IP addresses configured in the load balancer's frontend. The requests are passed to the load balancer's backend pool, containing the single Azure VM, which is the single Kubernetes node, and on to the client or gateway Kubernetes Service. From there, requests are routed to the appropriate Kubernetes Pod, containing the containerized application component, client or gateway.
Browsing to either http://voter-demo.com or http://www.voter-demo.com should bring up the Voter app UI (oh look, Hillary won this time…).
Using Chrome's Developer Tools, observe that when a new vote is placed, an HTTP POST is made to the Gateway, on the /voter/votes endpoint, at http://api.voter-demo.com:8080/voter/votes. The Gateway then proxies this request to the Voter Service, at http://voter:8080/voter/votes. Since the Gateway and Voter Services both run within the same Cluster, the Gateway is able to address the Voter Service by its name, using Kubernetes' kube-dns.
Conclusion
In the past, I have developed, deployed, and managed containerized applications using Rancher, AWS, AWS ECS, native Kubernetes, RedHat OpenShift, Docker Enterprise Edition, and Docker Community Edition. Based on my experience, and given my limited testing with Azure's public preview of AKS, I am very impressed. Creating the Kubernetes Cluster could not have been easier. Scaling the Cluster, adding Persistent Volumes, and upgrading Kubernetes are equally easy. Conveniently, AKS integrates with other Kubernetes tools, like kubectl, kompose, and Helm.
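For example, scaling and upgrading the cluster are each a single Azure CLI call; a sketch, using the same placeholder names as above:

# Scale the cluster to three nodes
az aks scale \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --node-count 3

# Upgrade the cluster to a newer Kubernetes version
az aks upgrade \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --kubernetes-version kubernetes_version_goes_here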
I did run into some minor issues with the AKS Preview, such as being unable to connect to an earlier Cluster after upgrading it from the default Kubernetes version to 1.8.2. I also experienced frequent disconnects when proxying to the Kubernetes Dashboard. I am sure the AKS Preview bugs will be worked out by the time AKS is officially released.
In addition to the many advantages of Kubernetes as a CaaS, a huge advantage of using AKS is the ability to easily integrate Azure's many enterprise-grade compute, networking, database, caching, storage, and messaging resources. It would require minimal effort to swap out the Voter application's single containerized version of MongoDB with a highly performant and available instance of Azure Cosmos DB. Similarly, it would be relatively easy to swap out the single containerized version of HAProxy with a fully featured and secure instance of Azure API Management. The current version of the Voter application relies on RabbitMQ for service-to-service IPC, versus the HTTP-based IPC used by the earlier version deployed in this post; it would be fairly simple to swap RabbitMQ for Azure Service Bus.
Lastly, AKS easily integrates with leading development and DevOps tooling and processes. Building, managing, and deploying applications to AKS is possible with Visual Studio, VSTS, Jenkins, Terraform, and Chef, according to Microsoft.
References
A few good references to get started with AKS:
- Container Orchestration Simplified with Managed Kubernetes in Azure Container Service (AKS)
- Deploy an Azure Container Service (AKS) cluster
- Prepare application for Azure Container Service (AKS)
- Kubernetes Basics
- kubectl Cheat Sheet
All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.