Posts Tagged AKS

Istio Observability with Go, gRPC, and Protocol Buffers-based Microservices on Google Kubernetes Engine (GKE)

In the last two posts, Kubernetes-based Microservice Observability with Istio Service Mesh and Azure Kubernetes Service (AKS) Observability with Istio Service Mesh, we explored the observability tools that are included with Istio Service Mesh. These tools currently include Prometheus and Grafana for metric collection, monitoring, and alerting, Jaeger for distributed tracing, and Kiali for Istio service-mesh-based microservice visualization and monitoring. Combined with cloud platform-native monitoring and logging services, such as Stackdriver on GCP, CloudWatch on AWS, or Azure Monitor logs on Azure, we have a complete observability solution for modern, distributed, Cloud-based applications.

In this post, we will examine the use of Istio’s observability tools to monitor Go-based microservices that use Protocol Buffers (aka Protobuf) over gRPC (gRPC Remote Procedure Calls) and HTTP/2 for client-server communications, as opposed to the more traditional, REST-based JSON (JavaScript Object Notation) over HTTP (Hypertext Transfer Protocol). We will see how Kubernetes, Istio, Envoy, and the observability tools work seamlessly with gRPC, just as they do with JSON over HTTP, on Google Kubernetes Engine (GKE).

screen_shot_2019-04-18_at_6_03_38_pm

Technologies

gRPC

According to the gRPC project, gRPC, a CNCF incubating project, is a modern, high-performance, open-source and universal remote procedure call (RPC) framework that can run anywhere. It enables client and server applications to communicate transparently and makes it easier to build connected systems. Google, the original developer of gRPC, has used the underlying technologies and concepts in gRPC for years. The current implementation is used in several Google cloud products and Google externally facing APIs. It is also being used by Square, Netflix, CoreOS, Docker, CockroachDB, Cisco, Juniper Networks and many other organizations.

Protocol Buffers

By default, gRPC uses Protocol Buffers. According to Google, Protocol Buffers (aka Protobuf) are a language- and platform-neutral, efficient, extensible, automated mechanism for serializing structured data for use in communications protocols, data storage, and more. Protocol Buffers are 3 to 10 times smaller and 20 to 100 times faster than XML. Once you have defined your messages, you run the protocol buffer compiler for your application’s language on your .proto file to generate data access classes.


Protocol Buffers currently support generated code in Java, Python, Objective-C, C++, Dart, Go, Ruby, and C#. For this post, we have compiled for Go. You can read more about the binary wire format of Protobuf on Google’s Developers Portal.
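
To make the idea of generated data access classes concrete, here is a minimal, hypothetical Go sketch that marshals and unmarshals a single Greeting message. It assumes the generated pb-greeting package introduced later in this post; the ID value is purely illustrative.

package main

import (
  "log"

  pb "github.com/garystafford/pb-greeting"
  "github.com/golang/protobuf/proto"
)

func main() {
  // Populate the generated Greeting struct, produced by protoc from greeting.proto.
  greeting := &pb.Greeting{
    Id:      "9b05e015-d1f4-4b9e-a7f5-2d5b8f6e2f8a", // illustrative value only
    Service: "Service-A",
    Message: "Hello, from Service-A!",
  }

  // Serialize the message to the compact Protobuf binary wire format.
  data, err := proto.Marshal(greeting)
  if err != nil {
    log.Fatalf("marshaling error: %v", err)
  }
  log.Printf("encoded message is %d bytes", len(data))

  // Deserialize the bytes back into a Greeting struct.
  decoded := &pb.Greeting{}
  if err := proto.Unmarshal(data, decoded); err != nil {
    log.Fatalf("unmarshaling error: %v", err)
  }
  log.Printf("decoded message: %s", decoded.GetMessage())
}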

Envoy Proxy

According to the Istio project, Istio uses an extended version of the Envoy proxy. Envoy is deployed as a sidecar to a relevant service in the same Kubernetes pod. Envoy, created by Lyft, is a high-performance proxy developed in C++ to mediate all inbound and outbound traffic for all services in the service mesh. Istio leverages Envoy’s many built-in features, including dynamic service discovery, load balancing, TLS termination, HTTP/2 and gRPC proxies, circuit-breakers, health checks, staged rollouts, fault injection, and rich metrics.

According to the post by Harvey Tuch of Google, Evolving a Protocol Buffer canonical API, Envoy proxy adopted Protocol Buffers, specifically proto3, as the canonical specification for version 2 of Lyft’s gRPC-first API.

Reference Microservices Platform

In the last two posts, we explored Istio’s observability tools, using a RESTful microservices-based API platform written in Go and using JSON over HTTP for service-to-service communications. The API platform was comprised of eight Go-based microservices and one sample Angular 7, TypeScript-based front-end web client. The various services are dependent on MongoDB and RabbitMQ for event queue-based communications. Below is the JSON over HTTP-based platform architecture.

Golang Service Diagram with Proxy v2

Below is the current Angular 7-based web client interface.

screen_shot_2019-04-15_at_10_23_47_pm

Converting to gRPC and Protocol Buffers

For this post, I have modified the eight Go microservices to use gRPC and Protocol Buffers, Google’s data interchange format. Specifically, the services use the version 3 release (aka proto3) of Protocol Buffers. With gRPC, a gRPC client calls a gRPC server. Some of the platform’s services are gRPC servers, others are gRPC clients, while some act as both client and server, such as Services A, B, and E. The revised architecture is shown below.

Golang-Service-Diagram-with-gRPC

gRPC Gateway

Assuming for the sake of this demonstration, that most consumers of the API would still expect to communicate using a RESTful JSON over HTTP API, I have added a gRPC Gateway reverse proxy to the platform. The gRPC Gateway is a gRPC to JSON reverse proxy, a common architectural pattern, which proxies communications between the JSON over HTTP-based clients and the gRPC-based microservices. A diagram from the grpc-gateway GitHub project site effectively demonstrates how the reverse proxy works.

grpc_gateway.png

Image courtesy: https://github.com/grpc-ecosystem/grpc-gateway

In the revised platform architecture diagram above, note the addition of the reverse proxy, which replaces Service A at the edge of the API. The proxy sits between the Angular-based Web UI and Service A. Also, note the communication method between services is now Protobuf over gRPC instead of JSON over HTTP. The use of Envoy Proxy (via Istio) is unchanged, as are the MongoDB Atlas-based databases and the CloudAMQP RabbitMQ-based queue, which are still external to the Kubernetes cluster.

Alternatives to gRPC Gateway

As an alternative to the gRPC Gateway reverse proxy, we could convert the TypeScript-based Angular UI client to gRPC and Protocol Buffers and continue to communicate directly with Service A as the edge service. However, this would require other consumers of the API to use gRPC rather than JSON over HTTP, unless we also chose to expose two different endpoints, gRPC and JSON over HTTP, another common pattern.
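
For illustration only, below is a minimal sketch of that dual-endpoint pattern: a single Go process serving native gRPC on port 50051 and a grpc-gateway JSON over HTTP proxy on port 8080. It assumes the generated pb-greeting package described later in this post; the stubbed Greeting implementation is purely illustrative and not the platform’s actual Service A.

package main

import (
  "context"
  "log"
  "net"
  "net/http"

  gw "github.com/garystafford/pb-greeting"
  "github.com/grpc-ecosystem/grpc-gateway/runtime"
  "google.golang.org/grpc"
)

// greetingServer is a stand-in implementation of the generated GreetingService interface.
type greetingServer struct{}

func (s *greetingServer) Greeting(ctx context.Context, req *gw.GreetingRequest) (*gw.GreetingResponse, error) {
  return &gw.GreetingResponse{
    Greeting: []*gw.Greeting{{Service: "Service-A", Message: "Hello, from Service-A!"}},
  }, nil
}

func main() {
  // Endpoint 1: native gRPC for Protobuf/gRPC consumers.
  lis, err := net.Listen("tcp", ":50051")
  if err != nil {
    log.Fatalf("failed to listen: %v", err)
  }
  grpcServer := grpc.NewServer()
  gw.RegisterGreetingServiceServer(grpcServer, &greetingServer{})
  go func() { log.Fatal(grpcServer.Serve(lis)) }()

  // Endpoint 2: JSON over HTTP for REST consumers, proxied to the gRPC endpoint.
  ctx, cancel := context.WithCancel(context.Background())
  defer cancel()
  mux := runtime.NewServeMux()
  opts := []grpc.DialOption{grpc.WithInsecure()}
  if err := gw.RegisterGreetingServiceHandlerFromEndpoint(ctx, mux, "localhost:50051", opts); err != nil {
    log.Fatal(err)
  }
  log.Fatal(http.ListenAndServe(":8080", mux))
}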

Demonstration

In this post’s demonstration, we will repeat the same installation process outlined in the previous post, Kubernetes-based Microservice Observability with Istio Service Mesh. We will deploy the revised gRPC-based platform to GKE on GCP. You could just as easily follow Azure Kubernetes Service (AKS) Observability with Istio Service Mesh, and deploy the platform to AKS.

Source Code

All source code for this post is available on GitHub, contained in three projects. The Go-based microservices source code, all Kubernetes resources, and all deployment scripts are located in the k8s-istio-observe-backend project repository, in the new grpc branch.

git clone \
  --branch grpc --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/k8s-istio-observe-backend.git

The Angular-based web client source code is located in the k8s-istio-observe-frontend repository on the new grpc branch. The source protocol buffers .proto file and the generated code, produced by the protocol buffers compiler, are located in the new pb-greeting project repository. You do not need to clone either of these projects for this post’s demonstration.

All Docker images for the services, UI, and the reverse proxy are located on Docker Hub.

Code Changes

This post is not specifically about writing Go for gRPC and Protobuf. However, to better understand the observability requirements and capabilities of these technologies, compared to JSON over HTTP, it is helpful to review some of the source code.

Service A

First, compare the source code for Service A, shown below, to the original code in the previous post. The service’s code is almost completely rewritten. I relied on several references for writing the code, including Tracing gRPC with Istio, written by Neeraj Poddar of Aspen Mesh, and Distributed Tracing Infrastructure with Jaeger on Kubernetes, by Masroor Hasan.

Specifically, note the following code changes to Service A:

  • Import of the pb-greeting protobuf package;
  • Local Greeting struct replaced with pb.Greeting struct;
  • All services are now hosted on port 50051;
  • The HTTP server and all API resource handler functions are removed;
  • Headers, used for distributed tracing with Jaeger, have moved from the HTTP request object to metadata passed in the gRPC context object;
  • Service A is coded as a gRPC server, which is called by the gRPC Gateway reverse proxy (gRPC client) via the Greeting function;
  • The primary PingHandler function, which returns the service’s Greeting, is replaced by the pb-greeting protobuf package’s Greeting function;
  • Service A is coded as a gRPC client, calling both Service B and Service C using the CallGrpcService function;
  • CORS handling is offloaded to Istio;
  • Logging methods are unchanged;

Source code for revised gRPC-based Service A (gist):


// author: Gary A. Stafford
// site: https://programmaticponderings.com
// license: MIT License
// purpose: Service A – gRPC/Protobuf

package main

import (
  "context"
  "github.com/banzaicloud/logrus-runtime-formatter"
  "github.com/google/uuid"
  "github.com/grpc-ecosystem/go-grpc-middleware/tracing/opentracing"
  ot "github.com/opentracing/opentracing-go"
  log "github.com/sirupsen/logrus"
  "google.golang.org/grpc"
  "google.golang.org/grpc/metadata"
  "net"
  "os"
  "time"

  pb "github.com/garystafford/pb-greeting"
)

const (
  port = ":50051"
)

type greetingServiceServer struct {
}

var (
  greetings []*pb.Greeting
)

// Greeting handles the gRPC request from the reverse proxy (gRPC client),
// then calls the downstream Service B and Service C before returning the results.
func (s *greetingServiceServer) Greeting(ctx context.Context, req *pb.GreetingRequest) (*pb.GreetingResponse, error) {
  greetings = nil

  tmpGreeting := pb.Greeting{
    Id:      uuid.New().String(),
    Service: "Service-A",
    Message: "Hello, from Service-A!",
    Created: time.Now().Local().String(),
  }

  greetings = append(greetings, &tmpGreeting)

  CallGrpcService(ctx, "service-b:50051")
  CallGrpcService(ctx, "service-c:50051")

  return &pb.GreetingResponse{
    Greeting: greetings,
  }, nil
}

// CallGrpcService acts as a gRPC client, forwarding the tracing headers,
// received as gRPC metadata, to the downstream service.
func CallGrpcService(ctx context.Context, address string) {
  conn, err := createGRPCConn(ctx, address)
  if err != nil {
    log.Fatalf("did not connect: %v", err)
  }
  defer conn.Close()

  headersIn, _ := metadata.FromIncomingContext(ctx)
  log.Infof("headersIn: %s", headersIn)

  client := pb.NewGreetingServiceClient(conn)
  ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
  ctx = metadata.NewOutgoingContext(context.Background(), headersIn)
  defer cancel()

  req := pb.GreetingRequest{}
  greeting, err := client.Greeting(ctx, &req)
  if err != nil {
    log.Fatalf("did not connect: %v", err)
  }
  log.Info(greeting.GetGreeting())

  for _, greeting := range greeting.GetGreeting() {
    greetings = append(greetings, greeting)
  }
}

func createGRPCConn(ctx context.Context, addr string) (*grpc.ClientConn, error) {
  // https://aspenmesh.io/2018/04/tracing-grpc-with-istio/
  var opts []grpc.DialOption
  opts = append(opts, grpc.WithStreamInterceptor(
    grpc_opentracing.StreamClientInterceptor(
      grpc_opentracing.WithTracer(ot.GlobalTracer()))))
  opts = append(opts, grpc.WithUnaryInterceptor(
    grpc_opentracing.UnaryClientInterceptor(
      grpc_opentracing.WithTracer(ot.GlobalTracer()))))
  opts = append(opts, grpc.WithInsecure())
  conn, err := grpc.DialContext(ctx, addr, opts...)
  if err != nil {
    log.Fatalf("failed to connect to application addr %s: %v", addr, err)
    return nil, err
  }
  return conn, nil
}

func getEnv(key, fallback string) string {
  if value, ok := os.LookupEnv(key); ok {
    return value
  }
  return fallback
}

func init() {
  formatter := runtime.Formatter{ChildFormatter: &log.JSONFormatter{}}
  formatter.Line = true
  log.SetFormatter(&formatter)
  log.SetOutput(os.Stdout)
  level, err := log.ParseLevel(getEnv("LOG_LEVEL", "info"))
  if err != nil {
    log.Error(err)
  }
  log.SetLevel(level)
}

func main() {
  lis, err := net.Listen("tcp", port)
  if err != nil {
    log.Fatalf("failed to listen: %v", err)
  }
  s := grpc.NewServer()
  pb.RegisterGreetingServiceServer(s, &greetingServiceServer{})
  log.Fatal(s.Serve(lis))
}


Greeting Protocol Buffers

Shown below is the greeting source protocol buffers .proto file. The greeting response struct, originally defined in the services, remains largely unchanged (gist). The UI client responses will look identical.


syntax = "proto3";

package greeting;

import "google/api/annotations.proto";

message Greeting {
  string id = 1;
  string service = 2;
  string message = 3;
  string created = 4;
}

message GreetingRequest {
}

message GreetingResponse {
  repeated Greeting greeting = 1;
}

service GreetingService {
  rpc Greeting (GreetingRequest) returns (GreetingResponse) {
    option (google.api.http) = {
      get: "/api/v1/greeting"
    };
  }
}


When compiled with protoc and the Go protocol buffer compiler plugin, the original 27 lines of source code swell to almost 270 lines of generated data access classes that are easier to use programmatically.

# Generate gRPC stub (.pb.go)
protoc -I /usr/local/include -I. \
  -I ${GOPATH}/src \
  -I ${GOPATH}/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \
  --go_out=plugins=grpc:. \
  greeting.proto

# Generate reverse-proxy (.pb.gw.go)
protoc -I /usr/local/include -I. \
  -I ${GOPATH}/src \
  -I ${GOPATH}/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \
  --grpc-gateway_out=logtostderr=true:. \
  greeting.proto

# Generate swagger definitions (.swagger.json)
protoc -I /usr/local/include -I. \
  -I ${GOPATH}/src \
  -I ${GOPATH}/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \
  --swagger_out=logtostderr=true:. \
  greeting.proto

Below is a small snippet of that compiled code, for reference. The compiled code is included in the pb-greeting project on GitHub and imported into each microservice and the reverse proxy (gist). We also compile a separate version (the .pb.gw.go gateway stub) for the reverse proxy to implement.


// Code generated by protoc-gen-go. DO NOT EDIT.
// source: greeting.proto

package greeting

import (
  context "context"
  fmt "fmt"
  proto "github.com/golang/protobuf/proto"
  _ "google.golang.org/genproto/googleapis/api/annotations"
  grpc "google.golang.org/grpc"
  codes "google.golang.org/grpc/codes"
  status "google.golang.org/grpc/status"
  math "math"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package

type Greeting struct {
  Id                   string   `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
  Service              string   `protobuf:"bytes,2,opt,name=service,proto3" json:"service,omitempty"`
  Message              string   `protobuf:"bytes,3,opt,name=message,proto3" json:"message,omitempty"`
  Created              string   `protobuf:"bytes,4,opt,name=created,proto3" json:"created,omitempty"`
  XXX_NoUnkeyedLiteral struct{} `json:"-"`
  XXX_unrecognized     []byte   `json:"-"`
  XXX_sizecache        int32    `json:"-"`
}

func (m *Greeting) Reset()         { *m = Greeting{} }
func (m *Greeting) String() string { return proto.CompactTextString(m) }
func (*Greeting) ProtoMessage()    {}
func (*Greeting) Descriptor() ([]byte, []int) {
  return fileDescriptor_6acac03ccd168a87, []int{0}
}
func (m *Greeting) XXX_Unmarshal(b []byte) error {
  return xxx_messageInfo_Greeting.Unmarshal(m, b)
}
func (m *Greeting) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
  return xxx_messageInfo_Greeting.Marshal(b, m, deterministic)
}

Using Swagger, we can view the greeting protocol buffers’ single RESTful API resource, exposed with an HTTP GET method. I use the Docker-based version of Swagger UI for viewing protoc-generated Swagger definitions.

docker run -p 8080:8080 -d --name swagger-ui \
  -e SWAGGER_JSON=/tmp/greeting.swagger.json \
  -v ${GOPATH}/src/pb-greeting:/tmp swaggerapi/swagger-ui

The Angular UI makes an HTTP GET request to the /api/v1/greeting resource, which is transformed to gRPC and proxied to Service A, where it is handled by the Greeting function.
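
For reference, a REST consumer of the proxied endpoint needs no knowledge of gRPC or Protobuf. Below is a minimal, hypothetical Go client that calls the /api/v1/greeting resource and decodes the JSON response. It assumes the api.dev.example-api.com host used elsewhere in this post; the struct field names simply mirror the JSON names generated from greeting.proto.

package main

import (
  "encoding/json"
  "log"
  "net/http"
)

// greeting mirrors the Greeting message's JSON field names from greeting.proto.
type greeting struct {
  ID      string `json:"id"`
  Service string `json:"service"`
  Message string `json:"message"`
  Created string `json:"created"`
}

// greetingResponse mirrors the GreetingResponse message, a repeated Greeting field.
type greetingResponse struct {
  Greeting []greeting `json:"greeting"`
}

func main() {
  // Hypothetical host; substitute your own DNS entry for the reverse proxy.
  resp, err := http.Get("http://api.dev.example-api.com/api/v1/greeting")
  if err != nil {
    log.Fatal(err)
  }
  defer resp.Body.Close()

  var gr greetingResponse
  if err := json.NewDecoder(resp.Body).Decode(&gr); err != nil {
    log.Fatal(err)
  }
  for _, g := range gr.Greeting {
    log.Printf("%s: %s", g.Service, g.Message)
  }
}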

screen_shot_2019-04-15_at_9_05_23_pm

gRPC Gateway Reverse Proxy

As explained earlier, the gRPC Gateway reverse proxy service is completely new. Specifically, note the following code features in the gist below:

  • Import of the pb-greeting protobuf package;
  • The proxy is hosted on port 80;
  • Request headers, used for distributed tracing with Jaeger, are collected from the incoming HTTP request and passed to Service A in the gRPC context;
  • The proxy is coded as a gRPC client, which calls Service A;
  • Logging is largely unchanged;

The source code for the Reverse Proxy (gist):


// author: Gary A. Stafford
// site: https://programmaticponderings.com
// license: MIT License
// purpose: gRPC Gateway / Reverse Proxy
// reference: https://github.com/grpc-ecosystem/grpc-gateway

package main

import (
  "context"
  "flag"
  lrf "github.com/banzaicloud/logrus-runtime-formatter"
  gw "github.com/garystafford/pb-greeting"
  "github.com/grpc-ecosystem/grpc-gateway/runtime"
  log "github.com/sirupsen/logrus"
  "google.golang.org/grpc"
  "google.golang.org/grpc/metadata"
  "net/http"
  "os"
)

// injectHeadersIntoMetadata copies the Jaeger/B3 tracing headers from the
// incoming HTTP request into the gRPC metadata passed upstream to Service A.
func injectHeadersIntoMetadata(ctx context.Context, req *http.Request) metadata.MD {
  // https://aspenmesh.io/2018/04/tracing-grpc-with-istio/
  var (
    otHeaders = []string{
      "x-request-id",
      "x-b3-traceid",
      "x-b3-spanid",
      "x-b3-parentspanid",
      "x-b3-sampled",
      "x-b3-flags",
      "x-ot-span-context"}
  )
  var pairs []string
  for _, h := range otHeaders {
    if v := req.Header.Get(h); len(v) > 0 {
      pairs = append(pairs, h, v)
    }
  }
  return metadata.Pairs(pairs...)
}

type annotator func(context.Context, *http.Request) metadata.MD

func chainGrpcAnnotators(annotators ...annotator) annotator {
  return func(c context.Context, r *http.Request) metadata.MD {
    var mds []metadata.MD
    for _, a := range annotators {
      mds = append(mds, a(c, r))
    }
    return metadata.Join(mds...)
  }
}

func run() error {
  ctx := context.Background()
  ctx, cancel := context.WithCancel(ctx)
  defer cancel()

  annotators := []annotator{injectHeadersIntoMetadata}

  mux := runtime.NewServeMux(
    runtime.WithMetadata(chainGrpcAnnotators(annotators...)),
  )
  opts := []grpc.DialOption{grpc.WithInsecure()}
  err := gw.RegisterGreetingServiceHandlerFromEndpoint(ctx, mux, "service-a:50051", opts)
  if err != nil {
    return err
  }

  return http.ListenAndServe(":80", mux)
}

func getEnv(key, fallback string) string {
  if value, ok := os.LookupEnv(key); ok {
    return value
  }
  return fallback
}

func init() {
  formatter := lrf.Formatter{ChildFormatter: &log.JSONFormatter{}}
  formatter.Line = true
  log.SetFormatter(&formatter)
  log.SetOutput(os.Stdout)
  level, err := log.ParseLevel(getEnv("LOG_LEVEL", "info"))
  if err != nil {
    log.Error(err)
  }
  log.SetLevel(level)
}

func main() {
  flag.Parse()

  if err := run(); err != nil {
    log.Fatal(err)
  }
}


Below, in the Stackdriver logs, we see an example of a set of HTTP request headers in the JSON payload, which are propagated upstream to the gRPC-based Go services from the gRPC Gateway’s reverse proxy. Header propagation ensures the request produces a complete distributed trace across the entire service call chain.

screen_shot_2019-04-15_at_11_10_50_pm

Istio VirtualService and CORS

According to feedback in the project’s GitHub Issues, the gRPC Gateway does not directly support Cross-Origin Resource Sharing (CORS) policy. In my own experience, the gRPC Gateway cannot handle OPTIONS HTTP method requests, which must be issued by the Angular 7 web UI. Therefore, I have offloaded CORS responsibility to Istio, using the VirtualService resource’s CorsPolicy configuration. This makes CORS much easier to manage than coding CORS configuration into service code (gist):


apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-rev-proxy
spec:
  hosts:
  - api.dev.example-api.com
  gateways:
  - demo-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: service-rev-proxy.dev.svc.cluster.local
      weight: 100
    corsPolicy:
      allowOrigin:
      - "*"
      allowMethods:
      - OPTIONS
      - GET
      allowCredentials: true
      allowHeaders:
      - "*"

Set-up and Installation

To deploy the microservices platform to GKE, follow the detailed instructions in part one of the post, Kubernetes-based Microservice Observability with Istio Service Mesh: Part 1, or Azure Kubernetes Service (AKS) Observability with Istio Service Mesh for AKS.

  1. Create the external MongoDB Atlas database and CloudAMQP RabbitMQ clusters;
  2. Modify the Kubernetes resource files and bash scripts for your own environments;
  3. Create the managed GKE or AKS cluster on GCP or Azure;
  4. Configure and deploy Istio to the managed Kubernetes cluster, using Helm;
  5. Create DNS records for the platform’s exposed resources;
  6. Deploy the Go-based microservices, gRPC Gateway reverse proxy, Angular UI, and associated resources to the Kubernetes cluster;
  7. Test and troubleshoot the platform deployment;
  8. Observe the results;

The Three Pillars

As introduced in the first post, logs, metrics, and traces are often known as the three pillars of observability. These are the external outputs of the system, which we may observe. As modern distributed systems grow ever more complex, the ability to observe those systems demands equally modern tooling that was designed with this level of complexity in mind. Traditional logging and monitoring systems often struggle with today’s hybrid and multi-cloud, polyglot language-based, event-driven, container-based and serverless, infinitely-scalable, ephemeral-compute platforms.

Tools like Istio Service Mesh attempt to solve the observability challenge by offering native integrations with several best-of-breed, open-source telemetry tools. Istio’s integrations include Jaeger for distributed tracing, Kiali for Istio service mesh-based microservice visualization and monitoring, and Prometheus and Grafana for metric collection, monitoring, and alerting. Combined with cloud platform-native monitoring and logging services, such as Stackdriver for GKE, CloudWatch for Amazon’s EKS, or Azure Monitor logs for AKS, we have a complete observability solution for modern, distributed, Cloud-based applications.

Pillar 1: Logging

Moving from JSON over HTTP to gRPC does not require any changes to the logging configuration of the Go-based service code or Kubernetes resources.

Stackdriver with Logrus

As detailed in part two of the last post, Kubernetes-based Microservice Observability with Istio Service Mesh, our logging strategy for the eight Go-based microservices and the reverse proxy continues to be the use of Logrus, the popular structured logger for Go, and Banzai Cloud’s logrus-runtime-formatter.

If you recall, the Banzai formatter automatically tags log messages with runtime/stack information, including the function name and line number, which is extremely helpful when troubleshooting. We are also using Logrus’ JSON formatter. Below, in the Stackdriver console, note how each log entry contains a JSON payload with the log level, function name, the line on which the log entry originated, and the message.
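
For reference, below is a condensed sketch of that Logrus configuration, adapted from the services’ init() functions shown earlier, together with a representative structured log call; the service field value is purely illustrative.

package main

import (
  "os"

  runtime "github.com/banzaicloud/logrus-runtime-formatter"
  log "github.com/sirupsen/logrus"
)

func main() {
  // Wrap Logrus' JSON formatter with Banzai Cloud's runtime formatter,
  // which tags each entry with the calling function and line number.
  formatter := runtime.Formatter{ChildFormatter: &log.JSONFormatter{}}
  formatter.Line = true
  log.SetFormatter(&formatter)
  log.SetOutput(os.Stdout)
  log.SetLevel(log.InfoLevel)

  // Each entry is written to stdout as a single JSON object, which Stackdriver
  // captures as the structured payload seen in the console screenshots below.
  log.WithField("service", "service-a").Info("Request received")
}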

screen_shot_2019-04-15_at_11_10_36_pm

Below, we see the details of a specific log entry’s JSON payload. In this case, we can see the request headers propagated from the downstream service.

screen_shot_2019-04-15_at_11_10_50_pm

Pillar 2: Metrics

Moving from JSON over HTTP to gRPC does not require any changes to the metrics configuration of the Go-based service code or Kubernetes resources.

Prometheus

Prometheus is a completely open source and community-driven systems monitoring and alerting toolkit originally built at SoundCloud, circa 2012. Interestingly, Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second hosted-project, after Kubernetes.

screen_shot_2019-04-15_at_11_04_54_pm

Grafana

Grafana describes itself as the leading open source software for time series analytics. According to Grafana Labs, Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. You can easily create, explore, and share visually-rich, data-driven dashboards. Grafana allows users to visually define alert rules for your most important metrics. Grafana will continuously evaluate rules and can send notifications.

According to Istio, the Grafana add-on is a pre-configured instance of Grafana. The Grafana Docker base image has been modified to start with both a Prometheus data source and the Istio Dashboard installed. Below, we see two of the pre-configured dashboards, the Istio Mesh Dashboard and the Istio Performance Dashboard.

screen_shot_2019-04-15_at_10_45_38_pm

screen_shot_2019-04-15_at_10_46_03_pm

Pillar 3: Traces

Moving from JSON over HTTP to gRPC did require a complete re-write of the tracing logic in the service code. In fact, I spent the majority of my time ensuring the correct headers were propagated from the Istio Ingress Gateway to the gRPC Gateway reverse proxy, to Service A in the gRPC context, and upstream to all the dependent, gRPC-based services. I am sure there are a number of optimizations that could be made to my current code, regarding the correct handling of traces and how this information is propagated across the service call stack.

Jaeger

According to their website, Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems, including distributed context propagation, distributed transaction monitoring, root cause analysis, service dependency analysis, and performance and latency optimization. The Jaeger website contains an excellent overview of Jaeger’s architecture and general tracing-related terminology.

Below we see the Jaeger UI Traces View. In it, we see a series of traces generated by hey, a modern load generator and benchmarking tool, and a worthy replacement for Apache Bench (ab). Unlike ab, hey supports HTTP/2. The use of hey was detailed in the previous post.

screen_shot_2019-04-18_at_6_08_21_pm

A trace, as you might recall, is an execution path through the system and can be thought of as a directed acyclic graph (DAG) of spans. If you have worked with systems like Apache Spark, you are probably already familiar with DAGs.

screen_shot_2019-04-15_at_11_06_13_pm

Below we see the Jaeger UI Trace Detail View. The example trace contains 16 spans, which encompass nine components – seven of the eight Go-based services, the reverse proxy, and the Istio Ingress Gateway. The trace and the spans each have timings. The root span in the trace is the Istio Ingress Gateway. In this demo, traces do not span the RabbitMQ message queues. This means you would not see a trace that includes the decoupled, message-based communications between Service D and Service F, via RabbitMQ.

screen_shot_2019-04-15_at_11_08_07_pm

Within the Jaeger UI Trace Detail View, you also have the ability to drill into a single span, which contains additional metadata. Metadata includes the URL being called, HTTP method, response status, and several other headers.

screen_shot_2019-04-15_at_11_08_22_pm

Microservice Observability

Moving from JSON over HTTP to gRPC does not require any changes to the Kiali configuration of the Go-based service code or Kubernetes resources.

Kiali

According to their website, Kiali provides answers to the questions: What are the microservices in my Istio service mesh, and how are they connected? Kiali works with Istio, in OpenShift or Kubernetes, to visualize the service mesh topology, to provide visibility into features like circuit breakers, request rates and more. It offers insights about the mesh components at different levels, from abstract Applications to Services and Workloads.

The Graph View in the Kiali UI is a visual representation of the components running in the Istio service mesh. Below, filtering on the cluster’s dev Namespace, we should observe that Kiali has mapped all components in the platform, along with rich metadata, such as their version and communication protocols.

screen_shot_2019-04-18_at_6_03_38_pm

Using Kiali, we can confirm our service-to-service IPC protocol is now gRPC instead of the previous HTTP.

screen_shot_2019-04-14_at_11_15_49_am

Conclusion

Although converting from JSON over HTTP to protocol buffers with gRPC required major code changes to the services, it did not impact the high-level observability we have of those services using the tools provided by Istio, including Prometheus, Grafana, Jaeger, and Kiali.

All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.



Azure Kubernetes Service (AKS) Observability with Istio Service Mesh

In the last two-part post, Kubernetes-based Microservice Observability with Istio Service Mesh, we deployed Istio, along with its observability tools, Prometheus, Grafana, Jaeger, and Kiali, to Google Kubernetes Engine (GKE). Following that post, I received several questions about using Istio’s observability tools with other popular managed Kubernetes platforms, primarily Azure Kubernetes Service (AKS). In most cases, including with AKS, both Istio and the observability tools are compatible.

In this short follow-up of the last post, we will replace the GKE-specific cluster setup commands, found in part one of the last post, with new commands to provision a similar AKS cluster on Azure. The new AKS cluster will run Istio 1.1.3, released 4/15/2019, alongside the latest available version of AKS (Kubernetes), 1.12.6. We will replace Google’s Stackdriver logging with Azure Monitor logs. We will retain the external MongoDB Atlas cluster and the external CloudAMQP cluster dependencies.

Previous articles about AKS include First Impressions of AKS, Azure’s New Managed Kubernetes Container Service (November 2017) and Architecting Cloud-Optimized Apps with AKS (Azure’s Managed Kubernetes), Azure Service Bus, and Cosmos DB (December 2017).

Source Code

All source code for this post is available on GitHub in two projects. The Go-based microservices source code, all Kubernetes resources, and all deployment scripts are located in the k8s-istio-observe-backend project repository.

git clone \
  --branch master --single-branch \
  --depth 1 --no-tags \
  https://github.com/garystafford/k8s-istio-observe-backend.git

The Angular UI TypeScript-based source code is located in the k8s-istio-observe-frontend repository. You will not need to clone the Angular UI project for this post’s demonstration.

Setup

This post assumes you have a Microsoft Azure account with the necessary resource providers registered, and the Azure Command-Line Interface (CLI), az, installed and available to your command shell. You will also need Helm and Istio 1.1.3 installed and configured, which is covered in the last post.

screen_shot_2019-03-27_at_1_35_46_pm

Start by logging into Azure from your command shell.

az login \
  --username {{ your_username_here }} \
  --password {{ your_password_here }}

Resource Providers

If you are new to Azure or AKS, you may need to register some additional resource providers to complete this demonstration.

az provider list --output table

screen_shot_2019-03-27_at_5_37_46_pm

If you are missing required resource providers, you will see errors similar to the one shown below. Simply activate the particular provider corresponding to the error.

Operation failed with status:'Bad Request'. 
Details: Required resource provider registrations 
Microsoft.Compute, Microsoft.Network are missing.

To register the necessary providers, use the Azure CLI or the Azure Portal UI.

az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Network
az provider register --namespace Microsoft.Compute

Resource Group

AKS requires an Azure Resource Group. According to Azure, a resource group is a container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. I chose to create a new resource group associated with my closest geographic Azure Region, East US, using the Azure CLI.

az group create \
  --resource-group aks-observability-demo \
  --location eastus

screen_shot_2019-03-26_at_6_54_39_pm

Create the AKS Cluster

Before creating the AKS cluster, check for the latest versions of AKS. At the time of this post, the latest version of AKS was 1.12.6.

az aks get-versions \
  --location eastus \
  --output table

screen_shot_2019-03-26_at_6_56_38_pm

Using the latest AKS version, create the AKS managed cluster. There are many configuration options available with the az aks create command. For this post, I am creating three worker nodes using the Azure Standard_DS3_v2 VM type, which will give us a total of 12 vCPUs and 42 GB of memory. Anything smaller and all the Pods may not be schedulable. Instead of supplying an existing SSH key, I will let Azure create a new one. You should have no need to SSH into the worker nodes. I am also enabling the monitoring add-on. According to Azure, the add-on sets up Azure Monitor for containers, announced in December 2018, which monitors the performance of workloads deployed to Kubernetes environments hosted on AKS.

time az aks create \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --node-count 3 \
  --node-vm-size Standard_DS3_v2 \
  --enable-addons monitoring \
  --generate-ssh-keys \
  --kubernetes-version 1.12.6

Using the time command, we observe that the cluster took approximately 5m48s to provision; I have seen times up to almost 10 minutes. AKS provisioning is not as fast as GKE, which in my experience is at least 2x-3x faster than AKS for a similarly sized cluster.

screen_shot_2019-03-26_at_7_03_49_pm

After the cluster creation completes, retrieve your AKS cluster credentials.

az aks get-credentials \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --overwrite-existing

Examine the Cluster

Use the following command to confirm the cluster is ready by examining the status of three worker nodes.

kubectl get nodes --output=wide

screen_shot_2019-03-27_at_6_06_10_pm.png

Observe that Azure currently uses Ubuntu 16.04.5 LTS for the worker node’s host operating system. If you recall, GKE offers both Ubuntu as well as a Container-Optimized OS from Google.

Kubernetes Dashboard

Unlike GKE, AKS has no custom dashboard. Therefore, we will use the Kubernetes Web UI (dashboard), which is installed by default with AKS. According to Azure, to make full use of the dashboard, since the AKS cluster uses RBAC, a ClusterRoleBinding must be created before you can correctly access the dashboard.

kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard

Next, we must create a proxy tunnel on local port 8001 to the dashboard running on the AKS cluster. This CLI command creates a proxy between your local system and the Kubernetes API and opens your web browser to the Kubernetes dashboard.

az aks browse \
  --name aks-observability-demo \
  --resource-group aks-observability-demo

screen_shot_2019-03-26_at_7_08_54_pm

Although you should use the Azure CLI, PowerShell, or SDK for all your AKS configuration tasks, using the dashboard for monitoring the cluster and the resources running on it is invaluable.

screen_shot_2019-03-26_at_7_06_57_pm

The Kubernetes dashboard also provides access to raw container logs. Azure Monitor provides the ability to construct complex log queries, but for quick troubleshooting, you may just want to see the raw logs a specific container is outputting, from the dashboard.

screen_shot_2019-03-29_at_9_23_57_pm

Azure Portal

Logging into the Azure Portal, we can observe the AKS cluster, within the new Resource Group.

screen_shot_2019-03-26_at_7_08_25_pm

In addition to the Azure Resource Group we created, there will be a second Resource Group created automatically during the creation of the AKS cluster. This group contains all the resources that compose the AKS cluster. These resources include the three worker node VM instances, and their corresponding storage disks and NICs. The group also includes a network security group, route table, virtual network, and an availability set.

screen_shot_2019-03-26_at_7_08_04_pm

Deploy Istio

From this point on, the process to deploy the Istio Service Mesh and the Go-based microservices platform follows the previous post and uses the exact same scripts. After modifying the Kubernetes resource files, to deploy Istio, use the bash script, part4_install_istio.sh. I have added a few more pauses in the script to account for the apparently slower response times from AKS as opposed to GKE. It definitely takes longer to spin up the Istio resources on AKS than on GKE, which can result in errors if you do not pause between each stage of the deployment process.

screen_shot_2019-03-26_at_7_11_44_pm

screen_shot_2019-03-26_at_7_18_26_pm

Using the Kubernetes dashboard, we can view the Istio resources running in the istio-system Namespace, as shown below. Confirm that all resource Pods are running and healthy before deploying the Go-based microservices platform.

screen_shot_2019-03-26_at_7_16_50_pm

Deploy the Platform

Deploy the Go-based microservices platform, using bash deploy script, part5a_deploy_resources.sh.

screen_shot_2019-03-26_at_7_20_05_pm

The script deploys two replicas (Pods) of each of the eight microservices, Service-A through Service-H, and the Angular UI, to the dev and test Namespaces, for a total of 36 Pods. Each Pod will have the Istio sidecar proxy (Envoy Proxy) injected into it, alongside the microservice or UI.

screen_shot_2019-03-26_at_7_21_24_pm

Azure Load Balancer

If we return to the Resource Group created automatically when the AKS cluster was created, we will now see two additional resources. There is now an Azure Load Balancer and Public IP Address.

screen_shot_2019-03-26_at_7_21_56_pm

Similar to the GKE cluster in the last post, when the Istio Ingress Gateway is deployed as part of the platform, it is materialized as an Azure Load Balancer. The front-end of the load balancer is the new public IP address. The back-end of the load-balancer is a pool containing the three AKS worker node VMs. The load balancer is associated with a set of rules and health probes.

screen_shot_2019-03-26_at_7_22_51_pm

DNS

I have associated the new Azure public IP address, connected with the front-end of the load balancer, with the four subdomains I am using to represent the UI and the edge service, Service-A, in both Namespaces. If Azure is your primary Cloud provider, then Azure DNS is a good choice to manage your domain’s DNS records. For this demo, you will require your own domain.

screen_shot_2019-03-28_at_9_43_42_pm

Testing the Platform

With everything deployed, verify that the platform is responding and generate HTTP traffic for the observability tools to record. Similar to last time, I have chosen hey, a modern load generator and benchmarking tool, and a worthy replacement for Apache Bench (ab). Unlike ab, hey supports HTTP/2. Below, I am running hey directly from Azure Cloud Shell. The tool is simulating 10 concurrent users, generating a total of 500 HTTP GET requests to Service A.

# quick setup from Azure Shell using Bash
go get -u github.com/rakyll/hey
cd go/src/github.com/rakyll/hey/
go build
  
./hey -n 500 -c 10 -h2 http://api.dev.example-api.com/api/ping

We had 100% success with all 500 calls resulting in an HTTP 200 OK success status response code. Based on the results, we can observe the platform was capable of approximately 4 requests/second, with an average response time of 2.48 seconds and a mean time of 2.80 seconds. Almost all of that time was the result of waiting for the response, as the details indicate.

screen_shot_2019-03-26_at_7_57_03_pm

Logging

In this post, we have replaced GCP’s Stackdriver logging with Azure Monitor logs. According to Microsoft, Azure Monitor maximizes the availability and performance of applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from Cloud and on-premises environments. In my opinion, Stackdriver is a superior solution for searching and correlating the logs of distributed applications running on Kubernetes. I find the interface and query language of Stackdriver easier and more intuitive than Azure Monitor, which, although powerful, requires substantial query knowledge to obtain meaningful results. For example, here is a query to view the log entries from the services in the dev Namespace, within the last day.

let startTimestamp = ago(1d);
KubePodInventory
| where TimeGenerated > startTimestamp
| where ClusterName =~ "aks-observability-demo"
| where Namespace == "dev"
| where Name contains "service-"
| distinct ContainerID
| join
(
    ContainerLog
    | where TimeGenerated > startTimestamp
)
on ContainerID
| project LogEntrySource, LogEntry, TimeGenerated, Name
| order by TimeGenerated desc
| render table

Below, we see the Logs interface with the search query and log entry results.

screen_shot_2019-03-29_at_9_13_37_pm

Below, we see a detailed view of a single log entry from Service A.

screen_shot_2019-03-29_at_9_18_12_pm

Observability Tools

The previous post goes into greater detail on the features of each of the observability tools provided by Istio, including Prometheus, Grafana, Jaeger, and Kiali.

We can use the exact same kubectl port-forward commands to connect to the tools on AKS as we did on GKE. According to Google, Kubernetes port forwarding allows using a resource name, such as a service name, to select a matching pod to port forward to, as of Kubernetes v1.10. We forward a local port to a port on the tool’s pod.

# Grafana
kubectl port-forward -n istio-system \
  $(kubectl get pod -n istio-system -l app=grafana \
  -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
  
# Prometheus
kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=prometheus \
  -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
  
# Jaeger
kubectl port-forward -n istio-system \
  $(kubectl get pod -n istio-system -l app=jaeger \
  -o jsonpath='{.items[0].metadata.name}') 16686:16686 &
  
# Kiali
kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=kiali \
  -o jsonpath='{.items[0].metadata.name}') 20001:20001 &

screen_shot_2019-03-26_at_8_04_24_pm

Prometheus and Grafana

Prometheus is a completely open source and community-driven systems monitoring and alerting toolkit originally built at SoundCloud, circa 2012. Interestingly, Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second hosted-project, after Kubernetes.

Grafana describes itself as the leading open source software for time series analytics. According to Grafana Labs, Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. You can easily create, explore, and share visually-rich, data-driven dashboards. Grafana also allows users to visually define alert rules for your most important metrics. Grafana will continuously evaluate rules and can send notifications.

According to Istio, the Grafana add-on is a pre-configured instance of Grafana. The Grafana Docker base image has been modified to start with both a Prometheus data source and the Istio Dashboard installed. Below, we see one of the pre-configured dashboards, the Istio Service Dashboard.

screen_shot_2019-03-26_at_8_16_52_pm

Jaeger

According to their website, Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems, including distributed context propagation, distributed transaction monitoring, root cause analysis, service dependency analysis, and performance and latency optimization. The Jaeger website contains a good overview of Jaeger’s architecture and general tracing-related terminology.

screen_shot_2019-03-26_at_8_03_31_pm

Below, we see a typical, distributed trace of the services, starting at the Istio Ingress Gateway and passing across the upstream service dependencies.

screen_shot_2019-03-26_at_8_03_45_pm

Kiali

According to their website, Kiali provides answers to the questions: What are the microservices in my Istio service mesh, and how are they connected? Kiali works with Istio, in OpenShift or Kubernetes, to visualize the service mesh topology, to provide visibility into features like circuit breakers, request rates and more. It offers insights about the mesh components at different levels, from abstract Applications to Services and Workloads.

There is a common Kubernetes Secret that controls access to the Kiali API and UI. The default login is admin, the password is 1f2d1e2e67df.

screen_shot_2019-03-26_at_7_59_17_pm

Below, we see a detailed view of our platform, running in the dev namespace, on AKS.

screen_shot_2019-03-26_at_8_02_38_pm

Delete AKS Cluster

Once you are finished with this demo, use the following two commands to tear down the AKS cluster and remove the cluster context from your local configuration.

time az aks delete \
  --name aks-observability-demo \
  --resource-group aks-observability-demo \
  --yes

kubectl config delete-context aks-observability-demo

Conclusion

In this brief follow-up post, we have explored how the current set of observability tools, part of the latest version of Istio Service Mesh, integrates with Azure Kubernetes Service (AKS).

All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.



Architecting Cloud-Optimized Apps with AKS (Azure’s Managed Kubernetes), Azure Service Bus, and Cosmos DB

An earlier post, Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ, demonstrated the use of a message-based, event-driven, decoupled architectural approach for communications between microservices, using Spring AMQP and RabbitMQ. This distributed computing model is known as eventual consistency. To paraphrase microservices.io, ‘using an event-driven, eventually consistent approach, each service publishes an event whenever it updates its data. Other services subscribe to events. When an event is received, a service (subscriber) updates its data.’

That earlier post illustrated a fairly simple example application, the Voter API, consisting of a set of three Spring Boot microservices backed by MongoDB and RabbitMQ, and fronted by an API Gateway built with HAProxy. All API components were containerized using Docker and designed for use with Docker CE for AWS as the Container-as-a-Service (CaaS) platform.

Optimizing for Kubernetes on Azure

This post will demonstrate how a modern application, such as the Voter API, is optimized for Kubernetes in the Cloud (Kubernetes-as-a-Service), in this case, AKS, Azure’s new public preview of Managed Kubernetes for Azure Container Service. According to Microsoft, the goal of AKS is to simplify the deployment, management, and operations of Kubernetes. I wrote about AKS in detail, in my last post, First Impressions of AKS, Azure’s New Managed Kubernetes Container Service.

In addition to migrating to AKS, the Voter API will take advantage of additional enterprise-grade Azure resources, including Azure Service Bus and Cosmos DB, replacements for the Voter API’s RabbitMQ and MongoDB. There are several architectural options for the Voter API’s messaging and NoSQL data source requirements when moving to Azure.

  1. Keep Dockerized RabbitMQ and MongoDB – Easy to deploy to Kubernetes, but not easily scalable, highly-available, or manageable. Would require storage-optimized Azure VMs for nodes, node affinity, and persistent storage for data.
  2. Replace with Cloud-based Non-Azure Equivalents – Use SaaS-based equivalents, such as CloudAMQP (RabbitMQ-as-a-Service) and MongoDB Atlas, which will provide scalability, high-availability, and manageability.
  3. Replace with Azure Service Bus and Cosmos DB – Provides all the advantages of SaaS-based equivalents and, additionally, as Azure resources, benefits from being in the Azure Cloud alongside AKS.

Source Code

The Kubernetes resource files and deployment scripts used in this post are all available on GitHub. This is the only project you need to clone to reproduce the AKS example in this post.

git clone \
  --branch master --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/azure-aks-sb-cosmosdb-demo.git

The Docker images for the three Spring microservices deployed to AKS, Voter, Candidate, and Election, are available on Docker Hub. Optionally, the source code, including Dockerfiles, for the Voter, Candidate, and Election microservices, as well as the Voter Client are available on GitHub, in the kub-aks branch.

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/candidate-service.git

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/election-service.git

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/voter-service.git

git clone \
  --branch kub-aks --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/voter-client.git

Azure Service Bus

To demonstrate the capabilities of Azure’s Service Bus, the Voter API’s Spring microservices’ source code has been re-written to work with Azure Service Bus instead of RabbitMQ. A future post will explore the microservices’ messaging code. Note that a large application, written specifically for an easily portable technology such as RabbitMQ or MongoDB, would more likely remain on that technology, even if the application were lifted and shifted to the Cloud or moved between Cloud Service Providers (CSPs). Portability is something important to keep in mind when choosing modern technologies.

Service Bus is Azure’s reliable cloud Messaging-as-a-Service (MaaS). Service Bus is an original Azure resource offering, available for several years. The core components of the Service Bus messaging infrastructure are queues, topics, and subscriptions. According to Microsoft, ‘the primary difference is that topics support publish/subscribe capabilities that can be used for sophisticated content-based routing and delivery logic, including sending to multiple recipients.’

Since the three Voter API’s microservices are not required to produce messages for more than one other service consumer, Service Bus queues are sufficient, as opposed to a pub/sub model using Service Bus topics.

Cosmos DB

Cosmos DB, Microsoft’s globally distributed, multi-model database, offers throughput, latency, availability, and consistency guarantees with comprehensive service level agreements (SLAs). Ideal for the Voter API, Cosmos DB supports MongoDB’s data models through the MongoDB API, a MongoDB database service built on top of Cosmos DB. The MongoDB API is compatible with existing MongoDB libraries, drivers, tools, and applications. Therefore, there are no code changes required to convert the Voter API from MongoDB to Cosmos DB. I simply had to change the database connection string.

NGINX Ingress Controller

Although the Voter API’s HAProxy-based API Gateway could be deployed to AKS, it is not optimal for Kubernetes. Instead, the Voter API will use an NGINX-based Ingress Controller. NGINX will serve as an API Gateway, as HAProxy did, previously.

According to NGINX, ‘an Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for your Kubernetes services. Such a load balancer usually exposes your services to clients outside of your Kubernetes cluster.’

An Ingress resource requires an Ingress Controller to function. Continuing from NGINX, ‘an Ingress Controller is an application that monitors Ingress resources via the Kubernetes API and updates the configuration of a load balancer in case of any changes. Different load balancers require different Ingress controller implementations. In the case of software load balancers, such as NGINX, an Ingress controller is deployed in a pod along with a load balancer.’

There are currently two NGINX-based Ingress Controllers available, one from Kubernetes and one directly from NGINX. Both being equal, for this post, I chose the Kubernetes version, without RBAC (Kubernetes offers a version with and without RBAC). RBAC should always be used for actual cluster security. There are several advantages of using either version of the NGINX Ingress Controller for Kubernetes, including Layer 4 TCP and UDP and Layer 7 HTTP load balancing, reverse proxying, ease of SSL termination, dynamically-configurable path-based rules, and support for multiple hostnames.

Azure Web App

Lastly, the Voter Client application, not really part of the Voter API, but useful for demonstration purposes, will be converted from a containerized application to an Azure Web App. Since it is not part of the Voter API, separating the Client application from AKS makes better architectural sense. Web Apps are a powerful, richly-featured, yet incredibly simple way to host applications and services on Azure. For more information on using Azure Web Apps, read my recent post, Developing Applications for the Cloud with Azure App Services and MongoDB Atlas.

Revised Component Architecture

Below is a simplified component diagram of the new architecture, including Azure Service Bus, Cosmos DB, and the NGINX Ingress Controller. The new architecture looks similar to the previous architecture, but as you will see, it is actually very different.

AKS-r8.png

Process Flow

To understand the role of each API component, let’s look at one of the event-driven, decoupled process flows, the creation of a new election candidate. In the simplified flow diagram below, an API consumer executes an HTTP POST request containing the new candidate object as JSON. The Candidate microservice receives the HTTP request and creates a new document in the Cosmos DB Voter database. A Spring RepositoryEventHandler within the Candidate microservice responds to the document creation and publishes a Create Candidate event message, containing the new candidate object as JSON, to the Azure Service Bus Candidate Queue.

Candidate_Producer

Independently, the Voter microservice is listening to the Candidate Queue. Whenever a new message is produced by the Candidate microservice, the Voter microservice retrieves the message off the queue. The Voter microservice then transforms the new candidate object contained in the incoming message to its own candidate data model and creates a new document in its own Voter database.

Voter_Consumer

The same process flows exist between the Election and the Candidate microservices. The Candidate microservice maintains current elections in its database, which are retrieved from the Election queue.

Data Models

It is useful to understand that the Candidate microservice’s candidate domain model is not necessarily identical to the Voter microservice’s candidate domain model. Each microservice may choose to maintain its own representation of a vote, a candidate, and an election. The Voter service transforms the new candidate object in the incoming message based on its own needs. In this case, the Voter microservice is only interested in a subset of the total fields in the Candidate microservice’s model. This is the beauty of decoupling microservices, their domain models, and their datastores.

Other Events

The versions of the Voter API microservices used for this post only support Election Created events and Candidate Created events. They do not handle Delete or Update events, which would be necessary to be fully functional. For example, if a candidate withdraws from an election, the Voter service would need to be notified so no one places votes for that candidate. This would normally happen through a Candidate Delete or Candidate Update event.

Provisioning Azure Service Bus

First, the Azure Service Bus is provisioned. Provisioning the Service Bus may be accomplished using several different methods, including manually using the Azure Portal or programmatically using Azure Resource Manager (ARM) with PowerShell or Terraform. I chose to provision the Azure Service Bus and the two queues using the Azure Portal for expediency. I chose the Basic Service Bus Tier of service, of which there are three tiers, Basic, Standard, and Premium.

Azure_006_Full_ServiceBus

The application requires two queues, the candidate.queue, and the election.queue.

Azure_008_ServiceBus
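
Although I used the Portal, the same namespace and queues could likely be provisioned with the Azure CLI. Below is a rough sketch, assuming the servicebus command group available in recent versions of the CLI; the namespace and resource group names are placeholders.

# create the service bus namespace (basic tier)
az servicebus namespace create \
  --name servicebus_namespace_goes_here \
  --resource-group resource_group_name_goes_here \
  --location eastus \
  --sku Basic

# create the two queues used by the voter api
az servicebus queue create \
  --name candidate.queue \
  --namespace-name servicebus_namespace_goes_here \
  --resource-group resource_group_name_goes_here

az servicebus queue create \
  --name election.queue \
  --namespace-name servicebus_namespace_goes_here \
  --resource-group resource_group_name_goes_here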

Provisioning Cosmos DB

Next, Cosmos DB is provisioned. Like Azure Service Bus, Cosmos DB may be provisioned using several methods, including manually using the Azure Portal, programmatically using Azure Resource Manager (ARM) with PowerShell or Terraform, or using the Azure CLI, which was my choice.

az cosmosdb create \
  --name cosmosdb_instance_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --location "East US=0" \
  --kind MongoDB

The post’s Cosmos DB instance exists within the single East US Region, with no failover. In a real Production environment, you would configure Cosmos DB with multi-region failover. I chose MongoDB as the type of Cosmos DB database account to create. The allowed values are GlobalDocumentDB, MongoDB, Parse. All other settings were left to the default values.

The three Spring microservices each have their own database. You do not have to create the databases in advance of consuming the Voter API. The databases and the database collections will be automatically created when new documents are first inserted by the microservices. Below, the three databases and their collections have been created and populated with documents.

Azure_001_CosmosDB

The GitHub project repository also contains three shell scripts to generate sample vote, candidate, and election documents. The scripts will delete any previous documents from the database collections and generate new sets of sample documents. To use, you will have to update the scripts with your own Voter API URL.

AKS_012_AddVotes2

MongoDB Aggregation Pipeline

Each of the three Spring microservices uses Spring Data MongoDB, which takes advantage of MongoDB’s Aggregation Framework. According to MongoDB, ‘the aggregation framework is modeled on the concept of data processing pipelines. Documents enter a multi-stage pipeline that transforms the documents into an aggregated result.’ Below is an example of aggregation from the Candidate microservice’s VoterController class.

import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.query.Criteria;
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;

Aggregation aggregation = Aggregation.newAggregation(
    match(Criteria.where("election").is(election)),        // only votes for the given election
    group("candidate").count().as("votes"),                // group by candidate and count the votes
    project("votes").and("candidate").previousOperation(), // expose the candidate name alongside its vote count
    sort(Sort.Direction.DESC, "votes")                     // order by vote count, descending
);

To use MongoDB’s aggregation framework with Cosmos DB, it is currently necessary to activate the MongoDB Aggregation Pipeline Preview Feature of Cosmos DB. The feature can be activated from the Azure Portal, as shown below.
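
Alternatively, the feature can likely be enabled with the Azure CLI. The sketch below assumes the --capabilities parameter of az cosmosdb update and the EnableAggregationPipeline capability name; confirm both against the current Cosmos DB documentation before relying on them.

# enable the mongodb aggregation pipeline preview feature
az cosmosdb update \
  --name cosmosdb_instance_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --capabilities EnableAggregationPipeline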

Azure_002_CosmosDB

Cosmos DB Emulator

Be warned, Cosmos DB can be very expensive, even without database traffic or any Production-grade bells and whistles. Be careful when spinning up instances on Azure for learning purposes; the cost adds up quickly. In less than ten days, while writing this post, my cost was almost US$100 for the Voter API’s Cosmos DB instance.

I strongly recommend downloading the free Azure Cosmos DB Emulator to develop and test applications from your local machine. Although certainly not as convenient, it will save you the cost of developing for Cosmos DB directly on Azure.

With Cosmos DB, you pay for reserved throughput provisioned and data stored in containers (a collection of documents or a table or a graph). Yes, that’s right, Azure charges you per MongoDB collection, not even per database. Azure Cosmos DB’s current pricing model seems less than ideal for microservice architectures, each with their own database instance.

By default, the reserved throughput, billed as Request Units (RU) per second or RU/s, is set to 1,000 RU/s per collection. For development and testing, you can reduce each collection to a minimum of 400 RU/s. The Voter API creates five collections at 1,000 RU/s each, or 5,000 RU/s total. Reducing this to a total of 2,000 RU/s makes Cosmos DB marginally more affordable to explore.

Azure_003_CosmosDB.PNG

Building the AKS Cluster

An existing Azure Resource Group is required for AKS. I chose to use the latest available version of Kubernetes, 1.8.2.

# login to azure
az login \
  --username your_username  \
  --password your_password

# create resource group
az group create \
  --resource-group resource_group_name_goes_here \
  --location eastus

# create aks cluster
az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --ssh-key-value path_to_your_public_key \
  --kubernetes-version 1.8.2

# get credentials to access aks cluster
az aks get-credentials \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here

# display cluster's worker nodes
kubectl get nodes --output=wide

By default, AKS will provision a three-node Kubernetes cluster using Azure’s Standard D1 v2 Virtual Machines. According to Microsoft, ‘D-series VMs are general purpose VM sizes that provide a balanced CPU-to-memory ratio. They are ideal for testing and development, small to medium databases, and low to medium traffic web servers.’ Azure D1 v2 VMs are based on Linux OS images, currently Debian 8 (Jessie), with 1 vCPU and 3.5 GB of memory. By default with AKS, each VM receives 30 GiB of Standard HDD attached storage.

AKS_002_CreateCluster
You should always select the type and quantity of the cluster’s VMs and their attached storage, optimized for estimated traffic volumes and the specific workloads you are running. This can be done using the --node-count, --node-vm-size, and --node-osdisk-size arguments with the az aks create command, as sketched below.
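
For example, a larger cluster might be requested with something similar to the following; the node count, VM size, and disk size values are purely illustrative.

# create a larger aks cluster with bigger nodes and disks (illustrative values)
az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --ssh-key-value path_to_your_public_key \
  --kubernetes-version 1.8.2 \
  --node-count 3 \
  --node-vm-size Standard_D2_v2 \
  --node-osdisk-size 64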

Deployment

The Voter API resources are deployed to its own Kubernetes Namespace, voter-api. The NGINX Ingress Controller resources are deployed to a different namespace, ingress-nginx. Separate namespaces help organize individual Kubernetes resources and separate different concerns.

Voter API

First, the voter-api namespace is created. Then, five required Kubernetes Secrets are created within the namespace. These secrets all contain sensitive information, such as passwords, that should not be shared. There is one secret for each of the three Cosmos DB database connection strings, one secret for the Azure Service Bus connection string, and one secret for the Let’s Encrypt SSL/TLS certificate and private key, used for secure HTTPS access to the Voter API.
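
As a rough sketch, the namespace and secrets might be created with kubectl commands similar to those below. The azure-service-bus, azure-cosmosdb-voter, and api-voter-demo-secret names match the secrets referenced later in this post; the two remaining Cosmos DB secret names, the connection string values, and the certificate paths are placeholders.

# create the namespace for the voter api
kubectl create namespace voter-api

# one secret per cosmos db connection string (repeat for the candidate and election databases)
kubectl create secret generic azure-cosmosdb-voter \
  --from-literal=connection-string='voter_cosmosdb_connection_string_goes_here' \
  --namespace voter-api

# azure service bus connection string
kubectl create secret generic azure-service-bus \
  --from-literal=connection-string='service_bus_connection_string_goes_here' \
  --namespace voter-api

# let's encrypt certificate and private key, used for https access
kubectl create secret tls api-voter-demo-secret \
  --cert=path_to_certificate_goes_here \
  --key=path_to_private_key_goes_here \
  --namespace voter-api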

AKS_004_CreateVoterAPI

Secrets

The Voter API’s secrets are used to populate environment variables within the pods’ containers. The environment variables are then available for use within the containers. Below is a snippet of the Voter pod’s resource file, showing how the Cosmos DB and Service Bus connection string secrets are used to populate environment variables.

env:
  - name: AZURE_SERVICE_BUS_CONNECTION_STRING
    valueFrom:
      secretKeyRef:
        name: azure-service-bus
        key: connection-string
  - name: SPRING_DATA_MONGODB_URI
    valueFrom:
      secretKeyRef:
        name: azure-cosmosdb-voter
        key: connection-string

Shown below, the Cosmos DB and Service Bus connection string secrets have been injected into the Voter container and made available as environment variables to the microservice’s executable JAR file on start-up. As environment variables, the secrets are visible in plain text. Access to containers should be tightly controlled through Kubernetes RBAC and Azure AD, to ensure sensitive information, such as secrets, remains confidential.

AKS_014_EnvVars

Next, the three Kubernetes ReplicaSet resources, corresponding to the three Spring microservices, are created using Deployment controllers. According to Kubernetes, a Deployment that configures a ReplicaSet is now the recommended way to set up replication. The Deployments specify three replicas of each of the three Spring Services, resulting in a total of nine Kubernetes Pods.

Each pod, by default, will be scheduled on a different node if possible. According to Kubernetes, ‘the scheduler will automatically do a reasonable placement (e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.).’ Note below how each of the three microservice’s three replicas has been scheduled on a different node in the three-node AKS cluster.

AKS_005B_CreateVoterPods

Next, the three corresponding Kubernetes ClusterIP-type Services are created. And lastly, the Kubernetes Ingress is created. According to Kubernetes, the Ingress resource is an API object that manages external access to the services in a cluster, typically HTTP. Ingress provides load balancing, SSL termination, and name-based virtual hosting.
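
Each of these resources is created from its configuration file with kubectl. The following sketch uses hypothetical file names that stand in for the actual files in the project repository.

# create the three deployments (three replicas each)
kubectl create --namespace voter-api \
  -f candidate-deployment.yaml \
  -f election-deployment.yaml \
  -f voter-deployment.yaml

# create the three clusterip services and the ingress
kubectl create --namespace voter-api \
  -f candidate-service.yaml \
  -f election-service.yaml \
  -f voter-service.yaml \
  -f voter-ingress.yaml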

The Ingress configuration contains the routing rules used with the NGINX Ingress Controller. Shown below are the routing rules for each of the three microservices within the Voter API. Incoming API requests are routed to the appropriate pod and service port by NGINX.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: voter-ingress
  namespace: voter-api
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - api.voter-demo.com
    secretName: api-voter-demo-secret
  rules:
  - http:
      paths:
      - path: /candidate
        backend:
          serviceName: candidate
          servicePort: 8080
      - path: /election
        backend:
          serviceName: election
          servicePort: 8080
      - path: /voter
        backend:
          serviceName: voter
          servicePort: 8080

The screengrab below shows all of the Voter API resources created on AKS.

AKS_005B_CreateVoterAPI

NGINX Ingress Controller

After completing the deployment of the Voter API, the NGINX Ingress Controller is created, starting with the ingress-nginx namespace. The Controller deployment consists of the NGINX Ingress Controller itself, three Kubernetes ConfigMap resources, and a default back-end application. The Controller and the back-end each have their own Service resources. Like the Voter API, each has three replicas, for a total of six pods. Together, the Ingress resource and the NGINX Ingress Controller manage traffic to the Spring microservices.
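
As with the Voter API, the Controller’s resources are created from configuration files with kubectl. The file names below are hypothetical stand-ins, not the exact files from the NGINX Ingress Controller documentation.

# create the ingress-nginx namespace
kubectl create namespace ingress-nginx

# create the default back-end, the configmaps, the controller, and their services
kubectl create --namespace ingress-nginx \
  -f default-backend.yaml \
  -f nginx-configmap.yaml \
  -f tcp-services-configmap.yaml \
  -f udp-services-configmap.yaml \
  -f nginx-ingress-controller.yaml \
  -f nginx-ingress-service.yaml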

The screengrab below shows all of the NGINX Ingress Controller resources created on AKS.

AKS_015_NGINX_Resources.PNG

The NGINX Ingress Controller Service, shown above, has an external public IP address associated with it. This is because that Service is of type LoadBalancer. External requests to the Voter API are routed through the NGINX Ingress Controller, on this IP address.

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https

If you are only using HTTPS, not HTTP, then the references to HTTP and port 80 in the configuration above are unnecessary. The NGINX Ingress Controller’s resources are explained in detail in the GitHub documentation, along with further configuration instructions.

DNS

To provide convenient access to the Voter API and the Voter Client, my domain, voter-demo.com, is mapped to the public IP address of the Voter API’s NGINX Ingress Controller and to the public IP address of the Voter Client Azure Web App. DNS configuration is done through Azure’s DNS Zone resource.
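
As a sketch, the A and TXT records might be added to the DNS Zone with the Azure CLI, as shown below; the record set names, IP address, and verification value are placeholders.

# point api.voter-demo.com at the nginx ingress controller's public ip
az network dns record-set a add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name api \
  --ipv4-address ingress_controller_public_ip_goes_here

# txt record used by the voter client azure web app to verify the domain
az network dns record-set txt add-record \
  --resource-group resource_group_name_goes_here \
  --zone-name voter-demo.com \
  --record-set-name txt_record_name_goes_here \
  --value web_app_verification_value_goes_here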

AKS_008_FinalDNS

The two TXT type records might not look as familiar as the SOA, NS, and A type records. The TXT records are required to associate the domain entries with the Voter Client Azure Web App. Browsing to http://www.voter-demo.com or simply http://voter-demo.com brings up the Voter Client.

VoterClientAngular5_small.png

The Client sends and receives data via the Voter API, available securely at https://api.voter-demo.com.

Routing API Requests

With the Pods, Services, Ingress, and NGINX Ingress Controller created and configured, as well as the Azure Layer 4 Load Balancer and DNS Zone, HTTP requests from API consumers are properly and securely routed to the appropriate microservices. In the example below, three back-to-back requests are made to the voter/info API endpoint. HTTP requests are properly routed to one of the three Voter pod replicas using the default round-robin algorithm, as shown by the different hostnames (pod names) and IP addresses (private pod IPs) in each HTTP response.
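
The same round-robin behavior can be observed from the command line; a quick sketch, which simply calls the endpoint three times in a row:

# make three back-to-back requests to the voter service's info endpoint
for i in 1 2 3; do
  curl -s https://api.voter-demo.com/voter/info
  echo
done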

AKS_013B_LoadBalancingReplicaSets.PNG

Final Architecture

Shown below is the final Voter API Azure architecture. To simplify the diagram, I have deliberately left out the three microservice’s ClusterIP-type Services, the three default back-end application pods, and the default back-end application’s ClusterIP-type Service. All resources shown below are within the single East US Azure region, except DNS, which is a global resource.

kub-aks-v10-small.png

Shown below is the new Azure Resource Group created by Azure during the AKS provisioning process. The Resource Group contains the various resources required to support the AKS cluster, NGINX Ingress Controller, and the Voter API. Necessary Azure resources were automatically provisioned when I provisioned AKS and when I created the new Voter API and NGINX resources.

AKS_006_FinalRG

In addition to the Resource Group above, the original Resource Group contains the AKS Container Service resource itself, Service Bus, Cosmos DB, and the DNS Zone resource.

AKS_007_FinalRG

The Voter Client Web App, consisting of the Azure App Service and App Service plan resource, is located in a third, separate Resource Group, not shown here.

Cleaning Up AKS

A nice feature of AKS: running an az aks delete command will delete all of the Azure resources created as part of provisioning AKS, the API, and the Ingress Controller. You will have to delete the Cosmos DB, Service Bus, and DNS Zone resources separately.

az aks delete \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here

Conclusion

Taking advantage of Kubernetes with AKS, and the array of Azure’s enterprise-grade resources, the Voter API was shifted from a simple Docker architecture to a production-ready solution. The Voter API is now easier to manage, horizontally scalable, fault-tolerant, and marginally more secure. It is capable of reliably supporting dozens more microservices, with multiple replicas, and of handling a high volume of data transactions and event messages.

There is much more that needs to be done to productionalize the Voter API on AKS, including:

  • Add multi-region failover of Cosmos DB
  • Upgrade to Service Bus Standard or Premium Tier
  • Optimize Azure VMs and storage for anticipated traffic volumes and application-specific workloads
  • Implement Kubernetes RBAC
  • Add monitoring, logging, and alerting with Envoy or similar
  • Secure end-to-end TLS communications with Istio or similar
  • Secure the API with OAuth and Azure AD
  • Automate everything with DevOps – AKS provisioning, testing code, creating resources, updating microservices, and managing data

All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.


First Impressions of AKS, Azure’s New Managed Kubernetes Container Service

Kubernetes as a Service

On October 24, 2017, less than a month prior to writing this post, Microsoft released the public preview of Managed Kubernetes for Azure Container Service (AKS). According to Microsoft, the goal of AKS is to simplify the deployment, management, and operations of Kubernetes. In a blog post, Gabe Monroy, PM Lead, Containers @ Microsoft Azure, wrote that AKS ‘features an Azure-hosted control plane, automated upgrades, self-healing, easy scaling.’ Monroy goes on to say, ‘with AKS, customers get the benefit of open source Kubernetes without complexity and operational overhead.’

Unquestionably, Kubernetes has become the leading Container-as-a-Service (CaaS) choice, at least for now. Along with the release of AKS by Microsoft, there have been other recent announcements which reinforce Kubernetes’ dominance. In late September, Rancher Labs announced the release of Rancher 2.0, which, according to Rancher, would be based on Kubernetes. In mid-October, at DockerCon Europe 2017, Docker announced they were integrating Kubernetes into the Docker platform. Even AWS seems to be warming up to Kubernetes, despite their own ECS; according to sources, there are rumors AWS will announce a Kubernetes offering at AWS re:Invent 2017, which starts a week from now.

Previewing AKS

Being a big fan of both Azure and Kubernetes, I decided to give AKS a try. I chose to deploy an existing, simple, multi-tier web application, which I had used in several previous posts, including Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ. All the code used for this post is available on GitHub.

Sample Application

The application, the Voter application, is composed of an AngularJS frontend client-side UI, and two Java Spring Boot microservices, both backed by individual MongoDB databases, and fronted with an HAProxy-based API Gateway. The AngularJS UI calls the API Gateway, which in turn calls the Spring services. The two microservices communicate with each other using HTTP-based inter-process communication (IPC). Although I would prefer event-based service-to-service IPC, HTTP-based IPC was simpler to implement, for this post.

AKS v4

Interestingly, the Voter application was designed using Docker Community Edition for Mac and deployed to AWS using Docker Community Edition for AWS. Not only would this be my chance to preview AKS, but also an opportunity to compare the ease of developing for Docker CE on AWS using a Mac, to developing for Kubernetes with AKS using Docker Community Edition for Windows.

Required Software

In order to develop for AKS on my Windows 10 Enterprise workstation, I first made sure I had the latest copies of the following software:

If you are following along with the post, make sure you have the latest version of the Azure CLI, minimally 2.0.21, according to the Azure CLI release notes. Also, I happen to be running the latest version of Docker CE from the Edge Channel. However, either channel’s latest release of Docker CE for Windows should work for this post. Using PowerShell is optional. I prefer PowerShell over working from the Windows Command Prompt, if for nothing else than to preserve my command history, by default.

AKS_V2_04

Kubernetes Resources with Kompose

Originally developed for Docker CE, the Voter application stack was defined in a single Docker Compose file.


version: '3'
services:
  mongodb:
    image: mongo:latest
    command:
      - --smallfiles
    hostname: mongodb
    ports:
      - 27017:27017/tcp
    networks:
      - voter_overlay_net
    volumes:
      - voter_data_vol:/data/db
  candidate:
    image: garystafford/candidate-service:0.2.28
    depends_on:
      - mongodb
    hostname: candidate
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
  voter:
    image: garystafford/voter-service:0.2.104
    depends_on:
      - mongodb
      - candidate
    hostname: voter
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
  client:
    image: garystafford/voter-client:0.2.44
    depends_on:
      - voter
    hostname: client
    ports:
      - 80:8080/tcp
    networks:
      - voter_overlay_net
  gateway:
    image: garystafford/voter-api-gateway:0.2.24
    depends_on:
      - voter
    hostname: gateway
    ports:
      - 8080:8080/tcp
    networks:
      - voter_overlay_net
networks:
  voter_overlay_net:
    driver: overlay
volumes:
  voter_data_vol:

To work on AKS, the application stack’s configuration needs to be reproduced as Kubernetes configuration files. Instead of writing the configuration files manually, I chose to use kompose. Kompose is described on its website as ‘a conversion tool for Docker Compose to container orchestrators such as Kubernetes.’ Using kompose, I was able to automatically convert the Docker Compose file into analogous Kubernetes resource configuration files.

kompose convert -f docker-compose.yml

Each Docker service in the Docker Compose file was translated into a separate Kubernetes Deployment resource configuration file, as well as a corresponding Service resource configuration file.

AKS_Demo_08

For the AngularJS Client Service and the HAProxy API Gateway Service, I had to modify the Service configuration files to switch the Service type to a Load Balancer (type: LoadBalancer). With the LoadBalancer type, Kubernetes assigns a publicly accessible IP address to each Service; the reasons for this are explained later in the post.

The MongoDB service requires a persistent storage volume. To accomplish this with Kubernetes, kompose created a PersistentVolumeClaim resource configuration file. I did have to create a corresponding PersistentVolume resource configuration file. It was also necessary to modify the PersistentVolumeClaim resource configuration file, specifying the Storage Class Name as manual, to correspond to the AKS Storage Class configuration (storageClassName: manual).


kind: PersistentVolume
apiVersion: v1
metadata:
  name: voter-data-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"

From the original Docker Compose file, containing five Docker services, I ended up with a dozen individual Kubernetes resource configuration files. Individual configuration files are optimal for fine-grain management of Kubernetes resources. The Docker Compose file and the Kubernetes resource configuration files are included in the GitHub project.

git clone \
  --branch master --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/azure-aks-demo.git

Creating AKS Resources

New AKS Feature Flag

According to Microsoft, while AKS remains in preview, creating new clusters requires a feature flag on your subscription.

az provider register -n Microsoft.ContainerService

Using a brand new Azure account for this demo, I also needed to activate two additional feature flags.

az provider register -n Microsoft.Network
az provider register -n Microsoft.Compute

If you are missing the required feature flags, you will see errors similar to the one below.

Operation failed with status: ‘Bad Request’. Details: Required resource provider registrations Microsoft.Compute,Microsoft.Network are missing.

Resource Group

AKS requires an Azure Resource Group. I chose to create a new Resource Group, using the Azure CLI.

az group create \
  --resource-group resource_group_name_goes_here \
  --location eastus

AKS_Demo_01

New Kubernetes Cluster

Using the aks feature of the Azure CLI version 2.0.21 or later, I provisioned a new Kubernetes cluster. By default, Azure will create a 3-node cluster. You can override the default number of nodes using the --node-count parameter; I chose one node. The version of Kubernetes you choose is also configurable using the --kubernetes-version parameter. I selected the latest Kubernetes version available with AKS, 1.8.2.

az aks create \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here \
  --node-count 1 \
  --generate-ssh-keys \
  --kubernetes-version 1.8.2

AKS_Demo_03

The newly created Azure Resource Group and AKS Kubernetes Cluster were both then visible on the Azure Portal.

AKS_Demo_04b

In addition to the new Resource Group I created, Azure also created a second Resource Group containing seven Azure resources. These Azure resources include a Virtual Machine (the single Kubernetes node), Network Security Group, Network Interface, Virtual Network, Route Table, Disk, and an Availability Set.

AKS_Demo_05b

With AKS up and running, I used another Azure CLI command to create a proxy connection to the Kubernetes Dashboard, which was deployed automatically and was running within the new AKS Cluster. The Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters.

az aks browse \
  --name cluster_name_goes_here \
  --resource-group resource_group_name_goes_here

AKS_Demo_23

Although no applications had been deployed to AKS yet, there were several Kubernetes components running within the AKS Cluster. Kubernetes components running within the kube-system Namespace included heapster, kube-dns, kubernetes-dashboard, kube-proxy, kube-svc-redirect, and tunnelfront.

AKS_Demo_06B

Deploying the Application

MongoDB should be deployed first. Both the Voter and Candidate microservices depend on MongoDB. MongoDB is composed of four Kubernetes resources, a Deployment resource, Service resource, PersistentVolumeClaim resource, and PersistentVolume resource. I used kubectl, the command line interface for running commands against Kubernetes clusters, to create the four MongoDB resources, from the configuration files.

kubectl create \
  -f voter-data-vol-persistentvolume.yaml \
  -f voter-data-vol-persistentvolumeclaim.yaml \
  -f mongodb-deployment.yaml \
  -f mongodb-service.yaml

AKS_Demo_09

After MongoDB was deployed and running, I created the four remaining Deployment resources, Client, Gateway, Voter, and Candidate, from the Deployment resource configuration files. According to Kubernetes, ‘a Deployment controller provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate.’
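
A minimal sketch of those commands, assuming the kompose-generated naming convention (service-name-deployment.yaml) seen above with mongodb-deployment.yaml:

kubectl create \
  -f candidate-deployment.yaml \
  -f voter-deployment.yaml \
  -f client-deployment.yaml \
  -f gateway-deployment.yaml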

AKS_Demo_10

Lastly, I created the remaining Service resources from the Service resource configuration files. According to Kubernetes, ‘a Service is an abstraction which defines a logical set of Pods and a policy by which to access them.’
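
Again assuming kompose’s service-name-service.yaml naming convention, the remaining Services might be created like this:

kubectl create \
  -f candidate-service.yaml \
  -f voter-service.yaml \
  -f client-service.yaml \
  -f gateway-service.yaml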

AKS_Demo_11

Switching back to the Kubernetes Dashboard, the Voter application components were now visible.

AKS_Demo_13

There were five Kubernetes Pods, one for each application component. Since there is only one Node in the Kubernetes Cluster, all five Pods were deployed to the same Node. There were also five corresponding Kubernetes Deployments.

AKS_Demo_14

Similarly, there were five corresponding Kubernetes ReplicaSets, the next-generation Replication Controller. There were also five corresponding Kubernetes Services. Note the Gateway and Client Services have an External Endpoint (External IP) associated with them. The IPs were created as a result of adding the Load Balancer Service type to their Service resource configuration files, mentioned earlier.

AKS_Demo_15.PNG

Lastly, note the Persistent Volume Claim for MongoDB, which had been successfully bound.

AKS_Demo_16

Switching back to the Azure Portal, following the application deployment, there were now three additional resources in the AKS Resource Group, a new Azure Load Balancer and two new Public IP Addresses. The Load Balancer is used to balance the Client and Gateway Services, which both have public IP addresses.

AKS_Demo_21b

To confirm the Gateway, Voter, and Candidate Services were reachable, using the public IP address of the Gateway Service, I browsed to HAProxy’s Statistics web page. Note the two backends, candidate and voter. The green color means HAProxy was able to successfully connect to both of these Services.

AKS_Demo_12

Accessing the Application

The Voter application’s AngularJS UI frontend can be accessed using the Client Service’s public IP address. However, this would not be very user-friendly. Even if I brought up the UI, using the public IP, the UI would be unable to connect to the HAProxy API Gateway, and subsequently, the Voter or Candidate Services. Based on the Client’s current configuration, the Client is expecting to find the Gateway at api.voter-demo.com:8080.

To make accessing the Client more user-friendly, and to ensure the Client has access to the Gateway, I provisioned an Azure DNS Zone resource for my domain, voter-demo.com. I assigned the DNS Zone to the AKS Resource Group.

AKS_Demo_20b

Within the new DNS Zone, I created three DNS records. The first, an address (A) record, associated voter-demo.com with the public IP address of the Client Service. I added a second A record for the www subdomain, also associating it with the public IP address of the Client Service. The third A record associated the api subdomain with the public IP address of the Gateway Service.

AKS_Demo_18b

At a high level, the Voter application’s routing architecture looks as follows. The client’s requests to the primary domain or to the api subdomain are resolved to one of the two public IP addresses configured in the load balancer’s frontend. The requests are passed to the load balancer’s backend pool, containing the single Azure VM, which is the single Kubernetes node, and on to the client or gateway Kubernetes Service. From there, requests are routed to one of the appropriate Kubernetes Pods, containing the containerized application components, client or gateway.

kub-aks

Browsing to either http://voter-demo.com or http://www.voter-demo.com should bring up the Voter app UI (oh look, Hillary won this time…).

Mobile_App_View

Using Chrome’s Developer Tools, observe that when a new vote is placed, an HTTP POST is made to the gateway, on the /voter/votes endpoint, http://api.voter-demo.com:8080/voter/votes. The Gateway then proxies this request to the Voter Service, at http://voter:8080/voter/votes. Since the Gateway and Voter Services both run within the same Cluster, the Gateway is able to address the Voter Service by name, using Kubernetes kube-dns.
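
For example, placing a vote directly against the Gateway might look like the following curl command; the JSON payload is purely illustrative, as the actual Vote model is not shown in this post.

# hypothetical example of placing a vote through the haproxy api gateway
curl -X POST http://api.voter-demo.com:8080/voter/votes \
  -H "Content-Type: application/json" \
  -d '{"candidate": "Hillary Clinton"}'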

AKS_Demo_19b

Conclusion

In the past, I have developed, deployed, and managed containerized applications, using Rancher, AWS, AWS ECS, native Kubernetes, RedHat OpenShift, Docker Enterprise Edition, and Docker Community Edition. Based on my experience, and given my limited testing with Azure’s public preview of AKS, I am very impressed. Creating the Kubernetes Cluster could not have been easier. Scaling the Cluster, adding Persistent Volumes, and upgrading Kubernetes, is equally as easy. Conveniently, AKS integrates with other Kubernetes tools, like kubectl, kompose, and Helm.

I did run into some minor issues with the AKS Preview, such as being unable to connect to an earlier Cluster after upgrading from the default Kubernetes version to 1.8.2. I also experienced frequent disconnects when proxying to the Kubernetes Dashboard. I am sure the AKS Preview bugs will be worked out by the time AKS is officially released.

In addition to the many advantages of Kubernetes as a CaaS, a huge advantage of using AKS is the ability to easily integrate Azure’s many enterprise-grade compute, networking, database, caching, storage, and messaging resources. It would require minimal effort to swap out the Voter application’s single containerized version of MongoDB with a highly performant and available instance of Azure Cosmos DB. Similarly, it would be relatively easy to swap out the single containerized version of HAProxy with a fully-featured and secure instance of Azure API Management. The current version of the Voter application relies on RabbitMQ for service-to-service IPC, versus this earlier application version’s reliance on HTTP-based IPC. It would be fairly simple to swap RabbitMQ for Azure Service Bus.

Lastly, AKS easily integrates with leading Development and DevOps tooling and processes. Building, managing, and deploying applications to AKS, is possible with Visual Studio, VSTS, Jenkins, Terraform, and Chef, according to Microsoft.

References

A few good references to get started with AKS:

All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.
