Archive for category DevOps

Managing Applications Across Multiple Kubernetes Environments with Istio: Part 2

In this two-part post, we are exploring the creation of a GKE cluster, replete with the latest version of Istio, often referred to as IoK (Istio on Kubernetes). We will then deploy, perform integration testing, and promote an application across multiple environments within the cluster.

Part Two

In part one of this post, we created a Kubernetes cluster on the Google Cloud Platform, installed Istio, provisioned a PostgreSQL database, and configured DNS for routing. Under the assumption that v1 of the election microservice had already been released to Production, we deployed v1 to each of the three namespaces.

In part two of this post, we will learn how to utilize the advanced API testing capabilities of Postman and Newman to ensure v2 is ready for UAT and release to Production. We will deploy and perform integration testing of a new v2 of the election microservice, locally on Kubernetes Minikube. Once confident v2 is functioning as intended, we will promote and test v2 across the dev, test, and uat namespaces.

Testing Locally with Minikube

Deploying to GKE, no matter how automated, takes time and resources, whether those resources are team members or just compute and system resources. Before deploying v2 of the election service to the non-prod GKE cluster, we should ensure that it has been thoroughly tested locally. Local testing should include the following criteria:

  1. Source code builds successfully;
  2. All unit-tests pass;
  3. A new Docker Image can be created from the build artifact;
  4. The Service can be deployed to Kubernetes (Minikube);
  5. The deployed instance can connect to the database and execute the Liquibase change sets;
  6. The deployed instance passes a minimal set of integration tests;

Minikube gives us the ability to quickly iterate and test an application, as well as the Kubernetes and Istio resources required for its operation, before promoting to GKE. These resources include the Kubernetes Namespace, Secret, Deployment, Service, Route Rule, and Istio Ingress. Since Minikube is just that, a miniature version of our GKE cluster, we should be able to have a nearly one-to-one parity between the Kubernetes resources we apply locally and those applied to GKE. This post assumes you have the latest version of Minikube installed, and are familiar with its operation.

This project includes a minikube sub-directory, containing all the Kubernetes resource files and scripts necessary to recreate the Minikube deployment example shown in this post. The three included scripts are designed to be easily adapted to a CI/CD DevOps workflow. You may need to modify the scripts to match your environment’s configuration. Note this Minikube-deployed version of the election service relies on the external Amazon RDS database instance.

Local Database Version

To eliminate AWS costs, I have included a second version of the Minikube Kubernetes resource files, minikube_db_local. This version deploys an ephemeral PostgreSQL database instance to Minikube, as opposed to relying on the external Amazon RDS instance. Be aware, the database does not have persistent storage or an Istio sidecar proxy. The alternate version is not covered in this post.

istio_100.png

Minikube Cluster

If you do not have a running Minikube cluster, create one with the minikube start command.

istio_081

Minikube allows you to use normal kubectl CLI commands to interact with the Minikube cluster. Using the kubectl get nodes command, we should see a single Minikube node running the latest Kubernetes v1.10.0.

istio_082

Istio on Minikube

Next, install Istio following Istio’s online installation instructions. A basic Istio installation on Minikube, without the additional add-ons, should only require a single Istio install script.

istio_083

If successful, you should observe a new istio-system namespace, containing the four main Istio components: istio-ca, istio-ingress, istio-mixer, and istio-pilot.

istio_084

Deploy v2 to Minikube

Next, create a Minikube Development environment, consisting of a dev Namespace, Istio Ingress, and Secret, using the part1-create-environment.sh script. Then, deploy v2 of the election service to the dev Namespace, along with an associated Route Rule, using the part2-deploy-v2.sh script. One v2 instance should be sufficient to satisfy the testing requirements.

istio_085

Access to v2 of the election service on Minikube is a bit different than with GKE. When routing external HTTP requests, there is no load balancer, no external public IP address, and no public DNS or subdomains. To access the single instance of v2 running on Minikube, we use the local IP address of the Minikube cluster, obtained with the minikube ip command. The access port required is the Node Port (nodePort) of the istio-ingress Service. The command is shown below (gist) and included in the part3-smoke-test.sh script.
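A minimal sketch of that smoke-test command follows, assuming the istio-ingress Service lives in the istio-system namespace and exposes its first port as the NodePort; adjust names and paths to your environment:

# Look up the Minikube cluster IP and the istio-ingress Service's NodePort
MINIKUBE_IP=$(minikube ip)
INGRESS_PORT=$(kubectl get service istio-ingress -n istio-system \
  -o jsonpath='{.spec.ports[0].nodePort}')

# Smoke-test v2 of the election service; add a Host header if your Ingress matches on host
curl -s "http://${MINIKUBE_IP}:${INGRESS_PORT}/v2/actuator/health"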

The second part of our HTTP request routing is the same as with GKE, relying on an Istio Route Rule. The /v2/ sub-collection resource in the HTTP request URL is rewritten and routed to the v2 election Pod by the Route Rule. To confirm v2 of the election service is running and addressable, curl the /v2/actuator/health endpoint. Spring Actuator’s /health endpoint is frequently used at the end of a CI/CD server’s deployment pipeline to confirm success. The Spring Boot application can take a few minutes to fully start up and be responsive to requests, depending on the speed of your local machine.

istio_093.png

Using the Kubernetes Dashboard, we should see our deployment of the single election service Pod is running successfully in Minikube’s dev namespace.

istio_087

Once deployed, we run a battery of integration tests to confirm that the new v2 functionality is working as intended before deploying to GKE. In the next section of this post, we will explore the process of creating and managing Postman Collections and Postman Environments, and how to automate those Collections of tests with Newman and Jenkins.

istio_088

Integration Testing

The typical reason an application is deployed to lower environments, prior to Production, is to perform application testing. Although definitions vary across organizations, testing commonly includes some or all of the following types: Integration Testing, Functional Testing, System Testing, Stress or Load Testing, Performance Testing, Security Testing, Usability Testing, Acceptance Testing, Regression Testing, Alpha and Beta Testing, and End-to-End Testing. Test teams may also refer to other testing forms, such as Whitebox (Glassbox), Blackbox Testing, Smoke, Validation, or Sanity Testing, and Happy Path Testing.

The site, softwaretestinghelp.com, defines integration testing as, ‘testing of all integrated modules to verify the combined functionality after integration is termed so. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.’

In this post, we are concerned that our integrated modules are functioning cohesively, primarily the election service, Amazon RDS database, DNS, Istio Ingress, Route Rules, and the Istio sidecar Proxy. Unlike Unit Testing and Static Code Analysis (SCA), which are done pre-deployment, integration testing requires an application to be deployed and running in an environment.

Postman

I have chosen Postman, along with Newman, to execute a Collection of integration tests before promoting to the next environment. The integration tests confirm the deployed application’s name and version. The integration tests then perform a series of HTTP GET, POST, PUT, PATCH, and DELETE actions against the service’s resources. The integration tests verify a successful HTTP response code is returned, based on the type of request made.

istio_055

Postman tests are written in JavaScript, similar to other popular, modern testing frameworks. Postman offers advanced features such as test-chaining. Tests can be chained together through the use of environment variables to store response values and pass them on to other tests. Values shared between tests are also stored in the Postman Environments. Below, we store the ID of the new candidate, the result of an HTTP POST to the /candidates endpoint. We then use the stored candidate ID in subsequent HTTP GET, PUT, and PATCH test requests to the same /candidates endpoint.

istio_056

Environment-specific variables, such as the resource host, port, and environment sub-collection resource, are abstracted and stored as key/value pairs within Postman Environments, and called through variables in the request URL and within the tests. Thus, the same Postman Collection of tests may be run against multiple environments using different Postman Environments.

istio_057

Postman Runner allows us to run multiple iterations of our Collection. We also have the option to build in delays between tests. Lastly, Postman Runner can load external JSON and CSV formatted test data, which is beyond the scope of this post.

istio_058

Postman contains a simple Run Summary UI for viewing test results.

istio_060

Test Automation

To support running tests from the command line, Postman provides Newman. According to Postman, Newman is a command-line collection runner for Postman. Newman offers the same functionality as Postman’s Collection Runner, all part of the newman CLI. Newman is a Node.js module, installed globally as an npm package, npm install newman --global.
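As a quick illustration, a Newman run of an exported Collection against an exported Environment might look like the following; the file names are hypothetical:

# Install Newman once, globally
npm install newman --global

# Execute the exported Postman Collection against the exported dev Environment
newman run election-service.postman_collection.json \
  --environment election-dev.postman_environment.json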

Typically, Development and Testing teams compose Postman Collections and define Postman Environments, locally. Teams run their tests locally in Postman, during their development cycle. Then, those same Postman Collections are executed from the command line, or more commonly as part of a CI/CD pipeline, such as with Jenkins.

Below, the same Collection of integration tests run in the Postman Runner UI is executed from the command line, using Newman.

istio_061

Jenkins

Below, the same Postman Collection of integration tests is executed as part of a CI/CD pipeline in Jenkins Blue Ocean.

istio_101B.png

In this pipeline, a set of smoke tests is run first to ensure the new deployment is running properly, and then the integration tests are executed.

istio_102

In this simple example, we have a three-stage CI/CD pipeline created from a Jenkinsfile (gist).

Test Results

Newman offers several options for displaying test results. For easy integration with Jenkins, Newman results can be delivered in a format that can be displayed as JUnit test reports. The JUnit test report format, XML, is a popular method of standardizing test results from different testing tools. Below is a truncated example of a test report file (gist).
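For example, Newman can emit a JUnit-format XML report directly, which Jenkins’ junit step can then archive; the file names below are illustrative:

# Run the Collection and write a JUnit XML report in addition to the normal CLI output
newman run election-service.postman_collection.json \
  --environment election-test.postman_environment.json \
  --reporters cli,junit \
  --reporter-junit-export newman/newman-results.xml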

Translating Newman test results to JUnit reports allows the percentage of successfully executed test cases, a universal testing metric, to be tracked over multiple deployments. Below we see the JUnit Test Reports Test Result Trend graph for a series of test runs.

istio_103

Deploying to Development

Development environments typically have a rapid turnover of application versions. Many teams use their Development environment as a Continuous Integration (CI) environment, where every commit that successfully builds and passes all unit tests is deployed. The purpose of the deployment is to ensure build artifacts will successfully deploy through the CI/CD pipeline, start properly, and pass a basic set of smoke tests.

Other teams use their Development environments as an extension of their local environment. The Development environment will possess some or all of the required external integration points, which the Developer’s local environment may not.

External integration points, such as payment gateways, customer relationship management (CRM) systems, content management systems (CMS), or data analytics engines, are often stubbed-out in lower environments. Generally, third-party providers only offer a limited number of parallel non-Production integration environments. While an application may pass through several non-prod environments, testing against all external integration points will only occur in one or two of those environments.

The goal of the Development environment is to help Developers ensure their application is functioning correctly and is ready for the Test teams to evaluate.

With v2 of the election service ready for testing on GKE, we deploy it to the GKE cluster’s dev namespace using the part4a-deploy-v2-dev.sh script. We will also delete the previous v1 version of the election service. Similar to the v1 deployment script, the v2 scripts perform a kube-inject command, which manually injects the Istio sidecar proxy alongside the election service into each election v2 Pod. The deployment script also deploys an alternate Istio Route Rule, which routes requests for the api.dev.voter-demo.com/v2/* resources to v2 of the election service.

istio_054.png

Once deployed, we run our Postman Collection of integration tests with Newman or as part of a CI/CD pipeline. In the Development environment, we may choose to run a limited set of tests for the sake of expediency, or because not all external integration points are accessible.

Promotion to Test

With local Minikube and Development environment testing complete, we promote and deploy v2 of the election service to the Test environment, using the part4b-deploy-v2-test.sh script. In Test, we will not delete v1 of the election service.

istio_062

Often, an organization will maintain a running copy of all versions of an application currently deployed to Production, in a lower environment. Let’s look at two scenarios where this is common. First, v1 of the election service has an issue in Production, which needs to be confirmed and may require a hot-fix by the Development team. Validation of the v1 Production bug is often done in a lower environment. The second scenario for having both versions running in an environment is when v1 and v2 both need to co-exist in Production. Organizations frequently support multiple API versions. Cutting over an entire API user-base to a new API version is often completed over a series of releases, and requires careful coordination with API consumers.

Testing All Versions

An essential role of integration testing should be to confirm that both versions of the election service are functioning correctly, while simultaneously running in the same namespace. For example, we want to verify traffic is routed correctly, based on the HTTP request URL, to the correct version. Another common test scenario is database schema changes. Suppose we make what we believe are backward-compatible database changes to v2 of the election service. We should be able to prove, through testing, that both the old and new versions function correctly against the latest version of the database schema.

There are a few Postman-Newman-Jenkins automation strategies that we could employ to test multiple versions of an application without creating separate Collections and Environments. A simple solution would be to programmatically change the Postman Environment’s version variable, injected from a pipeline parameter (abridged environment file shown below).

istio_095.png
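A minimal sketch of that strategy is shown below, assuming the exported Environment file stores an entry whose key is version; the variable name, file names, and pipeline parameter are illustrative:

# VERSION would typically arrive as a Jenkins pipeline parameter, e.g., v1 or v2
VERSION=${VERSION:-v2}

# Rewrite the 'version' variable in the exported Postman Environment using jq
jq --arg ver "${VERSION}" \
  '(.values[] | select(.key == "version") | .value) = $ver' \
  election-test.postman_environment.json > election-test.tmp.json

# Run the same Collection against the modified Environment
newman run election-service.postman_collection.json \
  --environment election-test.tmp.json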

Once initial automated integration testing is complete, Test teams will typically execute additional forms of application testing if necessary, before signing off for UAT and Performance Testing to begin.

User-Acceptance Testing

With testing in the Test environments completed, we continue onto UAT. The term UAT suggests that a set of actual end-users (API consumers) of the election service will perform their own testing. Frequently, UAT is only done for a short, fixed period of time, often with a specialized team of Testers. Issues experienced during UAT can be expensive and can impact the ability to release an application to Production on time if sign-off is delayed.

After deploying v2 of the election service to UAT, and before opening it up to the UAT team, we would naturally want to repeat the same integration testing process we conducted in the previous Test environment. We must ensure that v2 is functioning as expected before our end-users begin their testing. This is where leveraging a tool like Jenkins makes automated integration testing more manageable and repeatable. One strategy would be to duplicate our existing Development and Test pipelines, and re-target the new pipeline to call v2 of the election service in UAT.

istio_104.png

Again, in a JUnit report format, we can examine individual results through the Jenkins Console.

istio_105.png

We can also examine individual results from each test run using a specific build’s Console Output.

istio_106.png

Testing and Instrumentation

To fully evaluate the integration test results, you must look beyond just the percentage of test cases executed successfully. It makes little sense to release a new version of an application if it passes all functional tests, but significantly increases client response times, unnecessarily increases memory consumption, wastes other compute resources, or is grossly inefficient in the number of calls it makes to the database or third-party dependencies. Often, integration testing uncovers potential performance bottlenecks that are incorporated into performance test plans.

Critical intelligence about the performance of the application can only be obtained through the use of logging and metrics collection and instrumentation. Istio provides this telemetry out-of-the-box with Zipkin, Jaeger, Service Graph, Fluentd, Prometheus, and Grafana. In the included Grafana Istio Dashboard below, we see the performance of v1 of the election service, under test, in the Test environment. We can compare request and response payload size and timing, as well as request and response times to external integration points, such as our Amazon RDS database. We are able to observe the impact of individual test requests on the application and all its integration points.

istio_067

As part of integration testing, we should monitor the Amazon RDS CloudWatch metrics. CloudWatch allows us to evaluate critical database performance metrics, such as the number of concurrent database connections, CPU utilization, read and write IOPS, Memory consumption, and disk storage requirements.

istio_043

A discussion of metrics starts moving us toward load and performance testing against Production service-level agreements (SLAs). Using a similar approach to integration testing, with load and performance testing, we should be able to accurately estimate the sizing requirements of our new application for Production. Load and Performance Testing help answer questions such as what type and size of compute resources are required for our GKE Production cluster and Amazon RDS database, and how many compute nodes and instances (Pods) are necessary to support the expected user load.

All opinions expressed in this post are my own, and not necessarily the views of my current or past employers, or their clients.



Managing Applications Across Multiple Kubernetes Environments with Istio: Part 1

In the following two-part post, we will explore the creation of a GKE cluster, replete with the latest version of Istio, often referred to as IoK (Istio on Kubernetes). We will then deploy, perform integration testing, and promote an application across multiple environments within the cluster.

Application Environment Management

Container orchestration engines, such as Kubernetes, have revolutionized the deployment and management of microservice-based architectures. Combined with a Service Mesh, such as Istio, Kubernetes provides a secure, instrumented, enterprise-grade platform for modern, distributed applications.

One of many challenges with any platform, even one built on Kubernetes, is managing multiple application environments. Whether applications run on bare-metal, virtual machines, or within containers, deploying to and managing multiple application environments increases operational complexity.

As Agile software development practices continue to increase within organizations, the need for multiple, ephemeral, on-demand environments also grows. Traditional environments that were once only composed of Development, Test, and Production, have expanded in enterprises to include a dozen or more environments, to support the many stages of the modern software development lifecycle. Current application environments often include Continuous Integration and Delivery (CI), Sandbox, Development, Integration Testing (QA), User Acceptance Testing (UAT), Staging, Performance, Production, Disaster Recovery (DR), and Hotfix. Each environment requires its own compute, security, networking, configuration, and corresponding dependencies, such as databases and message queues.

Environments and Kubernetes

There are various infrastructure architectural patterns employed by Operations and DevOps teams to provide Kubernetes-based application environments to Development teams. One pattern consists of separate physical Kubernetes clusters. Separate clusters provide a high level of isolation. Isolation offers many advantages, including increased performance and security, the ability to tune each cluster’s compute resources to meet differing SLAs, and ensuring a reduced blast radius when things go terribly wrong. Conversely, separate clusters often result in increased infrastructure costs and operational overhead, and complex deployment strategies. This pattern is often seen in heavily regulated, compliance-driven organizations, where security, auditability, and separation of duties are paramount.

Kube Clusters Diagram F15

Namespaces

An alternative to separate physical Kubernetes clusters is virtual clusters. Virtual clusters are created using Kubernetes Namespaces. According to Kubernetes documentation, ‘Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces’.

In most enterprises, Operations and DevOps teams deliver a combination of both virtual and physical Kubernetes clusters. For example, lower environments, such as those used for Development, Test, and UAT, often reside on the same physical cluster, each in a separate virtual cluster (namespace). At the same time, environments such as Performance, Staging, Production, and DR, often require the level of isolation only achievable with physical Kubernetes clusters.

In the Cloud, physical clusters may be further isolated and secured using separate cloud accounts. For example, with AWS you might have a Non-Production AWS account and a Production AWS account, both managed by an AWS Organization.

Kube Clusters Diagram v2 F3

In a multi-environment scenario, a single physical cluster would contain multiple namespaces, into which separate versions of an application or applications are independently deployed, accessed, and tested. Below we see a simple example of a single Kubernetes non-prod cluster on the left, containing multiple versions of different microservices, deployed across three namespaces. You would likely see this type of deployment pattern as applications are deployed, tested, and promoted across lower environments, before being released to Production.

Kube Clusters Diagram v2 F5.png

Example Application

To demonstrate the promotion and testing of an application across multiple environments, we will use a simple election-themed microservice, developed for a previous post, Developing Cloud-Native Data-Centric Spring Boot Applications for Pivotal Cloud Foundry. The Spring Boot-based application allows API consumers to create, read, update, and delete candidates, elections, and votes, through an exposed set of resources, accessed via RESTful endpoints.

Source Code

All source code for this post can be found on GitHub. The project’s README file contains a list of the election microservice’s endpoints. To get started quickly, use one of the two following options (gist).

Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.

This project includes a kubernetes sub-directory, containing all the Kubernetes resource files and scripts necessary to recreate the example shown in the post. The scripts are designed to be easily adapted to a CI/CD DevOps workflow. You will need to modify the script’s variables to match your own environment’s configuration.

istio_107small

Database

The post’s Spring Boot application relies on a PostgreSQL database. In the previous post, ElephantSQL was used to host the PostgreSQL instance. This time, I have used Amazon RDS for PostgreSQL. Amazon RDS for PostgreSQL and ElephantSQL are equivalent choices. For simplicity, you might also consider a containerized version of PostgreSQL, managed as part of your Kubernetes environment.

Ideally, each environment should have a separate database instance. Separate database instances provide better isolation, fine-grained RBAC, easier test data lifecycle management, and improved performance. However, for this post, a single, shared, minimally-sized RDS instance will suffice.

The PostgreSQL database’s sensitive connection information, including database URL, username, and password, are stored as Kubernetes Secrets, one secret for each namespace, and accessed by the Kubernetes Deployment controllers.

istio_043.png

Istio

Although not required, Istio makes the task of managing multiple virtual and physical clusters significantly easier. Following Istio’s online installation instructions, download and install Istio 0.7.1.

To create a Google Kubernetes Engine (GKE) cluster with Istio, you could use gcloud CLI’s container clusters create command, followed by installing Istio manually using Istio’s supplied Kubernetes resource files. This was the method used in the previous post, Deploying and Configuring Istio on Google Kubernetes Engine (GKE).

Alternatively, you could use Istio’s Google Cloud Platform (GCP) Deployment Manager files, along with the gcloud CLI’s deployment-manager deployments create command to create a Kubernetes cluster, replete with Istio, in a single step. Although arguably simpler, the deployment-manager method does not provide the same level of fine-grained control over cluster configuration as the container clusters create method. For this post, the deployment-manager method will suffice.

istio_001

The latest version of Google Kubernetes Engine, available at the time of this post, is 1.9.6-gke.0. However, installing this version of Kubernetes Engine using Istio’s supplied Deployment Manager Jinja template requires updating the hardcoded value in the istio-cluster.jinja file from 1.9.2-gke.1 to 1.9.6-gke.0. This value has been updated in the next release of Istio.

istio_002

Another change: the latest version of Istio offered as an option in the istio-cluster-jinja.schema file, specifically the installIstioRelease configuration variable, is only 0.6.0. The template does not include 0.7.1 as an option. Modify the istio-cluster-jinja.schema file to include 0.7.1 as a choice. Optionally, I also set 0.7.1 as the default. This change should also be included in the next version of Istio.

istio_075.png

There are a limited number of GKE and Istio configuration defaults defined in the istio-cluster.yaml file, all of which can be overridden from the command line.

istio_002B.png

To optimize the cluster, and keep compute costs to a minimum, I have overridden several of the default configuration values using the properties flag with the gcloud CLI’s deployment-manager deployments create command. The README file provided by Istio explains how to use this feature. Configuration changes include the name of the cluster, the version of Istio (0.7.1), the number of nodes (2), the GCP zone (us-east1-b), and the node instance type (n1-standard-1). I also disabled automatic sidecar injection and chose not to install the Istio sample book application onto the cluster (gist).
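The resulting command resembles the sketch below; the deployment name and property names are illustrative and should be checked against the istio-cluster-jinja.schema file in your copy of the template:

# Create the GKE cluster and install Istio in a single step, overriding template defaults
gcloud deployment-manager deployments create istio-nonprod \
  --template=istio-cluster.jinja \
  --properties="zone:us-east1-b,initialNodeCount:2,instanceType:n1-standard-1,installIstioRelease:0.7.1"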

Cluster Provisioning

To provision the GKE cluster and deploy Istio, first modify the variables in the part1-create-gke-cluster.sh file (shown above), then execute the script. The script also retrieves your cluster’s credentials, to enable command line interaction with the cluster using the kubectl CLI.

istio_002C.png

Once complete, validate the version of Istio by examining Istio’s Docker image versions, using the following command (gist).
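One way to do this is to list the unique container images running in the istio-system namespace; a sketch of such a command is shown below:

# Print the unique container image tags deployed to the istio-system namespace
kubectl get pods --namespace istio-system \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr ' ' '\n' | sort --unique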

The result should be a list of Istio 0.7.1 Docker images.

istio_076.png

The new cluster should be running GKE version 1.9.6-gke.0. This can be confirmed using the following command (gist).

Or, from the GCP Cloud Console.

istio_037

The new GKE cluster should be composed of (2) n1-standard-1 nodes, running in the us-east1-b zone.

istio_038

As part of the deployment, all of the separate Istio components should be running within the istio-system namespace.

istio_040

As part of the deployment, an external IP address and a load balancer were provisioned by GCP and associated with the Istio Ingress. GCP’s Deployment Manager should have also created the necessary firewall rules for cluster ingress and egress.

istio_010.png

Building the Environments

Next, we will create three namespaces, dev, test, and uat, which represent three non-production environments. Each environment consists of a Kubernetes Namespace, Istio Ingress, and Secret. The three environments are deployed using the part2-create-environments.sh script.

istio_048.png

Deploying v1

For this demonstration, we will assume v1 of the election service has been previously promoted, tested, and released to Production. Hence, we would expect v1 to be deployed to each of the lower environments. Additionally, a new v2 of the election service has been developed and tested locally using Minikube. It is ready for deployment to the three environments and will undergo integration testing (detailed in part 2 of the post).

If you recall from our GKE/Istio configuration, we chose manual sidecar injection of the Istio proxy. Therefore, all election deployment scripts perform a kube-inject command. To connect to our external Amazon RDS database, this kube-inject command requires the includeIPRanges flag, which contains two cluster configuration values, the cluster’s IPv4 CIDR (clusterIpv4Cidr) and the service’s IPv4 CIDR (servicesIpv4Cidr).

Before deployment, we export the includeIPRanges value as an environment variable, which will be used by the deployment scripts (gist).
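A sketch of how those two CIDR values might be retrieved and exported follows, assuming a cluster named election-nonprod-cluster in the us-east1-b zone; the cluster name and environment variable are illustrative:

# Retrieve the cluster and services CIDR blocks from GKE
CLUSTER_CIDR=$(gcloud container clusters describe election-nonprod-cluster \
  --zone us-east1-b --format='value(clusterIpv4Cidr)')
SERVICES_CIDR=$(gcloud container clusters describe election-nonprod-cluster \
  --zone us-east1-b --format='value(servicesIpv4Cidr)')

# Export the combined value for use by the kube-inject deployment scripts
export INCLUDE_IP_RANGES="${CLUSTER_CIDR},${SERVICES_CIDR}"
echo "${INCLUDE_IP_RANGES}"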

Using this method with manual sidecar injection is discussed in the previous post, Deploying and Configuring Istio on Google Kubernetes Engine (GKE).

To deploy v1 of the election service to all three namespaces, execute the part3-deploy-v1-all-envs.sh script.

istio_051.png

We should now have two instances of v1 of the election service running in each of the dev, test, and uat namespaces, for a total of six election-v1 Kubernetes Pods.

istio_052

HTTP Request Routing

Before deploying additional versions of the election service in Part 2 of this post, we should understand how external HTTP requests will be routed to different versions of the election service, in multiple namespaces. In the post’s simple example, we have a matrix of three namespaces and two versions of the election service. That means we need a method to route external traffic to up to six different election service deployments. There are multiple ways to solve this problem, each with its own pros and cons. For this post, I found a combination of DNS and HTTP request rewriting to be most effective.

DNS

First, to route external HTTP requests to the correct namespace, we will use subdomains. Using my current DNS management solution, Azure DNS, I create three new A records for my registered domain, voter-demo.com. There is one A record for each namespace, including api.dev, api.test, and api.uat.

istio_077.png

All three subdomains should resolve to the single external IP address assigned to the cluster’s load balancer.

istio_010.png

As part of the environment creation, the script deployed an Istio Ingress to each environment. The Ingress accepts traffic based on a match to the Request URL (gist).
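A minimal sketch of one such Ingress, for the dev namespace, is shown below; the Service name and port are assumptions, and the project’s kubernetes sub-directory contains the actual resource files:

# Istio only honors the kubernetes.io/ingress.class: istio annotation
kubectl apply --namespace dev -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: election-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: api.dev.voter-demo.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: election
          servicePort: 8080
EOF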

The istio-ingress service load balancer, running in the istio-system namespace, routes inbound external traffic, based on the Request URL, to the Istio Ingress in the appropriate namespace.

istio_053.png

The Istio Ingress in the namespace then directs the traffic to one of the Kubernetes Pods, containing the election service and the Istio sidecar proxy.

istio_068.png

HTTP Rewrite

To direct the HTTP request to v1 or v2 of the election service, an Istio Route Rule is used. As part of the environment creation, along with the Namespace and Ingress resources, we also deployed an Istio Route Rule to each environment. This particular route rule examines the HTTP request URL for a /v1/ or /v2/ sub-collection resource. If it finds the sub-collection resource, it performs an HTTPRewrite, removing the sub-collection resource from the HTTP request. The Route Rule then directs the HTTP request to the appropriate version of the election service, v1 or v2 (gist).
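A minimal sketch of such a Route Rule, for v2 in the dev namespace, using Istio 0.7’s v1alpha2 routing API; the resource names and labels are assumptions, and the project’s resource files contain the actual rules:

# Requests whose URI starts with /v2/ have the prefix stripped and are routed to version: v2 Pods
kubectl apply --namespace dev -f - <<EOF
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: election-v2
spec:
  destination:
    name: election
  match:
    request:
      headers:
        uri:
          prefix: /v2/
  rewrite:
    uri: /
  route:
  - labels:
      version: v2
EOF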

According to Istio, ‘if there are multiple registered instances with the specified tag(s), they will be routed to based on the load balancing policy (algorithm) configured for the service (round-robin by default).’ We are using the default load balancing algorithm to distribute requests across multiple copies of each election service.

The final external HTTP request routing for the election service in the Non-Production GKE cluster is shown on the left, in the diagram, below.

Kube Clusters Diagram F14

Below are some examples of HTTP GET requests that would be successfully routed to our election service, using the above-described routing strategy (gist).
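For instance, requests of the following form should route correctly, with the subdomain selecting the namespace and the sub-collection resource selecting the version; the endpoints shown are illustrative, and /v2/ requests will route the same way once v2 is deployed in Part 2:

# Namespace is chosen by subdomain; version by the /v1/ or /v2/ prefix
curl http://api.dev.voter-demo.com/v1/actuator/health
curl http://api.test.voter-demo.com/v1/candidates
curl http://api.uat.voter-demo.com/v1/elections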

Part Two

In part one of this post, we created the Kubernetes cluster on the Google Cloud Platform, installed Istio, provisioned a PostgreSQL database, and configured DNS for routing. Under the assumption that v1 of the election microservice had already been released to Production, we deployed v1 to each of the three namespaces.

In part two of this post, we will learn how to utilize the sophisticated API testing capabilities of Postman and Newman to ensure v2 is ready for UAT and release to Production. We will deploy and perform integration testing of a new v2 of the election microservice, locally on Kubernetes Minikube. Once we are confident v2 is functioning as intended, we will promote and test v2 across the dev, test, and uat namespaces.

All opinions expressed in this post are my own, and not necessarily the views of my current or past employers, or their clients.



Developing Cloud-Native Data-Centric Spring Boot Applications for Pivotal Cloud Foundry

In this post, we will explore the development of a cloud-native, data-centric Spring Boot 2.0 application, and its deployment to Pivotal Software’s hosted Pivotal Cloud Foundry service, Pivotal Web Services. We will add a few additional features, such as Spring Data, Lombok, and Swagger, to enhance our application.

According to Pivotal, Spring Boot makes it easy to create stand-alone, production-grade Spring-based Applications. Spring Boot takes an opinionated view of the Spring platform and third-party libraries. Spring Boot 2.0 just went GA on March 1, 2018. This is the first major revision of Spring Boot since 1.0 was released almost 4 years ago. It is also the first GA version of Spring Boot that provides support for Spring Framework 5.0.

Pivotal Web Services’ tagline is ‘The Agile Platform for the Agile Team Powered by Cloud Foundry’. According to Pivotal,  Pivotal Web Services (PWS) is a hosted environment of Pivotal Cloud Foundry (PCF). PWS is hosted on AWS in the US-East region. PWS utilizes two availability zones for redundancy. PWS provides developers a Spring-centric PaaS alternative to AWS Elastic Beanstalk, Elastic Container Service (Amazon ECS), and OpsWorks. With PWS, you get the reliability and security of AWS, combined with the rich-functionality and ease-of-use of PCF.

To demonstrate the feature-rich capabilities of the Spring ecosystem, the Spring Boot application shown in this post incorporates the following complementary technologies:

  • Spring Boot Actuator: Sub-project of Spring Boot, adds several production grade services to Spring Boot applications with little developer effort
  • Spring Data JPA: Sub-project of Spring Data, easily implement JPA based repositories and data access layers
  • Spring Data REST: Sub-project of Spring Data, easily build hypermedia-driven REST web services on top of Spring Data repositories
  • Spring HATEOAS: Create REST representations that follow the HATEOAS principle from Spring-based applications
  • Springfox Swagger 2: We are using the Springfox implementation of the Swagger 2 specification, which provides automated JSON API documentation for APIs built with Spring
  • Lombok: The @Data annotation generates boilerplate code that is typically associated with simple POJOs (Plain Old Java Objects) and beans: @ToString, @EqualsAndHashCode, @Getter, @Setter, and @RequiredArgsConstructor

Source Code

All source code for this post can be found on GitHub. To get started quickly, use one of the two following commands (gist).

For this post, I have used JetBrains IntelliJ IDEA and Git Bash on Windows for development. However, all code should be compatible with most popular IDEs and development platforms. The project assumes you have Docker and the Cloud Foundry Command Line Interface (cf CLI) installed locally.

Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.

Demo Application

The Spring Boot application demonstrated in this post is a simple election-themed RESTful API. The app allows API consumers to create, read, update, and delete candidates, elections, and votes, via its exposed RESTful HTTP-based resources.

The Spring Boot application consists of (7) JPA Entities that mirror the tables and views in the database, (7) corresponding Spring Data Repositories, (2) Spring REST Controllers, (4) Liquibase change sets, and associated Spring, Liquibase, Swagger, and PCF configuration files. I have intentionally chosen to avoid the complexities of using Data Transfer Objects (DTOs) for brevity, albeit a security concern, and directly expose the entities as resources.

img022_Final_Project

Controller Resources

This application is a simple CRUD application. The application contains a few simple HTTP GET resources in each of the two controller classes, as an introduction to Spring REST controllers. For example, the CandidateController contains the /candidates/summary and /candidates/summary/{election} resources (shown below in Postman). Typically, you would expose your data to the end-user as controller resources, as opposed to exposing the entities directly. The ease of defining controller resources is one of the many powers of Spring Boot.

img025_CustomResource.PNG

Paging and Sorting

As an introduction to Spring Data’s paging and sorting features, both the VoteRepository and VotesElectionsViewRepository Repository Interfaces extend Spring Data’s PagingAndSortingRepository<T,ID> interface, instead of the default CrudRepository<T,ID> interface. With paging and sorting enabled, you may both sort and limit the amount of data returned in the response payload. For example, to reduce the size of your response payload, you might choose to page through the votes in blocks of 25 votes at a time. In that case, as shown below in Postman, if you needed to return just votes 26-50, you would append the /votes resource with ?page=1&size=25. Since paging starts at page 0 (zero), votes 26-50 will be on page 1.

img024_Paging
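For example, assuming the service is running locally on port 8080, the following requests illustrate the paging and sorting parameters; the sort property name is illustrative:

# Page numbering is zero-based, so page=1 with size=25 returns votes 26-50
curl "http://localhost:8080/votes?page=1&size=25"

# Sorting is requested with the sort parameter: sort=<property>,<asc|desc>
curl "http://localhost:8080/votes?page=0&size=25&sort=candidate,asc"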

Swagger

This project also includes the Springfox implementation of the Swagger 2 specification. Along with the Swagger 2 dependency, the project takes a dependency on io.springfox:springfox-swagger-ui. The Springfox Swagger UI dependency allows us to view and interactively test our controller resources through Swagger’s browser-based UI, as shown below.

img027B_Swagger

All Swagger configuration can be found in the project’s SwaggerConfig Spring Configuration class.

Gradle

This post’s Spring Boot application is built with Gradle, although it could easily be converted to Maven if desired. According to Gradle, Gradle is the modern tool used to build, automate and deliver everything from mobile apps to microservices.

Data

In real-life, most applications interact with one or more data sources. The Spring Boot application demonstrated in this post interacts with data from a PostgreSQL database. PostgreSQL, or simply Postgres, is a powerful, open-source object-relational database system, which has supported production-grade applications for 15 years. The application’s single elections database consists of (6) tables, (3) views, and (2) functions, both of which are used to generate random votes for this demonstration.

img020_Database_Diagram

Spring Data makes interacting with PostgreSQL easy. In addition to the features of Spring Data, we will use Liquibase. Liquibase is known as the source control for your database. With Liquibase, your database development lifecycle can mirror your Spring development lifecycle. Both DDL (Data Definition Language) and DML (Data Manipulation Language) changes are version controlled, alongside the Spring Boot application source code.

Locally, we will take advantage of Docker to host our development PostgreSQL database, using the official PostgreSQL Docker image. With Docker, there is no messy database installation and configuration of your local development environment. Recreating and deleting your PostgreSQL database is simple.

To support the data-tier in our hosted PWS environment, we will implement ElephantSQL, an offering from the Pivotal Services Marketplace. ElephantSQL is a hosted version of PostgreSQL, running on AWS. ElephantSQL is referred to as PostgreSQL as a Service, or more generally, a Database as a Service or DBaaS. As a Pivotal Marketplace service, we will get easy integration with our PWS-hosted Spring Boot application, with near-zero configuration.

Docker

First, set up your local PostgreSQL database using Docker and the official PostgreSQL Docker image. Since this is only a local database instance, we will not worry about securing our database credentials (gist).
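A sketch of that Docker command follows; the credentials, database name, and image tag are illustrative:

# Run a disposable PostgreSQL instance for local development
docker run --detach --name postgres \
  --publish 5432:5432 \
  --env POSTGRES_USER=postgres \
  --env POSTGRES_PASSWORD=postgres1234 \
  --env POSTGRES_DB=elections \
  postgres:10.2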

Your running PostgreSQL container should resemble the output shown below.

img001_docker

Data Source

Most IDEs allow you to create and save data sources. Although this is not a requirement, it makes it easier to view the database’s resources and table data. Below, I have created a data source in IntelliJ from the running PostgreSQL container instance. The port, username, password, and database name were all taken from the above Docker command.

img002_IntelliJ_Data_Source

Liquibase

There are multiple strategies when it comes to managing changes to your database. With Liquibase, each set of changes is handled as a change set. Liquibase describes a change set as an atomic change that you want to apply to your database. Liquibase offers multiple formats for change set files, including XML, JSON, YAML, and SQL. For this post, I have chosen SQL, specifically the PostgreSQL SQL dialect, which can be designated in the IntelliJ IDE. Below is an example of the first change set, which creates four tables and two indexes.

img023_Change_Set

As shown below, change sets are stored in the db/changelog/changes sub-directory, as configured in the master change log file (db.changelog-master.yaml). Change set files follow an incremental naming convention.

img003C_IntelliJ_Liquibase_Changesets

The empty PostgreSQL database, before any Liquibase changes, should resemble the screengrab shown below.

img003_IntelliJ_Blank_Database_cropped

To automatically run Liquibase database migrations on startup, the org.liquibase:liquibase-core dependency must be added to the project’s build.gradle file. To apply the change sets to your local, empty PostgreSQL database, simply start the service locally with the gradle bootRun command. As the app starts after being compiled, any new Liquibase change sets will be applied.

img004_Gradle_bootRun

You might ask how Liquibase knows which change sets are new. During the initial startup of the Spring Boot application, in addition to any initial change sets, Liquibase creates two database tables to track changes, the databasechangelog and databasechangeloglock tables. Shown below are the two tables, along with the results of the four change sets included in the project, and applied by Liquibase to the local PostgreSQL elections database.

img005_IntelliJ_Initial_Database_cropped

Below we see the contents of the databasechangelog table, indicating that all four change sets were successfully applied to the database. Liquibase checks this table before applying change sets.

img006B_IntelliJ_Database_Change_Log

ElephantSQL

Before we can deploy our Spring Boot application to PWS, we need an accessible PostgreSQL instance in the Cloud; I have chosen ElephantSQL. Available through the Pivotal Services Marketplace, ElephantSQL currently offers one free and three paid service plans for their PostgreSQL as a Service. I purchased the Panda service plan as opposed to the free Turtle service plan. I found the free service plan was too limited in the maximum number of database connections for multiple service instances.

Previewing and purchasing an ElephantSQL service plan from the Pivotal Services Marketplace, assuming you have an existing PWS account, literally takes a single command (gist).
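Assuming you are already logged into PWS and targeting your org and space, the commands resemble the following; the service instance name elections matches the cf service command used later:

# Preview the available ElephantSQL plans
cf marketplace -s elephantsql

# Create an ElephantSQL instance named 'elections' on the paid panda plan
cf create-service elephantsql panda elections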

The output of the command should resemble the screengrab below. Note the total concurrent connections and total storage for each plan.

img007_PCF_ElephantSQL_Service_Purchase

To get details about the running ElephantSQL service, use the cf service elections command.

img007_PCF_ElephantSQL_Service_Info

From the ElephantSQL Console, we can obtain the connection information required to access our PostgreSQL elections database. You will need the default database name, username, password, and URL.

img012_PWS_ElephantSQL_Details

Service Binding

Once you have created the PostgreSQL database service, you need to bind the database service to the application. We will bind our application and the database, using the PCF deployment manifest file (manifest.yml), found in the project’s root directory. Binding is done using the services section (shown below).

The key/value pairs in the env section of the deployment manifest will become environment variables, local to the deployed Spring Boot service. These key/value pairs in the manifest will also override any configuration set in Spring’s external application properties file (application.yml). This file is located in the resources sub-directory. Note the SPRING_PROFILES_ACTIVE: test environment variable in the manifest.yml file. This variable designates which Spring Profile will be active from the multiple profiles defined in the application.yml file.

img008B_PCF_Manifest
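A minimal sketch of such a manifest follows; the application name, memory, and artifact path are assumptions, while the elections service binding and SPRING_PROFILES_ACTIVE entry mirror the description above:

# manifest.yml (abridged sketch)
cat > manifest.yml <<EOF
applications:
- name: election-service
  memory: 1G
  instances: 1
  path: build/libs/election-service.jar
  services:
  - elections
  env:
    SPRING_PROFILES_ACTIVE: test
EOF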

Deployment to PWS

Next, we run gradle build followed by cf push to deploy one instance of the Spring Boot service to PWS and associate it with our ElephantSQL database instance. Below is the expected output from the cf push command.

img008_PCF_CF_Push

Note the route highlighted below. This is the URL where your Spring Boot service will be available.

img009_PCF_CF_Push2

To confirm your ElephantSQL database was populated by Liquibase when PWS started the deployed Spring application instance, we can check the ElephantSQL Console’s Stats tab. Note the database tables and rows in each table, signifying Liquibase ran successfully. Alternately, you could create another data source in your IDE, connected to ElephantSQL; this can be helpful for troubleshooting.

img013_Candidates

To access the running service and check that data is being returned, point your browser (or Postman) to the URL returned from the cf push command output (see second screengrab above) and hit the /candidates resource. Obviously, your URL, referred to as a route by PWS, will be different and unique. In the response payload, you should observe a JSON array of eight candidate objects. Each candidate was inserted into the Candidate table of the database, by Liquibase, when Liquibase executed the second of the four change sets on start-up.

img012_PWS_ElephantSQL

With Spring Boot Actuator and Spring Data REST, our simple Spring Boot application has dozens of resources exposed automatically, without extensive coding of resource controllers. Actuator exposes resources to help manage and troubleshoot the application, such as info, health, mappings (shown below), metrics, env, and configprops, among others. All Actuator resources are exposed explicitly, thus they can be disabled for Production deployments. With Spring Boot 2.0, all Actuator resources are now preceded with /actuator/ .

img029_Postman_Mappings

According to Pivotal, Spring Data REST builds on top of Spring Data repositories, analyzes an application’s domain model and exposes hypermedia-driven HTTP resources for aggregates contained in the model, such as our /candidates resource. A partial list of the application’s exposed resources are listed in the GitHub project’s README file.

In Spring’s approach to building RESTful web services, HTTP requests are handled by a controller. Spring Data REST automatically exposes CRUD resources for our entities. With Spring Data JPA, POJOs like our Candidate class are annotated with @Entity, indicating that it is a JPA entity. Lacking a @Table annotation, it is assumed that this entity will be mapped to a table named Candidate.

With Spring Data REST’s RESTful HTTP-based API, traditional database Create, Read, Update, and Delete commands for each PostgreSQL database table are automatically mapped to equivalent HTTP methods, including POST, GET, PUT, PATCH, and DELETE.

Below is an example, using Postman, to create a new Candidate using an HTTP POST method.

img029_Postman_Post

Below is an example, using Postman, to update the newly created Candidate using an HTTP PUT method.

img029_Postman_Put.PNG

With Spring Data REST, we can even retrieve data from read-only database Views, as shown below. This particular JSON response payload was returned from the candidates_by_elections database View, using the /election-candidates resource.

img028_Postman_View.PNG

Scaling Up

Once your application is deployed and you have tested its functionality, you can easily scale out or scale in the number of instances, memory, and disk, with the cf scale command (gist).
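For example, to run two instances, each with more memory and disk; the application name is illustrative:

# Scale to two instances, each with 1 GB of memory and 1 GB of disk
cf scale election-service -i 2 -m 1G -k 1G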

Below is sample output from scaling up the Spring Boot application to two instances.

img016_Scale_Up2

Optionally, you can activate auto-scaling, which will allow the application to scale based on load.

img016_Autoscaling.PNG

Following the PCF architectural model, auto-scaling is actually another service from the Pivotal Services Marketplace, PCF App Autoscaler, as seen below, running alongside our ElephantSQL service.

img016_Autoscaling2.PNG

With PCF App Autoscaler, you set auto-scaling minimum and maximum instance limits and define scaling rules. Below, I have configured auto-scaling to scale out the number of application instances when the average CPU Utilization of all instances hits 80%. Conversely, the application will scale in when the average CPU Utilization recedes below 40%. In addition to CPU Utilization, PCF App Autoscaler also allows you to set scaling rules based on HTTP Throughput, HTTP Latency, RabbitMQ Depth (queue depth), and Memory Utilization.

Furthermore, I set the auto-scaling minimum number of instances to two and the maximum number of instances to four. No matter how much load is placed on the application, PWS will not scale above four instances. Conversely, PWS will maintain a minimum of two running instances at all times.

img016_Autoscaling3

Conclusion

This brief post demonstrates both the power and simplicity of Spring Boot to quickly develop highly-functional, data-centric RESTful API applications with minimal coding. Further, when coupled with Pivotal Cloud Foundry, Spring developers have a highly scalable, resilient cloud-native application hosting platform.



Deploying and Configuring Istio on Google Kubernetes Engine (GKE)

GKE_021B

Introduction

Unquestionably, Kubernetes has quickly become the leading Container-as-a-Service (CaaS) platform. In late September 2017, Rancher Labs announced the release of Rancher 2.0, based on Kubernetes. In mid-October, at DockerCon Europe 2017, Docker announced they were integrating Kubernetes into the Docker platform. In late October, Microsoft released the public preview of Managed Kubernetes for Azure Container Service (AKS). In November, Google officially renamed its Google Container Engine to Google Kubernetes Engine. Most recently, at AWS re:Invent 2017, Amazon announced its own managed version of Kubernetes, Amazon Elastic Container Service for Kubernetes (Amazon EKS).

The recent abundance of Kubernetes-based CaaS offerings makes deploying, scaling, and managing modern distributed applications increasingly easier. However, as Craig McLuckie, CEO of Heptio, recently stated, “…it doesn’t matter who is delivering Kubernetes, what matters is how it runs.” Making Kubernetes run better is the goal of a new generation of tools, such as Istio, Envoy, Project Calico, Helm, and Ambassador.

What is Istio?

One of those new tools and the subject of this post is Istio. Released in Alpha by Google, IBM, and Lyft, in May 2017, Istio is an open platform to connect, manage, and secure microservices. Istio describes itself as, “…an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, configured and managed using Istio’s control plane functionality.”

Istio contains several components, split between the data plane and a control plane. The data plane includes the Istio Proxy (an extended version of the Envoy proxy). The control plane includes the Istio Mixer, Istio Pilot, and Istio-Auth. The Istio components work together to provide behavioral insights and operational control over a microservice-based service mesh. Istio describes a service mesh as a “transparently injected layer of infrastructure between a service and the network that gives operators the controls they need while freeing developers from having to bake solutions to distributed system problems into their code.”

In this post, we will deploy the latest version of Istio, v0.4.0, on Google Cloud Platform, using the latest version of Google Kubernetes Engine (GKE), 1.8.4-gke.1. Both versions were just released in mid-December, as this post is being written. Google, as you probably know, was the creator of Kubernetes, now an open-source CNCF project. Google was the first Cloud Service Provider (CSP) to offer managed Kubernetes in the Cloud, starting in 2014, with Google Container Engine (GKE), which used Kubernetes. This post will outline the installation of Istio on GKE, as well as the deployment of a sample application, integrated with Istio, to demonstrate Istio’s observability features.

Getting Started

All code from this post is available on GitHub. You will need to change some variables within the code, to meet your own project’s needs (gist).

The scripts used in this post are as follows, in order of execution (gist).

Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.

Creating GKE Cluster

First, we create the Google Kubernetes Engine (GKE) cluster. The GKE cluster creation is highly-configurable from either the GCP Cloud Console or from the command line, using the Google Cloud Platform gcloud CLI. The CLI will be used throughout the post. I have chosen to create a highly-available, 3-node cluster (1 node/zone) in GCP’s South Carolina us-east1 region (gist).
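A sketch of the cluster-creation command follows, assuming a hypothetical cluster name; one node in the primary zone plus two additional zones yields the three-node, three-zone layout described above:

# Create a 3-node GKE Alpha cluster, one node per zone, in the us-east1 region
gcloud container clusters create istio-demo-cluster \
  --cluster-version=1.8.4-gke.1 \
  --enable-kubernetes-alpha \
  --machine-type=n1-standard-1 \
  --num-nodes=1 \
  --zone=us-east1-b \
  --additional-zones=us-east1-c,us-east1-d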

Once built, we need to retrieve the cluster’s credentials.

Because we chose to use Kubernetes’ Alpha Clusters feature, the following warning is displayed, stating that the Alpha cluster will be deleted in 30 days (gist).

The resulting GKE cluster will have the following characteristics (gist).

Installing Istio

With the GKE cluster created, we can now deploy Istio. There are at least two options for deploying Istio on GCP. You may choose to manually install and configure Istio in a GKE cluster, as I will do in this post, following these instructions. Alternatively, you may choose to use the Istio GKE Deployment Manager. This all-in-one GCP service will create your GKE cluster, and install and configure Istio and the Istio add-ons, including their Bookinfo sample application.

G002_DeployCluster

There were a few reasons I chose not to use the Istio GKE Deployment Manager option. First, until very recently, you could not install the latest versions of Istio with this option (as of 12/21 you can now deploy v0.3.0 and v0.4.0). Secondly, currently, you only have the choice of GKE version 1.7.8-gke.0. I wanted to test the latest v1.8.4 release with a stable GA version of RBAC. Thirdly, at least three out of four of my initial attempts to use the Istio GKE Deployment Manager failed during provisioning for unknown reasons. Lastly, you will learn more about GKE, Kubernetes, and Istio by doing it yourself, at least the first time.

Istio Code Changes

Before installing Istio, I had to make several minor code changes to my existing Kubernetes resource files. The requirements are detailed in Istio’s Pod Spec Requirements. These changes are minor, but if missed, cause errors during deployment, which can be hard to identify and resolve.

First, you need to name your Service ports in your Service resource files. More specifically, you need to name your service ports, http, as shown in the Candidate microservice’s Service resource file, below (note line 10) (gist).

Second, an app label is required for Istio. I added an app label to each Deployment and Service resource file, as shown below in the Candidate microservice’s Deployment resource files (note lines 5 and 6) (gist).
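A minimal sketch of a Service and Deployment reflecting both of these changes is shown below; the names, port, labels, and image are illustrative, not the project’s actual resource files.

apiVersion: v1
kind: Service
metadata:
  name: candidate
  labels:
    app: candidate            # app label required by Istio
spec:
  selector:
    app: candidate
  ports:
  - name: http                # Istio requires named service ports
    port: 8080
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: candidate
  labels:
    app: candidate            # app label required by Istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: candidate
  template:
    metadata:
      labels:
        app: candidate
        version: v1           # version label, used later by Grafana to delineate metrics
    spec:
      containers:
      - name: candidate
        image: example/candidate-service:0.1.0   # illustrative image
        ports:
        - containerPort: 8080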

The next set of code changes were to my existing Ingress resource file. The requirements for an Ingress resource using Istio are explained here. The first change: Istio ignores all annotations other than kubernetes.io/ingress.class: istio (note line 7, below). The next change: if using HTTPS, the secret containing your TLS/SSL certificate and private key must be called istio-ingress-certs; all other names will be ignored (note line 10, below). Related and critically important, that secret must be deployed to the istio-system namespace, not the application’s namespace. The last change: for my prefix-match routing rules, I needed to change the rules from /{service_name} to /{service_name}/.*. The /.* is a special Istio notation that is used to indicate a prefix match (note lines 14, 18, and 22, below) (gist).
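Putting those three changes together, a hedged sketch of the modified Ingress might look like the following; the resource name, service names, and port are illustrative.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: voter-ingress
  namespace: voter-api
  annotations:
    kubernetes.io/ingress.class: istio        # the only annotation Istio honors
spec:
  tls:
  - secretName: istio-ingress-certs           # must use this exact name; the secret lives in istio-system
  rules:
  - http:
      paths:
      - path: /candidate/.*                   # /.* is Istio's prefix-match notation
        backend:
          serviceName: candidate
          servicePort: 8080
      - path: /election/.*
        backend:
          serviceName: election
          servicePort: 8080
      - path: /voter/.*
        backend:
          serviceName: voter
          servicePort: 8080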

Installing Istio

To install Istio, you will first need to download and uncompress the correct distribution of Istio for your OS. Istio provides instructions for installation on various platforms.

My install-istio.sh script contains a variable, ISTIO_HOME, which should point to the root of your local Istio directory. We will also deploy all the current Istio add-ons, including Prometheus, Grafana, Zipkin, Service Graph, and Zipkin-to-Stackdriver (gist).
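As a rough sketch of what install-istio.sh does — the ISTIO_HOME path is illustrative and the file names assume the standard Istio 0.4.0 distribution layout — the script boils down to a series of kubectl apply commands.

#!/bin/sh
ISTIO_HOME="${HOME}/istio-0.4.0"   # illustrative path to the downloaded, uncompressed Istio distribution

# Core Istio components (Pilot, Mixer, CA, Ingress), with mutual TLS enabled
kubectl apply -f ${ISTIO_HOME}/install/kubernetes/istio-auth.yaml

# Istio Initializer, used later for automatic sidecar injection
kubectl apply -f ${ISTIO_HOME}/install/kubernetes/istio-initializer.yaml

# Istio add-ons
kubectl apply -f ${ISTIO_HOME}/install/kubernetes/addons/prometheus.yaml
kubectl apply -f ${ISTIO_HOME}/install/kubernetes/addons/grafana.yaml
kubectl apply -f ${ISTIO_HOME}/install/kubernetes/addons/servicegraph.yaml
kubectl apply -f ${ISTIO_HOME}/install/kubernetes/addons/zipkin.yaml
kubectl apply -f ${ISTIO_HOME}/install/kubernetes/addons/zipkin-to-stackdriver.yaml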

Once installed, from the GCP Cloud Console, an alternative to the native Kubernetes Dashboard, we should see the following Istio resources deployed and running. Below, note the three nodes are distributed across three zones within the GCP us-east1 region, the correct version of GKE is employed, Stackdriver logging and monitoring are enabled, and the Alpha Clusters features are also enabled.

GKE_001

And here, we see the nodes that comprise the GKE cluster.

GKE_001_1

GKE_001_2.PNG

Below, note the four components that comprise Istio: istio-ca, istio-ingress, istio-mixer, and istio-pilot. Additionally, note the five components that comprise the Istio add-ons.

GKE_002

Below, observe the Istio Ingress has automatically been assigned a public IP address by GCP, accessible on ports 80 and 443. This IP address is how we will communicate with applications running on our GKE cluster, behind the Istio Ingress Load Balancer. Later, we will see how the Istio Ingress Load Balancer knows how to route incoming traffic to those application endpoints, using the Voter API’s Ingress configuration.

GKE_003.PNG

Istio makes ample use of Kubernetes Config Maps and Secrets, to store configuration, and to store certificates for mutual TLS.

GKE_004

Creation of the GKE cluster and deployment of Istio to the cluster are complete. Next, I will demonstrate the deployment of the Voter API to the cluster. This will be used to demonstrate the capabilities of Istio on GKE.

Kubernetes Dashboard

In addition to the GCP Cloud Console, the native Kubernetes Dashboard is also available. To open it, use the kubectl proxy command and connect to the Kubernetes Dashboard at http://127.0.0.1:8001/ui. You should now be able to view and edit all resources, from within the Kubernetes Dashboard.

GKE_005_5

Sample Application

To demonstrate the functionality of Istio and GKE, I will deploy the Voter API. I have used variations of the sample Voter API application in several previous posts, including Architecting Cloud-Optimized Apps with AKS (Azure’s Managed Kubernetes), Azure Service Bus, and Cosmos DB and Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ. I suggest reading these two posts to better understand the Voter API’s design.

AKS

For this post, I have reconfigured the Voter API to use MongoDB’s Atlas Database-as-a-Service (DBaaS) as the NoSQL data-source for each microservice. The Voter API is connected to a MongoDB Atlas 3-node M10 instance cluster in GCP’s us-east1 (South Carolina) region. With Atlas, you have the choice of deploying clusters to GCP or AWS.

GKE_014

The Voter API will use CloudAMQP’s RabbitMQ-as-a-Service for its decoupled, eventually consistent, message-based architecture. For this post, the Voter API is configured to use a RabbitMQ cluster in GCP’s us-east1 (South Carolina) region; I chose a minimally-configured free version of RabbitMQ. CloudAMQP allows you to provision much more robust, multi-node clusters for Production, on GCP or AWS.

GKE_015_1.PNG

CloudAMQP provides access to their own Management UI, in addition to access to RabbitMQ’s Management UI.

GKE_015B

With the Voter API running and taking traffic, we can see each Voter API microservice instance, nine replicas in total, connected to RabbitMQ. They are each publishing and consuming messages off the two queues.

GKE_016

The GKE, MongoDB Atlas, and RabbitMQ clusters are all running in the same GCP Region. Optimizing the Voter API cloud architecture on GCP, within a single Region, greatly reduces network latency, increases API performance, and improves end-to-end application and infrastructure observability and traceability.

Installing the Voter API

For simplicity, I have divided the Voter API deployment into three parts. First, we create the new voter-api Kubernetes Namespace, followed by creating a series of Voter API Kubernetes Secrets (gist).

There are a total of five secrets: one secret for each of the three microservices’ MongoDB databases, one secret for the RabbitMQ connection string (shown below), and one secret containing a Let’s Encrypt SSL/TLS certificate chain and private key for the Voter API’s domain, api.voter-demo.com (shown below).
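A hedged sketch of this first step is shown below; the secret names, keys, and file paths are illustrative, not the script’s actual values.

kubectl create namespace voter-api

# One MongoDB Atlas connection-string secret per microservice (repeat for election and voter)
kubectl create secret generic candidate-mongodb -n voter-api \
  --from-literal=mongodb.uri='<atlas-connection-string>'

# RabbitMQ (CloudAMQP) connection string
kubectl create secret generic rabbitmq -n voter-api \
  --from-literal=rabbitmq.uri='<cloudamqp-connection-string>'

# Let's Encrypt certificate chain and private key; the secret must be named
# istio-ingress-certs and must live in the istio-system namespace
kubectl create secret tls istio-ingress-certs -n istio-system \
  --cert=fullchain.pem --key=privkey.pem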

GKE_011

GKE_006.PNG

GKE_007.PNG

Next, we create the microservice Pods, using the Kubernetes Deployment files, along with three ClusterIP-type Kubernetes Services and a Kubernetes Ingress. The Ingress contains the service endpoint configuration, which Istio Ingress will use to correctly route incoming external API traffic (gist).
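A hedged sketch of this second step, with illustrative file names:

kubectl apply -n voter-api \
  -f candidate-deployment.yaml -f election-deployment.yaml -f voter-deployment.yaml
kubectl apply -n voter-api \
  -f candidate-service.yaml -f election-service.yaml -f voter-service.yaml
kubectl apply -n voter-api -f voter-ingress.yaml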

Three Kubernetes Pods for each of the three microservices should be created, for a total of nine Pods. In the GCP Cloud UI’s Workloads (Kubernetes Deployments), you should see the following three resources. Note each Workload has three Pods, each containing one replica of the microservice.

GKE_010

In the GCP Cloud UI’s Discovery and Load Balancing tab, you should observe the following four resources. Note the Voter API Ingress endpoints for the three microservices, which are used by the Istio Proxy, discussed below.

GKE_009.PNG

Istio Proxy

Examining the Voter API deployment more closely, you will observe that each of the nine Voter API microservice Pods has two containers running within it (gist).
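One quick way to confirm this, sketched below, is listing the containers in each Pod; each Pod should show the microservice container plus the istio-proxy sidecar.

kubectl get pods -n voter-api \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'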

Along with the microservice container, there is an Istio Proxy container, commonly referred to as a sidecar container. Istio Proxy is an extended version of the Envoy proxy, Lyft’s well-known, highly performant edge and service proxy. The proxy sidecar container is injected automatically when the Voter API pods are created. This is possible because we deployed the Istio Initializer (istio-initializer.yaml). The Istio Initializer guarantees that Istio Proxy will be automatically injected into every microservice Pod. This is referred to as automatic sidecar injection. Below we see an example of one of three Candidate pods running the istio-proxy sidecar.

GKE_012

In the example above, all traffic to and from the Candidate microservice now passes through the Istio Proxy sidecar. With Istio Proxy, we gain several enterprise-grade features, including enhanced observability, service discovery and load balancing, credential injection, and connection management.

Manual Sidecar Injection

What if we have application components we do not want automatically managed with Istio Proxy? In that case, manual sidecar injection might be preferable to automatic sidecar injection with the Istio Initializer. For manual sidecar injection, we execute an istioctl kube-inject command for each of the Kubernetes Deployments. The command manually injects the Istio Proxy container configuration into the Deployment resource file, alongside each Voter API microservice container. On Mac and Linux, this command is similar to the following. Proxy injection is discussed in detail, here (gist).
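A minimal sketch of manual injection for a single Deployment, with an illustrative file name, pipes the injected manifest straight to kubectl:

istioctl kube-inject -f candidate-deployment.yaml | kubectl apply -n voter-api -f -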

External Service Egress

Whether you choose automatic or manual sidecar injection of the Istio Proxy, Istio’s egress rules currently only support HTTP and HTTPS requests. The Voter API makes external calls to its backend services, using two alternate protocols, MongoDB Wire Protocol (mongodb://) and RabbitMQ AMQP (amqps://). Since we cannot use an Istio egress rule for either protocol, we will use the includeIPRanges option with the istioctl kube-inject command to open egress to the two backend services. This will completely bypass Istio for a specific IP range. You can read more about calling external services directly, on Istio’s website.

You will need to modify the includeIPRanges argument within the create-voter-api-part3.sh script, adding your own GKE cluster’s IP ranges to the IP_RANGES variable. The two IP ranges can be found using the following GCP CLI command (gist).
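Sketched below with illustrative cluster and zone names, the command filters the cluster description down to the two CIDR ranges (clusterIpv4Cidr and servicesIpv4Cidr):

gcloud container clusters describe voter-api-cluster --zone us-east1-b \
  | grep -e clusterIpv4Cidr -e servicesIpv4Cidr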

The create-voter-api-part3.sh script also contains a modified version of the istioctl kube-inject command for each Voter API Deployment. Using the modified command, the original Deployment files are not altered; instead, a temporary copy of the Deployment file is created, into which Istio injects the required modifications. The temporary Deployment file is then used for the deployment, and then immediately deleted (gist).
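A hedged sketch of that pattern for a single Deployment, with illustrative IP ranges and file names:

IP_RANGES='10.12.0.0/14,10.15.240.0/20'   # clusterIpv4Cidr,servicesIpv4Cidr from the previous command

# Inject the sidecar into a temporary copy, deploy it, then remove the copy
istioctl kube-inject -f candidate-deployment.yaml \
  --includeIPRanges=${IP_RANGES} > candidate-deployment-istio.yaml
kubectl apply -n voter-api -f candidate-deployment-istio.yaml
rm candidate-deployment-istio.yaml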

Some would argue not having the actual deployed version of the file checked into source code control is an anti-pattern; in this case, I would disagree. If I need to redeploy, I would just run the istioctl kube-inject command again. You can always view, edit, and import the deployed YAML file, from the GCP CLI or GKE Management UI.

The amount of Istio configuration injected into each microservice Pod’s Deployment resource file is considerable. The Candidate Deployment file swelled from 68 lines to 276 lines of code! This hints at the power, as well as the complexity of Istio. Shown below is a snippet of the Candidate Deployment YAML, after Istio injection.

GKE_025

Confirming Voter API

Installation of the Voter API is now complete. We can validate the Voter API is working, and that traffic is being routed through Istio, using Postman. Below, we see a list of candidates successfully returned from the Voter microservice, through the Voter API. This means not only is the API running, but that messages have been successfully passed between the services, using RabbitMQ, and saved to the microservices’ corresponding MongoDB databases.

GKE_030

Below, note the server and x-envoy-upstream-service-time response headers. They both confirm the Voter API HTTPS traffic is being managed by Istio.

GKE_031.PNG

Observability

Observability is certainly one of the primary advantages of implementing Istio. For anyone like myself, who has spent many long and often frustrating hours installing, configuring, and managing monitoring systems for distributed platforms, Istio’s observability features are most welcome. Istio provides the Prometheus, Grafana, Zipkin, Service Graph, and Zipkin-to-Stackdriver add-ons. Combined with the monitoring capabilities of Backend-as-a-Service providers, such as MongoDB Atlas and CloudAMQP RabbitMQ, you get considerable visibility into your application, out-of-the-box.

Prometheus
First, we will look at Prometheus, a leading open-source monitoring solution. The easiest way to access the Prometheus UI, or any of the other add-ons’ UIs, is using port-forwarding. For example, with Prometheus, we use the following command (gist).
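A sketch of that command, assuming the add-on’s default app=prometheus label in the istio-system namespace:

kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') \
  9090:9090 &

The Prometheus UI is then available locally at http://localhost:9090.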

Alternatively, you could securely expose any of the Istio add-ons through the Istio Ingress, similar to how the Voter API microservice endpoints are exposed.

Prometheus collects time series metrics from both the Istio and Voter API components. Below we see two examples of typical metrics being collected; they include the 201 responses generated by the Candidate microservice, and the outflow of bytes by the Election microservice, over a given period of time.

GKE_022

GKE_022_1

Grafana
Although Prometheus is an excellent monitoring solution, Grafana, the leading open source software for time series analytics, provides a much easier way to visualize the metrics collected by Prometheus. Conveniently, Istio provides a dynamically-configured Grafana Dashboard, which will automatically display metrics for components deployed to GKE.

GKE_020B.PNG

Below, note the metrics collected for the Candidate and Election microservice replicas. Out-of-the-box, Grafana displays common HTTP KPIs, such as request rate, success rate, response codes, response time, and response size. Based on the version label included in the Deployment resource files, we can delineate metrics collected by the version of the Voter API microservices, in this case, v1 of the Candidate and Election microservices.

GKE_021B

Zipkin
Next, we have Zipkin, a leading distributed tracing system.

GKE_018

Since the Voter API application uses RabbitMQ to decouple communications between services, versus direct HTTP-based IPC, we won’t see any complex multi-segment traces. We will only see traces representing traffic to and from the microservices, which passes through the Istio Ingress.

GKE_019

Service Graph
Similar to Zipkin, Service Graph is not as valuable with the Voter API application as it could be with more complex applications. Below is a Service Graph view of the Voter API showing microservice version and requests/second to each microservice.

GKE_024

Stackdriver

One last tool we have to monitor our GKE cluster is Stackdriver. Stackdriver provides fine-grained monitoring, logging, and diagnostics. If you recall, we enabled Stackdriver logging and monitoring when we first provisioned the GKE cluster. Stackdriver allows us to examine the performance of the GKE cluster’s resources, review logs, and set alerts.

GKE_028

GKE_029

GKE_027

Zipkin-to-Stackdriver

When we installed Istio, we also installed the Zipkin-to-Stackdriver add-on. The Stackdriver Trace Zipkin Collector is a drop-in replacement for the standard Zipkin HTTP collector that writes to Google’s free Stackdriver Trace distributed tracing service. To use Stackdriver for traces originating from Zipkin, there is additional configuration required, which is commented out of the current version of the zipkin-to-stackdriver.yaml file (gist).

Instructions to configure the Zipkin-to-Stackdriver feature can be found here. Below is an example of how you might add the necessary configuration using a Kubernetes ConfigMap to inject the required user credentials JSON file (zipkin-to-stackdriver-creds.json) into the zipkin-to-stackdriver container. The new configuration can be seen on lines 27-44 (gist).
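As a rough, hedged sketch of one way this can be wired up — the ConfigMap name and mount path below are assumptions, not the gist’s actual contents — the credentials file is first loaded into the cluster:

kubectl create configmap zipkin-to-stackdriver-creds -n istio-system \
  --from-file=zipkin-to-stackdriver-creds.json

The zipkin-to-stackdriver Deployment then mounts the ConfigMap as a volume and points the collector at the mounted file, for example via the standard GOOGLE_APPLICATION_CREDENTIALS environment variable.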

Conclusion

Istio provides a significant amount of fine-grained management control to Kubernetes. Managed Kubernetes CaaS offerings like GKE, coupled with tools like Istio, will soon make running reliable and secure containerized applications in Production commonplace.


All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.



Docker Log Aggregation and Visualization Options with the ELK Stack

elk

As a Developer and DevOps Engineer, not that long ago, I spent a lot of time requesting logs from Operations teams for applications running in Production. Many organizations I’ve worked with have created elaborate systems for requesting, granting, and revoking access to application logs. Requesting and obtaining access to logs typically took hours or days, or simply never got approved. Since most enterprise applications are composed of individual components running on multiple application and web servers, it was necessary to request multiple logs. What was often a simple problem to diagnose and fix became an unnecessarily time-consuming ordeal.

Hopefully, you are still not in this situation. Given the average complexity of today’s modern, distributed, containerized application platforms, accessing individual logs is simply unrealistic and ineffective. The solution is log aggregation and visualization.

Log Aggregation and Visualization

In the context of this post, log aggregation and visualization is defined as the collection, centralized storage, and ability to simultaneously display application logs from multiple, dissimilar sources. Take a typical modern web application. The frontend UI might be built with Angular, React, or Node. The UI is likely backed by multiple RESTful services, possibly built in Java Spring Boot or Python Flask, and a database or databases, such as MongoDB or MySQL. To support the application, there are auxiliary components, such as API gateways, load-balancers, and messaging brokers. These components are likely deployed as multiple instances, for performance and availability. All instances generate application logs in varying formats.

When troubleshooting an application, such as the one described above, you must often trace a user’s transaction from UI through firewalls and gateways, to the web server, back through the API gateway, to multiple backend services via load-balancers, through message queues, to databases, possibly to external third-party APIs, and back to the client. This is why log aggregation and visualization is essential.

Logging Options

Log aggregation and visualization solutions typically come in three varieties: cloud-hosted by a SaaS provider, a service provided by your Cloud provider, and self-hosted, either on-premises or in the cloud. Cloud-hosted SaaS solutions include Loggly, Splunk, Logentries, and Sumo Logic. Some of these solutions, such as Splunk, are also available as a self-hosted service. Cloud-provider solutions include AWS CloudWatch and Azure Application Insights. Most hosted solutions have recurring pricing models based on the volume of logs or the number of server nodes being monitored.

Self-hosted solutions include Graylog 2, Nagios Log Server, Splunk Free, and elastic’s Elastic Stack. The ELK Stack (Elasticsearch, Logstash, and Kibana), as it was previously known, has been re-branded the Elastic Stack, which now includes Beats. Beats is elastic’s lightweight shipper that sends data from edge machines to Logstash and Elasticsearch.

Often, you will see other components mentioned in the self-hosted space, such as Fluentd, syslog, and Kafka. These are examples of log aggregators or datastores for logs. They lack the combined abilities to collect, store, and display multiple logs. These components are generally part of a larger log aggregation and visualization solution.

This post will explore self-hosted log aggregation and visualization of a Dockerized application on AWS, using the ELK Stack. The post details three common variations of log collection and routing to ELK, using various Docker logging drivers, along with Logspout, Fluentd, and GELF (Graylog Extended Log Format).

Docker Swarm Cluster

The post’s example application is deployed to a Docker Swarm, built on AWS, using Docker CE for AWS. Docker has automated the creation of a Swarm on AWS using Docker Cloud, right from your desktop. Creating a Swarm is as easy as inputting a few options and clicking build. Docker uses an AWS CloudFormation script to provision all the necessary AWS resources for the Docker Swarm.

swam_mode

For this post’s logging example, I built a minimally configured Docker Swarm cluster, consisting of a single Manager Node and three Worker Nodes. The four Swarm nodes, all EC2 instances, are behind an AWS ELB, inside a new AWS VPC.

Logging Diagram AWS Diagram 3D

As seen with the docker node ls command, the Docker Swarm will look similar to the following.

Sample Application Components

Multiple containerized copies of a simple Java Spring Boot RESTful Hello-World service, available on GitHub, along with the associated logging aggregators, are deployed to Worker Node 1 and Worker Node 2. We will explore each of these application components later in the post. The containerized components consist of the following:

  1. Fluentd (garystafford/custom-fluentd)
  2. Logspout (garystafford/custom-logspout)
  3. NGINX (garystafford/custom-nginx)
  4. Hello-World Service using Docker’s default JSON file logging driver
  5. Hello-World Service using Docker’s GELF logging driver
  6. Hello-World Service using Docker’s Fluentd logging driver

NGINX is used as a simple frontend API gateway, which routes HTTP requests to each of the three logging variations of the Hello-World service (garystafford/hello-world).

A single container, running the entire ELK Stack (garystafford/custom-elk) is deployed to Worker Node 3. This is to isolate the ELK Stack from the application. Typically, in a real environment, ELK would be running on separate infrastructure for performance and security, not alongside your application. Running a docker service ls, the deployed services appear as follows.

Portainer

A single instance of Portainer (Docker Hub: portainer/portainer) is deployed on the single Manager Node. Portainer, amongst other things, provides a detailed view of Docker Swarm, showing each Swarm Node and the service containers deployed to them.

portainer

In my opinion, Portainer provides a much better user experience than Docker Enterprise Edition’s most recent Universal Control Plane (UCP). In the past, I have also used Visualizer (dockersamples/visualizer), one of the first open source solutions in this space. However, since the Visualizer project moved to Docker, it seems like the development of new features has completely stalled out. A good list of container tools can be found on StackShare.

Deployment

All the Docker service containers are deployed to the AWS-based Docker Swarm using a single Docker Compose file. The order of service startup is critical. ELK should fully startup first, followed by Fluentd and Logspout, then the three sets of Hello-World instances, and finally NGINX.

To deploy and start all the Docker services correctly, there are two scripts in the GitHub repository. First, execute the following command, sh ./stack_deploy.sh. This will deploy the Docker service stack and create an overlay network, containing all the services as configured in the docker-compose.yml file. Then, to ensure the services start in the correct sequence, execute sh ./service_update.sh. This will restart each service in the correct order, with pauses between services to allow time for startup; a bit of a hack, but effective.
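Roughly, and with illustrative stack and service names, the two scripts boil down to the following commands.

# stack_deploy.sh - deploy the full service stack and create the overlay network
docker stack deploy --compose-file docker-compose.yml logging-demo

# service_update.sh - force a restart of each service in dependency order,
# pausing between services to allow time for startup (ELK first, NGINX last)
docker service update --force logging-demo_elk
sleep 60
docker service update --force logging-demo_fluentd
docker service update --force logging-demo_logspout
sleep 30
docker service update --force logging-demo_hello-fluentd
docker service update --force logging-demo_hello-logspout
docker service update --force logging-demo_hello-gelf
sleep 30
docker service update --force logging-demo_nginx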

Collection and Routing Examples

Below is a diagram showing all the components comprising this post’s examples, and includes the protocols and ports on which they communicate. Following, we will look at three variations of self-hosted log collection and routing options for ELK.

Logging Diagram

Example 1: Fluentd

The first example of log aggregation and visualization uses Fluentd, a Cloud Native Computing Foundation (CNCF) hosted project. Fluentd is described as ‘an open source data collector for unified logging layer.’ A container running Fluentd with a custom configuration runs globally on each Worker Node where the applications are deployed, in this case, the hello-fluentd Docker service. Here is the custom Fluentd configuration file (fluent.conf):
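The actual file is in the GitHub repository; as a rough, hedged sketch of its shape — the plugin parameters, match patterns, and Elasticsearch host below are assumptions — it accepts forwarded Docker log entries, recombines multiline entries, and writes them to Elasticsearch.

<source>
  @type forward                     # receive entries from the Docker fluentd logging driver
  port 24224
  bind 0.0.0.0
</source>

<filter **>
  @type concat                      # fluent-plugin-concat: recombine multiline stack traces
  key log
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}/
</filter>

<match **>
  @type elasticsearch               # fluent-plugin-elasticsearch: write directly to Elasticsearch
  host elk                          # illustrative service name of the ELK container
  port 9200
  logstash_format true
  flush_interval 5s
</match>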

The Hello-World service is configured through the Docker Compose file to use the Fluentd Docker logging driver. The log entries from the Hello-World containers on the Worker Nodes are diverted from being output to JSON files, using the default JSON file logging driver, to the Fluentd container instance on the same host as the Hello-World container. The Fluentd container is listening for TCP traffic on port 24224.

Fluentd then sends the individual log entries to Elasticsearch directly, bypassing Logstash. Fluentd log entries are sent via HTTP to port 9200, Elasticsearch’s JSON interface.

Logging Diagram Fluentd

Using Fluentd as a transport method, log entries appear as JSON documents in ELK, as shown below. This Elasticsearch JSON document is an example of a single line log entry. Note the primary field container identifier, when using Fluentd, is container_id. This field will vary depending on the Docker driver and log collector, as seen in the next two logging examples.

fluentd-log.png

The next example shows a Fluentd multiline log entry. Using the Fluentd Concat filter plugin (fluent-plugin-concat), the individual lines of a stack trace from a Java runtime exception, thrown by the hello-fluentd Docker service, have been recombined into a single Elasticsearch JSON document.

fluentd-multiline

In the above log entries, note the DEPLOY_ENV and SERVICE_NAME fields. These values were injected into the Docker Compose file, as environment variables, during deployment of the Hello-World service. The Fluentd Docker logging driver applies these as env options, as shown in the example Docker Compose service definition below.
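A minimal sketch of such a service definition — the image tag, environment values, and Fluentd address are illustrative:

version: '3.2'
services:
  hello-fluentd:
    image: garystafford/hello-world:latest      # illustrative image tag
    environment:
      SERVICE_NAME: hello-fluentd
      DEPLOY_ENV: production                    # illustrative value, injected at deploy time
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224        # Fluentd container published on the same host
        env: SERVICE_NAME,DEPLOY_ENV            # pass these variables through to each log entry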

Example 2: Logspout

The second example of log aggregation and visualization uses GliderLabs’ Logspout. Logspout is described by GliderLabs as ‘a log router for Docker containers that runs inside Docker. It attaches to all containers on a host, then routes their logs wherever you want. It also has an extensible module system.’ In the post’s example, a container running Logspout with a custom configuration runs globally on each Worker Node where the applications are deployed, identical to Fluentd.

The hello-logspout Docker service is configured through the Docker Compose file to use the default JSON file logging driver. According to Docker, ‘by default, Docker captures the standard output (and standard error) of all your containers and writes them in files using the JSON format. The JSON format annotates each line with its origin (stdout or stderr) and its timestamp. Each log file contains information about only one container.’

Normally, it is not necessary to explicitly set the default Docker logging driver to JSON files. However, in this case, Docker CE for AWS automatically configured each Swarm Node’s Docker daemon to use the Amazon CloudWatch Logs logging driver as its default. The default driver may be seen by running the docker info command while attached to the Docker daemon. Note line 12 in the snippet below.

The hello-logspout Docker service containers on the Worker Nodes send log entries to individual JSON files. The Logspout container on each host then retrieves and routes those JSON log entries to Logstash, within the ELK container running on Worker Node 3, over UDP to port 5000. Logstash, which is explicitly listening for JSON via UDP on port 5000, then outputs those log entries to Elasticsearch, via HTTP to port 9200, Elasticsearch’s JSON interface.

Logging Diagram Logspout

Using Logspout as a transport method, log entries appear as JSON documents in ELK, as shown below. Note the field differences between the Fluentd log entry above and this entry. There are a number of significant variations, making it difficult to use both methods across the same distributed application. For example, the main body of the log entry is contained in the message field using Logspout, but in the log field using Fluentd. The name of the Docker container, which serves as the primary means of identifying the container instance, is the docker.name field with Logspout, but container.name for Fluentd.

Another helpful field, provided by Logspout, is the docker.image field. This is beneficial when associating code issues to a particular code release. In this example, the Hello-World service uses the latest Docker image tag, which is not considered best practice. However, in a real production environment, the Docker tags often represent the incremental build number from the CI/CD system, which is tied to a specific build of the code.

logspout-log

The other challenge I have had with Logspout is passing the env and tag options, such as DEPLOY_ENV and SERVICE_NAME, as seen previously with the Fluentd example. Note they are blank in the above sample. It is possible, but not as straightforward as with Fluentd, and requires interacting directly with the Docker daemon on each Worker node.

Example 3: Graylog Extended Format (GELF)

The third and final example of log aggregation and visualization uses the Docker Graylog Extended Format (GELF) logging driver. According to the GELF website, ‘the Graylog Extended Log Format (GELF) is a log format that avoids the shortcomings of classic plain syslog.’ These syslog shortcomings include a maximum length of 1024 bytes, no data types, multiple dialects making parsing difficult, and no compression.

The GELF format, designed to work with the Graylog Open Source Log Management Server, works equally well with the ELK Stack. With the GELF logging driver, there is no intermediary logging collector and router, as with Fluentd and Logspout. The hello-gelf Docker service is configured through its Docker Compose file to use the GELF logging driver. The two hello-gelf Docker service containers on the Worker Nodes send log entries directly to Logstash, running within the ELK container, running on Worker Node 3, via UDP to port 12201.

Logstash, which is explicitly listening for UDP traffic on port 12201, then outputs those log entries to Elasticsearch, via HTTP to port 9200, Elasticsearch’s JSON interface.
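A hedged sketch of the corresponding Compose service definition — the image tag, environment values, and the ELK host address are illustrative:

version: '3.2'
services:
  hello-gelf:
    image: garystafford/hello-world:latest          # illustrative image tag
    environment:
      SERVICE_NAME: hello-gelf
      DEPLOY_ENV: production                        # illustrative value, injected at deploy time
    logging:
      driver: gelf
      options:
        gelf-address: udp://elk.example.com:12201   # must be a resolvable hostname or IP for the ELK host
        tag: docker.{{.Name}}                       # optional tag, discussed below
        env: SERVICE_NAME,DEPLOY_ENV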

Logging Diagram GELF

Using the Docker Graylog Extended Format (GELF) logging driver as a transport method, log entries appear as JSON documents in ELK, as shown below. They are the most verbose of the three formats.

gelf-log

Again, note the field differences between the Fluentd and Logspout log entries above, and this GELF entry. Both the field names of the main body of the log entry and the name of the Docker container are different from both previous examples.

Another bonus with GELF, each entry contains the command field, which stores the command used to start the container’s process. This can be helpful when troubleshooting application startup issues. Often, the exact container startup command might have been injected into the Docker Compose file at deploy time by the CI Server and contained variables, as is the case with the Hello-World service. Reviewing the log entry in Kibana for the command is much easier and safer than logging into the container and executing commands to check the running process for the startup command.

Unlike Logspout, and similar to Fluentd, note the DEPLOY_ENV and SERVICE_NAME fields are present in the GELF entry. These were injected into the Docker Compose file as environment variables during deployment of the Hello-World service. The GELF Docker logging driver applies these as env options. With GELF the entry also gets the optional tag, which was passed in the Docker Compose file’s service definition, tag: docker.{{.Name}}.

Unlike Fluentd, GELF and Logspout do not easily handle multiline logs. Below is an example of a multiline Java runtime exception thrown by the hello-gelf Docker service. The stack trace is not recombined into a single JSON document in Elasticsearch, like in the Fluentd example. The stack trace exists as multiple JSON documents, making troubleshooting much more difficult. Logspout entries will look similar to GELF.

gelf-multiline

Pros and Cons

In my opinion, and based on my level of experience with each of the self-hosted logging collection and routing options, the following are some of their pros and cons.

Fluentd

  • Pros
    • Part of CNCF, Fluentd is becoming the de facto logging standard for cloud-native applications
    • Easily extensible via a large number of plugins
    • Easily containerized
    • Ability to easily handle multiline log entries (i.e., Java stack traces)
    • Ability to use the Fluentd container’s service name as the Fluentd address, not an IP address or DNS resolvable hostname
  • Cons
    • Using Docker’s Fluentd logging driver, if the Fluentd container is not available on the container’s host, the container logging to Fluentd will fail (major con!)

Logspout

  • Pros
    • Doesn’t require a change to the default Docker JSON file logging driver, logs are still viewable via docker logs command (big plus!)
    • Easy to add and remove functionality via Golang modules
    • Easily containerized
  • Cons
    • Inability to easily handle multiline log entries (i.e., Java stack traces)
    • Logspout containers must be restarted if ELK is restarted, in order to restore logging
    • To reach Logstash, Logspout must use a DNS resolvable hostname or IP address, not the name of the ELK container on the same overlay network (big con!)

GELF

  • Pros
    • Application containers, using the Docker GELF logging driver, will not fail if the downstream Logstash instance is unavailable
    • The Docker GELF logging driver allows compression of logs for shipment to Logstash
  • Cons
    • Inability to easily handle multiline log entries (i.e., Java stack traces)

Conclusion

Of course, there are other self-hosted logging collection and routing options, including elastic’s Beats, journald, and various syslog servers. Each has its pros and cons, depending on your project’s needs. After building and maintaining several self-hosted mission-critical log aggregation and visualization solutions, it is easy to see the appeal of an off-the-shelf cloud-hosted SaaS solution such as Splunk, or Cloud provider solutions such as Application Insights.

All opinions in this post are my own and not necessarily the views of my current employer or their clients.



Docker Enterprise Edition: Multi-Environment, Single Control Plane Architecture for AWS

Final_DockerEE_21 (1)

Designing a successful, cloud-based containerized application platform requires a balance of performance and security with cost, reliability, and manageability. Ensuring that a platform meets all functional and non-functional requirements, while remaining within budget and easily maintainable, can be challenging.

As Cloud Architect and DevOps Team Lead, I recently participated in the development of two architecturally similar, lightweight, cloud-based containerized application platforms. From the start, both platforms were architected to maximize security and performance, while minimizing cost and operational complexity. The later platform was built on AWS with Docker Enterprise Edition.

Docker Enterprise Edition

Released in March of this year, Docker Enterprise Edition (Docker EE) is a secure, full-featured container-based management platform. There are currently eight versions of Docker EE, available for Windows Server, Azure, AWS, and multiple Linux distros, including RHEL, CentOS, Ubuntu, SUSE, and Oracle.

Docker EE is one of several production-grade container orchestration Platforms as a Service (PaaS). Some of the other container platforms in this category include:

Docker Community Edition (CE), Kubernetes, and Apache Mesos are free and open-source. Some providers, such as Rancher Labs, offer enterprise support for an additional fee. Cloud-based services, such as Red Hat Openshift Online, AWS, GCE, and ACS, charge the typical usage monthly fee. Docker EE, similar to Mesosphere Enterprise DC/OS and Red Hat OpenShift, is priced on a per node/per year annual subscription model.

Docker EE is currently offered in three subscription tiers, including Basic, Standard, and Advanced. Additionally, Docker offers Business Day and Business Critical support. Docker EE’s Advanced Tier adds several significant features, including secure multi-tenancy with node-based isolation, and image security scanning and continuous vulnerability scanning, as part of Docker EE’s Docker Trusted Registry.

Architecting for Affordability and Maintainability

Building an enterprise-scale application platform, using public cloud infrastructure, such as AWS, and a licensed Containers-as-a-Service (CaaS) platform, such as Docker EE, can quickly become complex and costly to build and maintain. Based on current list pricing, the cost of a single Linux node ranges from USD 75 per month for basic support, up to USD 300 per month for Docker Enterprise Edition Advanced with Business Critical support. Although cost is relative to the value generated by the application platform, nonetheless, architects should always strive to avoid unnecessary complexity and cost.

Recurring operational costs, such as licensed software subscriptions, support contracts, and monthly cloud-infrastructure charges, are often overlooked by project teams during the build phase. Accurately forecasting recurring costs of a fully functional Production platform, under expected normal load, is essential. Teams often overlook how Docker image registries, databases, data lakes, and data warehouses quickly swell, inflating the monthly cloud-infrastructure charges required to maintain the platform. The need to control cloud costs has led to the growth of third-party cloud management solutions, such as CloudCheckr Cloud Management Platform (CMP).

Shared Docker Environment Model

Most software development projects require multiple environments in which to continuously develop, test, demonstrate, stage, and release code. Creating separate environments, replete with their own Docker EE Universal Control Plane (aka Control Plane or UCP), Docker Trusted Registry (DTR), AWS infrastructure, and third-party components, would guarantee a high level of isolation and performance. However, replicating all elements in each environment would add considerable build and run costs, as well as unnecessary complexity.

On both recent projects, we chose to create a single AWS Virtual Private Cloud (VPC), which contained all of the non-production environments required by our project teams. In parallel, we built an entirely separate Production VPC for the Production environment. I’ve seen this same pattern repeated with Red Hat OpenStack and Microsoft Azure.

Production

Isolating Production from the lower environments is essential to ensure security, and to eliminate non-production traffic from impacting the performance of Production. Corporate compliance and regulatory policies often dictate complete Production isolation. Having separate infrastructure, security appliances, role-based access controls (RBAC), configuration and secret management, and encryption keys and SSL certificates, are all required.

For complete separation of Production, different AWS accounts are frequently used. Separate AWS accounts provide separate billing, usage reporting, and AWS Identity and Access Management (IAM), amongst other advantages.

Performance and Staging

Unlike Production, there are few reasons to completely isolate lower-environments from one another. The exception I’ve encountered is Performance and Staging. These two environments are frequently separated from other environments to ensure the accuracy of performance testing and release staging activities. Performance testing, in particular, can generate enormous load on systems, which if not isolated, will impair adjacent environments, applications, and monitoring systems.

On a few recent projects, to reduce cost and complexity, we repurposed the UAT environment for performance testing, once user-acceptance testing was complete. Performance testing was conducted during off-peak development and testing periods, with access to adjacent environments blocked.

The multi-purpose UAT environment further served as a Staging environment. Applications were deployed and released to the UAT and Performance environments, following a nearly-identical process used for Production. Hotfixes to Production were also tested in this environment.

Example of Shared Environments

To demonstrate how to architect a shared non-production Docker EE environment, which minimizes cost and complexity, let’s examine the example shown below. In the example, built on AWS with Docker EE, there are four typical non-production environments, CI/CD, Development, Test, and UAT, and one Production environment.

Docker_EE_AWS_Diagram_01

In the example, there are two separate VPCs, the Production VPC, and the Non-Production VPC. There is no reason to configure VPC Peering between the two VPCs, as there is no need for direct communication between the two. Within the Non-Production VPC, to the left in the diagram, there is a cluster of three Docker EE UCP Manager EC2 nodes, a cluster of three DTR Worker EC2 nodes, and the four environments, consisting of varying numbers of EC2 Worker nodes. Production, to the right of the diagram, has its own cluster of three UCP Manager EC2 nodes and a cluster of six EC2 Worker nodes.

Single Non-Production UCP

As a primary means of reducing cost and complexity, in the example, a single minimally-sized Docker EE UCP cluster of three Manager nodes orchestrates activities across all four non-production environments. The alternative would be a separate UCP cluster for each environment; that means nine more Manager nodes to configure and maintain.

The UCP users, teams, organizations, access controls, Docker Secrets, overlay networks, and other UCP features, for all non-production environments, are managed through the single Control Plane. All deployments to all the non-production environments, from the CI/CD server, are performed through the single Control Plane. Each UCP Manager node is deployed to a different AWS Availability Zone (AZ) to ensure high-availability.

Shared DTR

As another means of reducing cost and complexity, in the example, a Docker EE DTR cluster of three Worker nodes contains all Docker image repositories. Both the non-production and the Production environments use this DTR as a secure source of all Docker images. Not having to replicate image repositories, access controls, and infrastructure, or figure out how to migrate images between two separate DTR clusters, is a significant time, cost, and complexity savings. Each DTR Worker node is also deployed to a different AZ to ensure high-availability.

Using a shared DTR between non-production and Production is an important security consideration your project team needs to consider. A single DTR, shared between non-production and Production, comes with inherent availability and security risks, which should be understood in advance.

Separate Non-Production Worker Nodes

In the shared non-production environments example, each environment has dedicated AWS EC2 instances configured as Docker EE Worker nodes. The number of Worker nodes is determined by the requirements for each environment, as dictated by the project’s Development, Testing, Security, and DevOps teams. Like the UCP and DTR clusters, each Worker node, within an individual environment, is deployed to a different AZ to ensure high-availability and mimic the Production architecture.

Minimizing the number of Worker nodes in each environment, as well as the type and size of each EC2 node, offers a significant potential cost and administrative savings.

Separate Environment Ingress

In the example, the UCP, DTR, and each of the four environments are accessed through separate URLs, using AWS Hosted Zone CNAME records (subdomains). Encrypted HTTPS traffic is routed through a series of security appliances, depending on traffic type, to individual private AWS Elastic Load Balancers (ELB), one for both UCPs, the DTR, and each of the environments. Each ELB load-balances traffic to the Docker EE nodes associated with the specific traffic. All firewalls, ELBs, and the UCP and DTR are secured with a high-grade wildcard SSL certificate.

AWS_ELB

Separate Data Sources

In the shared non-production environments example, there is one Amazon Relational Database Service (RDS) instance in non-Production and one in Production. Both RDS instances are replicated across multiple Availability Zones. Within the single shared non-production RDS instance, there are four separate databases, one per non-production environment. This architecture trades the potential database performance of separate RDS instances for reduced cost and complexity.

Maintaining Environment Separation

Node Labels

To obtain sufficient environment separation while using a single UCP, each Docker EE Worker node is tagged with an environment node label. The node label indicates which environment the Worker node is associated with. For example, in the screenshot below, a Worker node is assigned to the Development environment by tagging it with the key of environment and the value of dev.

Node_Label
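For example, the label shown in the screenshot can be applied from the command line; the node name below is illustrative.

docker node update --label-add environment=dev worker-node-1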

* The Docker EE screens shown here are from UCP 2.1.5, not the recently released 2.2.x, which has an updated UI appearance.

Each service’s Docker Compose file uses deployment placement constraints, which indicate where Docker should or should not deploy services. In the hello-world Docker Compose file example below, the node.labels.environment constraint is set to the ENVIRONMENT variable, which is set during container deployment by the CI/CD server. This constraint directs Docker to only deploy the hello-world service to nodes that carry a node.labels.environment label whose value matches the ENVIRONMENT variable value.
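A hedged sketch of such a Compose file fragment — the service name, image tag, and replica count are illustrative:

version: '3.2'
services:
  hello-world:
    image: garystafford/hello-world:latest      # illustrative image tag
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.labels.environment == ${ENVIRONMENT}   # substituted at deploy time from the CI/CD server's environment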

Deploying from CI/CD Server

The ENVIRONMENT value is set as an environment variable, which is then used by the CI/CD server, running a docker stack deploy or a docker service update command, within a deployment pipeline. Below is an example of how to use the environment variable as part of a Jenkins pipeline as code Jenkinsfile.
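A hedged sketch of such a Jenkinsfile — the agent, stack name, and environment value are illustrative, not the actual pipeline:

pipeline {
  agent any
  environment {
    ENVIRONMENT = 'test'   // one value per target environment, set by the CI/CD job
  }
  stages {
    stage('Deploy') {
      steps {
        // Docker substitutes ${ENVIRONMENT} into the Compose file's placement
        // constraint when the stack is deployed
        sh 'docker stack deploy --compose-file docker-compose.yml ${ENVIRONMENT}'
      }
    }
  }
}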

Centralized Logging and Metrics Collection

Centralized logging and metrics collection systems are used for application and infrastructure dashboards, monitoring, and alerting. In the shared non-production environment examples, the centralized logging and metrics collection systems are internal to each VPC, but reside on separate EC2 instances and are not registered with the Control Plane. In this way, the logging and metrics collection systems should not impact the reliability, performance, and security of the applications running within Docker EE. In the example, Worker nodes run a containerized copy of fluentd, which collects and pushes logs to ELK’s Elasticsearch.

Logging and metrics collection systems could also be supplied by external cloud-based SaaS providers, such as Loggly, Sysdig, and Datadog, or by the platform’s cloud-provider, such as Amazon CloudWatch.

With four environments running multiple containerized copies of each service, figuring out which log entry came from which service instance, requires multiple data points. As shown in the example Kibana UI below, the environment value, along with the service name and container ID, as well as the git commit hash and branch, are added to each log entry for easier troubleshooting. To include the environment, the value of the ENVIRONMENT variable is passed to Docker’s fluentd log driver as an env option. This same labeling method is used to tag metrics.

ELK

Separate Docker Service Stacks

For further environment separation within the single Control Plane, each environment’s services are deployed as part of their own Docker service stack. Each service stack contains all services that comprise an application running within a single environment. Multiple stacks may be required to support multiple, distinct applications within the same environment.

For example, in the screenshot below, a hello-world service container, built with a Docker image, tagged with build 59 of the Jenkins continuous integration pipeline, is deployed as part of both the Development (dev) and Test service stacks. The CD and UAT service stacks each contain different versions of the hello-world service.

Hello-World-UCP

Separate Docker Overlay Networks

For additional environment separation within the single non-production UCP, all Docker service stacks associated with an environment, reside on the same Docker overlay network. Overlay networks manage communications among the Docker Worker nodes, enabling service-to-service communication for all services on the same overlay network while isolating services running on one network from services running on another network.

In the example screenshot below, the hello-world service, a member of the test service stack, is running on the test_default overlay network.

Network

Cleaning Up

Having distinct environment-centric Docker service stacks and overlay networks makes it easy to clean up an environment, without impacting adjacent environments. Both service stacks and overlay networks can be removed to clear an environment’s contents.

Separate Performance Environment

In the alternative example below, a Performance environment has been added to the Non-Production VPC. To ensure a higher level of isolation, the Performance environment has its own UCP, RDS, and ELBs. The Performance environment shares the DTR, as well as the security, logging, and monitoring components, with the rest of the non-production environments.

Below, the Performance environment has half the number of Worker nodes as Production. Performance results can be scaled for expected Production performance, given more nodes. Alternately, the number of nodes can be scaled up temporarily to match Production, then scaled back down to a minimum after testing is complete.

Docker_EE_AWS_Diagram_02

Shared DevOps Tooling

All environments leverage shared Development and DevOps resources, deployed to a separate VPC. Resources include Agile Application Lifecycle Management (ALM), such as JIRA or CA Agile Central, source control repository management (SCM), such as GitLab or Bitbucket, binary repository management, such as Artifactory or Nexus, and a CI/CD solution, such as Jenkins, TeamCity, or Bamboo.

From the DevOps VPC, Docker images are pushed and pulled from the DTR in the Non-Production VPC. Deployments of container-based applications are executed from the DevOps VPC CI/CD server to the non-production, Performance, and Production UCPs. Separate DevOps CI/CD pipelines and access controls are essential in maintaining the separation of the non-production and Production environments.

Docker_EE_AWS_Diagram_03

Complete Platform

Several common components found in a Docker EE cloud-based AWS platform were discussed in the post. However, a complete AWS application platform has many more moving parts. Below is a comprehensive list of components, including DevOps tooling, organized into two categories: 1) common components that can potentially be shared across the non-production environments to save cost and complexity, and 2) components that should be replicated in each non-production environment for security and performance.

Shared Non-Production Components:

  • AWS
    • Virtual Private Cloud (VPC), Region, Availability Zones
    • Route Tables, Network ACLs, Internet Gateways
    • Subnets
    • Some Security Groups
    • IAM Groups, User, Roles, Policies (RBAC)
    • Relational Database Service‎ (RDS)
    • ElastiCache
    • API Gateway, Lambdas
    • S3 Buckets
    • Bastion Servers, NAT Gateways
    • Route 53 Hosted Zone (Registered Domain)
    • EC2 Key Pairs
    • Hardened Linux AMI
  • Docker EE
    • UCP and EC2 Manager Nodes
    • DTR and EC2 Worker Nodes
    • UCP and DTR Users, Teams, Organizations
    • DTR Image Repositories
    • Secret Management
  • Third-Party Components/Products
    • SSL Certificates
    • Security Components: Firewalls, Virus Scanning, VPN Servers
    • Container Security
    • End-User IAM
    • Directory Service
    • Log Aggregation
    • Metric Collection
    • Monitoring, Alerting
    • Configuration and Secret Management
  • DevOps
    • CI/CD Pipelines as Code
    • Infrastructure as Code
    • Source Code Repositories
    • Binary Artifact Repositories

Isolated Non-Production Components:

  • AWS
    • Route 53 Hosted Zones and Associated Records
    • Elastic Load Balancers (ELB)
    • Elastic Compute Cloud (EC2) Worker Nodes
    • Elastic IPs
    • ELB and EC2 Security Groups
    • RDS Databases (Single RDS Instance with Separate Databases)

All opinions in this post are my own and not necessarily the views of my current employer or their clients.



The Evolving Role of DevOps in Emerging Technologies

31970399_m

Growth of DevOps

The adoption of DevOps practices by global organizations has become mainstream, according to many recent industry studies. For instance, a late 2016 study, conducted by IDG Research for Unisys Corporation of global enterprise organizations, found 38 percent of respondents had already adopted DevOps, while another 29 percent were in the planning phase, and 17 percent in the evaluation stage. Adoption rates were even higher, 49 percent versus 38 percent, for larger organizations with 500 or more developers.

Another recent 2017 study by Red Gate Software, The State of Database DevOps, based on 1,000 global organizations, found 47 percent of the respondents had already adopted DevOps practices, with another 33 percent planning on adopting DevOps practices within the next 24 months. Similar to the Unisys study, prior adoption rates were considerably higher, 59 percent versus 47 percent, for larger organizations with over 1,000 employees.

Emerging Technologies

Although DevOps originated to meet the needs of Agile software development to release more frequently, DevOps is no longer just continuous integration and continuous delivery. As more organizations undergo a digital transformation and adopt disruptive technologies to drive business success, the role of DevOps continues to evolve and expand.

Emerging technology trends, such as Machine Learning, Artificial Intelligence (AI), and Internet of Things (IoT/IIoT), serve to both influence DevOps practices, as well as create the need for the application of DevOps practices to these emerging technologies. Let’s examine the impact of some of these emerging technology trends on DevOps in this brief, two-part post.

Mobile

Although mobile application development is certainly not new, DevOps practices around mobile continue to evolve as mobile becomes the primary application platform for many organizations. Mobile applications have unique development and operational requirements. Take for example UI functional testing. Whereas web application developers often test against a relatively small matrix of popular web browsers and operating systems (Desktop Browser Market Share – Net Application.com), mobile developers must test against a continuous outpouring of new mobile devices, both tablets and phones (Test on the right mobile devices – BrowserStack). The complexity of automating the testing of such a large number of mobile devices has resulted in the growth of specialized cloud-based testing platforms, such as BrowserStack and SauceLabs.

Cloud

Similar to Mobile, the Cloud is certainly not new. However, as more firms move their IT operations to the Cloud, DevOps practices have had to adapt rapidly. The need to adjust is no more apparent than with Amazon Web Services. Currently, AWS lists no less than 18 categories of cloud offerings on their website, with each category containing several products and services. Categories include compute, storage, databases, networking, security, messaging, mobile, AI, IoT, and analytics.

In addition to products like compute, storage, and databases, AWS now offers development, DevOps, and management tools, such as AWS OpsWorks and AWS CloudFormation. These products offer alternatives to traditional, non-cloud CI/CD/RM workflows for deploying and managing complex application platforms on AWS. Learning the nuances of a growing list of AWS-specific products and workflows, while simultaneously adapting your organization’s DevOps practices to them, has given rise to a whole new category of DevOps engineering specialization centered on AWS. Cloud-centric DevOps engineering specialization is also seen with other large cloud providers, such as Microsoft Azure and Google Cloud Platform.
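As a simple illustration of this shift, infrastructure that once required manual provisioning or bespoke scripts can be described in a versioned CloudFormation template and rolled out from a pipeline with a single AWS CLI call. The sketch below is a minimal, hypothetical example; the template file name and stack name are placeholders, not a prescription for any particular workflow.

```
# Deploy (or update) an application stack from a versioned CloudFormation template.
# The template file and stack name are hypothetical placeholders.
aws cloudformation deploy \
  --template-file sample-app-stack.yaml \
  --stack-name sample-app-dev \
  --capabilities CAPABILITY_IAM
```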

Security

Call it DevSecOps, SecDevOps, SecOps, or Rugged DevOps, the intersection of DevOps and Security is bustling these days. As the complexity of modern application platforms grows, along with the sophistication of threats from hackers and the demands of government and industry compliance, security is no longer an afterthought or a process run in seeming isolation from software development and DevOps. In my recent experience, it is not uncommon to see IT security specialists actively participating on Agile development teams and embedded on DevOps and Platform teams.

Security practices are now commonly part of the entire software development lifecycle, including enterprise architecture, software development, data governance, continuous testing, and infrastructure as code. Modern application platforms must be designed from day one to be bug-free, performant, compliant, and secure.

Take, for example, penetration (PEN) testing. Once a mostly manual process performed close to release time, testing applications and software-defined infrastructure for security vulnerabilities can now, thanks to evolving DevOps practices, be done early and often in the software development lifecycle. Easily automatable and configurable cloud- and non-cloud-based tools like SonarQube, Veracode, Qualys, OWASP ZAP, and Chef Compliance, amongst others, are frequently incorporated into continuous integration workflows by development and DevOps teams. There is no longer an excuse for security vulnerabilities to be discovered just before release, or worse, in Production.
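As a rough sketch of what this looks like in practice, the commands below show how a static SonarQube scan and a dynamic OWASP ZAP baseline scan might be invoked from a continuous integration job. The project key, SonarQube server URL, and target URL are hypothetical placeholders, and the exact invocation will vary with your CI tooling.

```
# Static analysis: scan the project source with SonarQube
# (assumes a reachable SonarQube server; project key and URL are placeholders).
sonar-scanner \
  -Dsonar.projectKey=sample-app \
  -Dsonar.host.url=https://sonarqube.example.com

# Dynamic analysis: run an OWASP ZAP baseline scan against a deployed test environment
# (target URL is a placeholder).
docker run --rm -t owasp/zap2docker-stable zap-baseline.py \
  -t https://sample-app.dev.example.com
```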

Modern Platforms

Along with the Cloud, modern application development trends, like the rise of the platform, microservices (or service-based architectures), containerization, NoSQL databases, and container orchestration, have likely provided the majority of fuel for the recent explosive growth of DevOps. Although innovative IT organizations have fostered these technologies for the past few years, their growth and relative maturity have risen sharply in the last 12 to 18 months.

No longer the stuff of Unicorns, such as Amazon, Etsy, and Netflix, platforms based on Evolutionary Architectures are being built and deployed by an increasing number of everyday organizations. Although complexity continues to rise, the barrier to entry has been greatly reduced by technologies found across the SDLC, including Node, Spring Boot, Docker, Consul, Terraform, and Kubernetes, amongst others.

As modern platforms become more commonplace, the DevOps practices around them continue to mature and become specialized. Building, testing, deploying, and actively managing a large-scale, microservice-based application with potentially hundreds of moving parts on a container orchestration platform requires highly specialized knowledge. The ability to ‘do DevOps at scale’ is critical.

Legacy Systems

Legacy systems as an emerging technology trend in DevOps? As the race to build the ‘next generation’ of application platforms accelerates to meet the demands of the business and their customers, there is a growing need to support ‘last generation’ systems. Many IT organizations support multiple legacy systems, ranging in age from five years old to more than 25 years old. These monolithic legacy systems, which often contain a company’s secret sauce, such as complex business algorithms and decision engines, are built on outmoded technology stacks, often lack vendor support, and require separate processes to build, test, deploy, and manage. Worse, the knowledge required to maintain these systems is frequently known only to a shrinking group of IT resources. Who wants to work on the old system with so many bright and shiny toys being built?

As a cost-effective means to maintain these legacy systems, organizations are turning to modern DevOps practices. Although not possible to the same degree, depending on the legacy technology, these practices include the use of source control, various types of automated testing, automated provisioning, deployment, and configuration of system components, and infrastructure automation (DevOps for legacy systems – Infosys white paper).

Although not specifically a DevOps practice, organizations are also implementing content collaboration systems, like Atlassian Confluence and Microsoft SharePoint, to document legacy system architectures and manual processes before those resources and their knowledge are lost.

To be Continued

In a future post, we will look at additional emerging technologies and their impact on DevOps, including:

  • Big Data
  • Internet of Things (IoT/IIoT)
  • Artificial Intelligence (AI)
  • Machine Learning
  • COTS/SaaS

All opinions in this post are my own and not necessarily the views of my current employer or their clients.


Illustration Copyright: Andreus / 123RF Stock Photo

