In this two-part post, we will explore the creation of a GKE cluster, replete with the latest version of Istio, often referred to as IoK (Istio on Kubernetes). We will then deploy, integration test, and promote an application across multiple environments within the cluster.
Application Environment Management
Container orchestration engines, such as Kubernetes, have revolutionized the deployment and management of microservice-based architectures. Combined with a Service Mesh, such as Istio, Kubernetes provides a secure, instrumented, enterprise-grade platform for modern, distributed applications.
One of many challenges with any platform, even one built on Kubernetes, is managing multiple application environments. Whether applications run on bare-metal, virtual machines, or within containers, deploying to and managing multiple application environments increases operational complexity.
As Agile software development practices continue to be adopted within organizations, the need for multiple, ephemeral, on-demand environments also grows. Traditional environments that were once composed only of Development, Test, and Production have expanded in enterprises to include a dozen or more environments, to support the many stages of the modern software development lifecycle. Current application environments often include Continuous Integration and Delivery (CI/CD), Sandbox, Development, Integration Testing (QA), User Acceptance Testing (UAT), Staging, Performance, Production, Disaster Recovery (DR), and Hotfix. Each environment requires its own compute, security, networking, and configuration, along with corresponding dependencies, such as databases and message queues.
Environments and Kubernetes
There are various infrastructure architectural patterns employed by Operations and DevOps teams to provide Kubernetes-based application environments to Development teams. One pattern consists of separate physical Kubernetes clusters. Separate clusters provide a high level of isolation. Isolation offers many advantages, including increased performance and security, the ability to tune each cluster's compute resources to meet differing SLAs, and a reduced blast radius when things go terribly wrong. Conversely, separate clusters often result in increased infrastructure costs, additional operational overhead, and more complex deployment strategies. This pattern is often seen in heavily regulated, compliance-driven organizations, where security, auditability, and separation of duties are paramount.
Namespaces
An alternative to separate physical Kubernetes clusters is virtual clusters. Virtual clusters are created using Kubernetes Namespaces. According to Kubernetes documentation, ‘Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces’.
In most enterprises, Operations and DevOps teams deliver a combination of both virtual and physical Kubernetes clusters. For example, lower environments, such as those used for Development, Test, and UAT, often reside on the same physical cluster, each in a separate virtual cluster (namespace). At the same time, environments such as Performance, Staging, Production, and DR, often require the level of isolation only achievable with physical Kubernetes clusters.
In the Cloud, physical clusters may be further isolated and secured using separate cloud accounts. For example, with AWS you might have a Non-Production AWS account and a Production AWS account, both managed by an AWS Organization.
In a multi-environment scenario, a single physical cluster would contain multiple namespaces, into which separate versions of an application or applications are independently deployed, accessed, and tested. Below we see a simple example of a single Kubernetes non-prod cluster on the left, containing multiple versions of different microservices, deployed across three namespaces. You would likely see this type of deployment pattern as applications are deployed, tested, and promoted across lower environments, before being released to Production.
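For illustration only, the same Deployment manifest can be applied to two virtual clusters on one physical cluster simply by targeting different namespaces; the manifest file name and label below are hypothetical, not part of this post's project.

# hypothetical example: one manifest, two virtual clusters (namespaces) on the same physical cluster
kubectl apply -f deployment.yaml -n dev
kubectl apply -f deployment.yaml -n test

# view the resulting Pods side by side across namespaces
kubectl get pods --all-namespaces -l app=my-app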
Example Application
To demonstrate the promotion and testing of an application across multiple environments, we will use a simple election-themed microservice, developed for a previous post, Developing Cloud-Native Data-Centric Spring Boot Applications for Pivotal Cloud Foundry. The Spring Boot-based application allows API consumers to create, read, update, and delete candidates, elections, and votes, through an exposed set of resources accessed via RESTful endpoints.
Source Code
All source code for this post can be found on GitHub. The project’s README file contains a list of the Election microservice’s endpoints. To get started quickly, use one of the two following options (gist).
# clone the official v3.0.0 release for this post
git clone --depth 1 --branch v3.0.0 \
  https://github.com/garystafford/spring-postgresql-demo.git \
  && cd spring-postgresql-demo \
  && git checkout -b v3.0.0

# clone the latest version of code (newer than article)
git clone --depth 1 --branch master \
  https://github.com/garystafford/spring-postgresql-demo.git \
  && cd spring-postgresql-demo
Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.
This project includes a kubernetes sub-directory, containing all the Kubernetes resource files and scripts necessary to recreate the example shown in the post. The scripts are designed to be easily adapted to a CI/CD DevOps workflow. You will need to modify the script’s variables to match your own environment’s configuration.
Database
The post’s Spring Boot application relies on a PostgreSQL database. In the previous post, ElephantSQL was used to host the PostgreSQL instance. This time, I have used Amazon RDS for PostgreSQL. Amazon RDS for PostgreSQL and ElephantSQL are equivalent choices. For simplicity, you might also consider a containerized version of PostgreSQL, managed as part of your Kubernetes environment.
Ideally, each environment should have a separate database instance. Separate database instances provide better isolation, fine-grained RBAC, easier test data lifecycle management, and improved performance. However, for this post, a single, shared, minimally-sized RDS instance will suffice.
The PostgreSQL database's sensitive connection information, including the database URL, username, and password, is stored as Kubernetes Secrets, one Secret per namespace, and accessed by the Kubernetes Deployment controllers.
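Below is a minimal sketch of how such a Secret could be created for the dev namespace; the Secret name, key names, and placeholder values are hypothetical, not the project's actual configuration, which lives in the kubernetes sub-directory.

# hypothetical example: store PostgreSQL connection details as a Secret in the dev namespace
# (secret name, keys, and values are placeholders; repeat for each namespace)
kubectl create secret generic election-postgresql-secret \
  --from-literal=database-url="jdbc:postgresql://<your-rds-endpoint>:5432/elections" \
  --from-literal=database-username="<your-username>" \
  --from-literal=database-password="<your-password>" \
  --namespace=dev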
Istio
Although not required, Istio makes the task of managing multiple virtual and physical clusters significantly easier. Following Istio’s online installation instructions, download and install Istio 0.7.1.
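If you do not already have a local copy of Istio, the download step, per Istio's installation instructions at the time of the 0.7.x releases, looked roughly like the following; the install path matches this post's macOS examples.

# download and unpack Istio 0.7.1 (command form used by Istio's install docs at the time)
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=0.7.1 sh -

# add istioctl to your PATH for the kube-inject commands used later in the post
cd istio-0.7.1 && export PATH=$PWD/bin:$PATH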
To create a Google Kubernetes Engine (GKE) cluster with Istio, you could use the gcloud CLI's container clusters create command, followed by installing Istio manually using Istio's supplied Kubernetes resource files. This was the method used in the previous post, Deploying and Configuring Istio on Google Kubernetes Engine (GKE).

Alternatively, you could use Istio's Google Cloud Platform (GCP) Deployment Manager files, along with the gcloud CLI's deployment-manager deployments create command, to create a Kubernetes cluster, replete with Istio, in a single step. Although arguably simpler, the deployment-manager method does not provide the same level of fine-grained control over cluster configuration as the container clusters create method. For this post, the deployment-manager method will suffice.
The latest version of Google Kubernetes Engine available at the time of this post is 1.9.6-gke.0. However, installing this version of Kubernetes Engine using Istio's supplied Deployment Manager Jinja template requires updating the hardcoded value in the istio-cluster.jinja file from 1.9.2-gke.1. This has been updated in the next release of Istio.

Another required change concerns the versions of Istio offered as options in the istio-cluster-jinja.schema file. Specifically, the installIstioRelease configuration variable only offers 0.6.0; the template does not include 0.7.1 as an option. Modify the istio-cluster-jinja.schema file to include the choice of 0.7.1. Optionally, I also set 0.7.1 as the default. This change should also be included in the next version of Istio.
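One quick way to make both edits, assuming the version strings appear as plain literals in the two files, is a simple search-and-replace. The commands below are only a sketch (shown with macOS/BSD sed syntax), not part of the project's scripts; the second command replaces 0.6.0 outright, so edit the schema by hand if you want to keep 0.6.0 as an option.

# sketch only: patch Istio's Deployment Manager files for GKE 1.9.6-gke.0 and Istio 0.7.1
# (assumes the version strings appear as plain literals; review both files before and after)
DM_DIR="/Applications/istio-0.7.1/install/gcp/deployment_manager"
sed -i '' 's/1\.9\.2-gke\.1/1.9.6-gke.0/g' "$DM_DIR/istio-cluster.jinja"
sed -i '' 's/0\.6\.0/0.7.1/g' "$DM_DIR/istio-cluster-jinja.schema"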
There are a limited number of GKE and Istio configuration defaults defined in the istio-cluster.yaml file, all of which can be overridden from the command line.

To optimize the cluster and keep compute costs to a minimum, I have overridden several of the default configuration values using the properties flag with the gcloud CLI's deployment-manager deployments create command. The README file provided by Istio explains how to use this feature. Configuration changes include the name of the cluster, the version of Istio (0.7.1), the number of nodes (2), the GCP zone (us-east1-b), and the node instance type (n1-standard-1). I also disabled automatic sidecar injection and chose not to install the Istio sample book application onto the cluster (gist).
# change to match your environment
ISTIO_HOME="/Applications/istio-0.7.1"
GCP_DEPLOYMENT_MANAGER="$ISTIO_HOME/install/gcp/deployment_manager"
GCP_PROJECT="springdemo-199819"
GKE_CLUSTER="election-nonprod-cluster"
GCP_ZONE="us-east1-b"
ISTIO_VER="0.7.1"
NODE_COUNT="2"
INSTANCE_TYPE="n1-standard-1"

# deploy gke istio cluster
gcloud deployment-manager deployments create springdemo-istio-demo-deployment \
  --template=$GCP_DEPLOYMENT_MANAGER/istio-cluster.jinja \
  --properties "gkeClusterName:$GKE_CLUSTER,installIstioRelease:$ISTIO_VER,"\
"zone:$GCP_ZONE,initialNodeCount:$NODE_COUNT,instanceType:$INSTANCE_TYPE,"\
"enableAutomaticSidecarInjection:false,enableMutualTLS:true,enableBookInfoSample:false"

# get creds for cluster
gcloud container clusters get-credentials $GKE_CLUSTER \
  --zone $GCP_ZONE --project $GCP_PROJECT

# required dashboard access
kubectl apply -f ./roles/clusterrolebinding-dashboard.yaml

# use dashboard token to sign into dashboard:
kubectl -n kube-system describe secret kubernetes-dashboard-token
Cluster Provisioning
To provision the GKE cluster and deploy Istio, first modify the variables in the part1-create-gke-cluster.sh file (shown above), then execute the script. The script also retrieves your cluster's credentials, to enable command line interaction with the cluster using the kubectl CLI.
Once complete, validate the version of Istio by examining Istio’s Docker image versions, using the following command (gist).
kubectl get pods --all-namespaces -o jsonpath="{..image}" | \
  tr -s '[[:space:]]' '\n' | sort | uniq -c | \
  egrep -oE "\b(docker.io/istio).*\b"
The result should be a list of Istio 0.7.1 Docker images.
The new cluster should be running GKE version 1.9.6-gke.0. This can be confirmed using the following command (gist).
gcloud container clusters describe election-nonprod-cluster | \
  egrep currentMasterVersion
Or, from the GCP Cloud Console.
The new GKE cluster should be composed of (2) n1-standard-1 nodes, running in the us-east1-b zone.
As part of the deployment, all of the separate Istio components should be running within the istio-system namespace.
As part of the deployment, an external IP address and a load balancer were provisioned by GCP and associated with the Istio Ingress. GCP’s Deployment Manager should have also created the necessary firewall rules for cluster ingress and egress.
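A quick way to confirm both of these details is to list the Pods and Services in the istio-system namespace; the commands below are a simple sanity check, not part of the project's scripts.

# confirm the Istio control-plane components are running
kubectl get pods -n istio-system

# confirm the istio-ingress service has been assigned an external IP by the GCP load balancer
kubectl get svc istio-ingress -n istio-system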
Building the Environments
Next, we will create three namespaces, dev, test, and uat, which represent three non-production environments. Each environment consists of a Kubernetes Namespace, Istio Ingress, and Secret. The three environments are deployed using the part2-create-environments.sh script.
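For illustration, the per-environment provisioning boils down to something like the following loop. This is only a sketch, not the project's actual part2-create-environments.sh; the manifest paths shown are hypothetical, and the real resource files live in the project's kubernetes sub-directory.

# sketch only: create each non-production environment (Namespace, Secret, Istio Ingress)
# manifest file names below are hypothetical
for ENV in dev test uat; do
  kubectl create namespace $ENV
  kubectl apply -n $ENV -f ./secrets/secret-$ENV.yaml     # PostgreSQL connection details
  kubectl apply -n $ENV -f ./ingress/ingress-$ENV.yaml    # Istio Ingress for the namespace
done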
Deploying Election v1
For this demonstration, we will assume v1 of the Election service has been previously promoted, tested, and released to Production. Hence, we would expect v1 to be deployed to each of the lower environments. Additionally, a new v2 of the Election service has been developed and tested locally using Minikube. It is ready for deployment to the three environments and will undergo integration testing (detailed in Part Two of the post).
If you recall from our GKE/Istio configuration, we chose manual sidecar injection of the Istio proxy. Therefore, all election deployment scripts perform a kube-inject command. To connect to our external Amazon RDS database, this kube-inject command requires the includeIPRanges flag, which contains two cluster configuration values, the cluster's IPv4 CIDR (clusterIpv4Cidr) and the service's IPv4 CIDR (servicesIpv4Cidr).
Before deployment, we export the includeIPRanges value as an environment variable, which will be used by the deployment scripts, using the following command: export IP_RANGES=$(sh ./get-cluster-ip-ranges.sh). The get-cluster-ip-ranges.sh script is shown below (gist).
# run this command line:
# export IP_RANGES=$(sh ./get-cluster-ip-ranges.sh)

# capture the clusterIpv4Cidr and servicesIpv4Cidr values
# required for manual sidecar injection with kube-inject

# change to match your environment
GCP_PROJECT="springdemo-199819"
GKE_CLUSTER="election-nonprod-cluster"
GCP_ZONE="us-east1-b"

CLUSTER_IPV4_CIDR=$(gcloud container clusters describe ${GKE_CLUSTER} \
  --zone ${GCP_ZONE} --project ${GCP_PROJECT} \
  | egrep clusterIpv4Cidr | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{2}\b")

SERVICES_IPV4_CIDR=$(gcloud container clusters describe ${GKE_CLUSTER} \
  --zone ${GCP_ZONE} --project ${GCP_PROJECT} \
  | egrep servicesIpv4Cidr | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{2}\b")

export IP_RANGES="$CLUSTER_IPV4_CIDR,$SERVICES_IPV4_CIDR"
echo $IP_RANGES
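To illustrate how the exported value is consumed, a deployment step using manual sidecar injection looks roughly like the following; the manifest path is hypothetical, and the project's own deployment scripts remain the authoritative version.

# sketch only: manually inject the Istio sidecar, excluding traffic to the external RDS database,
# then apply the augmented manifest to the dev namespace (manifest path is hypothetical)
export IP_RANGES=$(sh ./get-cluster-ip-ranges.sh)
istioctl kube-inject --includeIPRanges=$IP_RANGES \
  -f ./deployments/deployment-election-v1.yaml | \
  kubectl apply -n dev -f -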
Using this method with manual sidecar injection is discussed in the previous post, Deploying and Configuring Istio on Google Kubernetes Engine (GKE).
To deploy v1 of the Election service to all three namespaces, execute the part3-deploy-v1-all-envs.sh script.
We should now have two instances of v1 of the Election service running in each of the dev, test, and uat namespaces, for a total of six election-v1 Kubernetes Pods.
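To verify, you can list the Election Pods in each namespace; this check assumes the Deployments label their Pods with app=election and version=v1, the same labels used by the Route Rule shown later in this post.

# confirm two election-v1 Pods are running in each of the three namespaces
for NS in dev test uat; do
  kubectl get pods -n $NS -l app=election,version=v1
done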
HTTP Request Routing
Before deploying additional versions of the Election service in Part Two of this post, we should understand how external HTTP requests will be routed to different versions of the Election service, in multiple namespaces. In the post's simple example, we have a matrix of three namespaces and two versions of the Election service. That means we need a method to route external traffic to up to six different Election service versions. There are multiple ways to solve this problem, each with its own pros and cons. For this post, I found a combination of DNS and HTTP request rewriting to be most effective.
DNS
First, to route external HTTP requests to the correct namespace, we will use subdomains. Using my current DNS management solution, Azure DNS, I create three new A records for my registered domain, voter-demo.com. There is one A record for each namespace, including api.dev, api.test, and api.uat.
All three subdomains should resolve to the single external IP address assigned to the cluster’s load balancer.
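A quick, optional check that DNS is configured correctly (the domain shown is, of course, specific to this post's example):

# each subdomain should return the same external IP as the istio-ingress load balancer
for SUB in api.dev api.test api.uat; do
  dig +short ${SUB}.voter-demo.com
done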
As part of the environment creation, the script deployed an Istio Ingress to each environment. Each Ingress accepts traffic based on a match to the request URL (gist).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  labels:
    name: dev-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: api.dev.voter-demo.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: election
          servicePort: 8080
The istio-ingress service load balancer, running in the istio-system namespace, routes inbound external traffic, based on the Request URL, to the Istio Ingress in the appropriate namespace. The Istio Ingress in the namespace then directs the traffic to one of the Kubernetes Pods, containing the Election service and the Istio sidecar proxy.
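If you want to exercise this routing before DNS has propagated, you can call the load balancer directly and supply the Host header yourself; this is only a convenience check, not part of the project's scripts.

# hit the istio-ingress load balancer directly, simulating a request to the dev subdomain
INGRESS_IP=$(kubectl get svc istio-ingress -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s -H "Host: api.dev.voter-demo.com" http://$INGRESS_IP/v1/elections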
HTTP Rewrite
To direct the HTTP request to v1 or v2 of the Election service, an Istio Route Rule is used. As part of the environment creation, along with the Namespace and Ingress resources, we also deployed an Istio Route Rule to each environment. This particular Route Rule examines the HTTP request URL for a /v1/ or /v2/ sub-collection resource. If it finds the sub-collection resource, it performs an HTTPRewrite, removing the sub-collection resource from the HTTP request. The Route Rule then directs the HTTP request to the appropriate version of the Election service, v1 or v2 (gist).
According to Istio, ‘if there are multiple registered instances with the specified tag(s), they will be routed to based on the load balancing policy (algorithm) configured for the service (round-robin by default).’ We are using the default load balancing algorithm to distribute requests across multiple copies of each Election service.
# kubectl apply -f ./routerules/routerule-election-v1.yaml -n dev
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: election-v1
spec:
  destination:
    name: election
  match:
    request:
      headers:
        uri:
          prefix: /v1/
  rewrite:
    uri: /
  route:
  - labels:
      app: election
      version: v1
The final external HTTP request routing for the Election service in the Non-Production GKE cluster is shown on the left in the diagram below. Every Election service Pod also contains an Istio sidecar proxy instance.
Below are some examples of HTTP GET requests that would be successfully routed to our Election service, using the above-described routing strategy (gist).
# details of an election, id 5, requested from v1 of elections in dev
curl http://api.dev.voter-demo.com/v1/elections/5

# list of candidates, last name Obama, requested from v2 of elections in test
curl http://api.test.voter-demo.com/v2/candidates/search/findByLastName?lastName=Obama
# process start time metric, requested from v2 of elections in uat
curl http://api.uat.voter-demo.com/v2/actuator/metrics/process.start.time
# vote summary, requested from v1 of elections in production
curl http://api.voter-demo.com/v1/vote-totals/summary/2012%20Presidential%20Election
Part Two
In Part One of this post, we created the Kubernetes cluster on the Google Cloud Platform, installed Istio, provisioned a PostgreSQL database, and configured DNS for routing. Under the assumption that v1 of the Election microservice had already been released to Production, we deployed v1 to each of the three namespaces.
In Part Two of this post, we will learn how to utilize the sophisticated API testing capabilities of Postman and Newman to ensure v2 is ready for UAT and release to Production. We will deploy and perform integration testing of a new v2 of the Election microservice, locally, on Kubernetes Minikube. Once we are confident v2 is functioning as intended, we will promote and test v2 across the dev, test, and uat namespaces.
All opinions expressed in this post are my own, and not necessarily the views of my current or past employers, or their clients.