Posts Tagged Istio 1.0
Automating Multi-Environment Kubernetes Virtual Clusters with Google Cloud DNS, Auth0, and Istio 1.0
Posted by Gary A. Stafford in Bash Scripting, Build Automation, Cloud, DevOps, Enterprise Software Development, GCP, Java Development, Kubernetes, Software Development on January 19, 2019
Kubernetes supports multiple virtual clusters within the same physical cluster. These virtual clusters are called Namespaces. Namespaces are a way to divide cluster resources between multiple users. Many enterprises use Namespaces to divide the same physical Kubernetes cluster into different virtual software development environments as part of their overall Software Development Lifecycle (SDLC). This practice is commonly used in ‘lower environments’ or ‘non-prod’ (not Production) environments. These environments commonly include Continuous Integration and Delivery (CI/CD), Development, Integration, Testing/Quality Assurance (QA), User Acceptance Testing (UAT), Staging, Demo, and Hotfix. Namespaces provide a basic form of what is referred to as soft multi-tenancy.
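For example, a set of SDLC environments can be carved out of a single cluster with nothing more than kubectl. Below is a minimal sketch; the environment names are illustrative, and later in this post we will create the Namespaces from a YAML manifest instead.

# Create one virtual cluster (Namespace) per environment
for env in dev test uat; do
  kubectl create namespace $env
done

# The new Namespaces appear alongside the cluster's built-in defaults
kubectl get namespaces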
Generally, the security boundaries and performance requirements between non-prod environments, within the same enterprise, are less restrictive than Production or Disaster Recovery (DR) environments. This allows for multi-tenant environments, while Production and DR are normally single-tenant environments. In order to approximate the performance characteristics of Production, the Performance Testing environment is also often isolated to a single tenant. A typical enterprise would minimally have a non-prod, performance, production, and DR environment.
Using Namespaces to create virtual separation on the same physical Kubernetes cluster provides enterprises with more efficient use of virtual compute resources, reduces Cloud costs, eases the management burden, and often expedites and simplifies the release process.
Demonstration
In this post, we will re-examine the topic of virtual clusters, similar to the recent post, Managing Applications Across Multiple Kubernetes Environments with Istio: Part 1 and Part 2. We will focus specifically on automating the creation of the virtual clusters on GKE with Istio 1.0, managing the Google Cloud DNS records associated with the cluster’s environments, and enabling both HTTPS and token-based OAuth access to each environment. For our demonstration, we will use the Storefront API, featured in the previous three posts, including Building a Microservices Platform with Confluent Cloud, MongoDB Atlas, Istio, and Google Kubernetes Engine.
Source Code
The source code for this post may be found on the gke branch of the storefront-kafka-docker GitHub repository.

git clone --branch gke --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/storefront-kafka-docker.git
Source code samples in this post are displayed as GitHub Gists, which may not display correctly on all mobile and social media browsers, such as LinkedIn.
This project contains all the code to deploy and configure the GKE cluster and Kubernetes resources.
To follow along, you will need to register your own domain, arrange for an Auth0, or alternative, authentication and authorization service, and obtain an SSL/TLS certificate.
SSL/TLS Wildcard Certificate
In the recent post, Securing Your Istio Ingress Gateway with HTTPS, we examined how to create and apply an SSL/TLS certificate to our GKE cluster, to secure communications. Although we are only creating a non-prod cluster, it is more and more common to use SSL/TLS everywhere, especially in the Cloud. For this post, I have registered a single wildcard certificate, *.api.storefront-demo.com. This certificate will cover the three second-level subdomains associated with the virtual clusters: dev.api.storefront-demo.com, test.api.storefront-demo.com, and uat.api.storefront-demo.com. Setting the environment name, such as dev.*, as the second-level subdomain of my storefront-demo domain, following the first-level api.* subdomain, makes the use of a wildcard certificate much easier.
As shown below, my wildcard certificate contains the Subject Name and Subject Alternative Name (SAN) of *.api.storefront-demo.com. For Production, api.storefront-demo.com, I prefer to use a separate certificate.
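To confirm a certificate's coverage before applying it, you can decode it locally with openssl. A quick sketch, assuming the certificate file is named certificate.crt, as in the SSL For Free package described later:

# Print the Subject and the Subject Alternative Name entries
openssl x509 -in certificate.crt -noout -subject
openssl x509 -in certificate.crt -noout -text \
  | grep -A 1 'Subject Alternative Name'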
Create GKE Cluster
With your certificate in hand, create the non-prod Kubernetes cluster. Below, the script creates a minimally-sized, three-node, multi-zone GKE cluster, running on GCP, with Kubernetes Engine cluster version 1.11.5-gke.5 and Istio on GKE version 1.0.3-gke.0. I have enabled the master authorized networks option to secure my GKE cluster master endpoint. For the demo, you can add your own IP address CIDR to the MASTER_AUTH_NETS constant (e.g., 1.2.3.4/32), or delete lines 30–31 to remove the restriction (gist).
- Lines 16–39: Create a 3-node, multi-zone GKE cluster with Istio;
- Line 48: Create three non-prod Namespaces: dev, test, and uat;
- Lines 51–53: Enable Istio automatic sidecar injection within each Namespace;
#!/bin/bash
#
# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License
# purpose: Create non-prod Kubernetes cluster on GKE

# Constants - CHANGE ME!
readonly PROJECT='gke-confluent-atlas'
readonly CLUSTER='storefront-api-non-prod'
readonly REGION='us-central1'
readonly MASTER_AUTH_NETS='<your_ip_cidr>'
readonly NAMESPACES=( 'dev' 'test' 'uat' )

# Build a 3-node, single-region, multi-zone GKE cluster
time gcloud beta container \
  --project $PROJECT clusters create $CLUSTER \
  --region $REGION \
  --no-enable-basic-auth \
  --no-issue-client-certificate \
  --cluster-version "1.11.5-gke.5" \
  --machine-type "n1-standard-2" \
  --image-type "COS" \
  --disk-type "pd-standard" \
  --disk-size "100" \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --num-nodes "1" \
  --enable-stackdriver-kubernetes \
  --enable-ip-alias \
  --enable-master-authorized-networks \
  --master-authorized-networks $MASTER_AUTH_NETS \
  --network "projects/${PROJECT}/global/networks/default" \
  --subnetwork "projects/${PROJECT}/regions/${REGION}/subnetworks/default" \
  --default-max-pods-per-node "110" \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,Istio \
  --istio-config auth=MTLS_STRICT \
  --metadata disable-legacy-endpoints=true \
  --enable-autoupgrade \
  --enable-autorepair

# Get cluster creds
gcloud container clusters get-credentials $CLUSTER \
  --region $REGION --project $PROJECT

kubectl config current-context

# Create Namespaces
kubectl apply -f ./resources/other/namespaces.yaml

# Enable automatic Istio sidecar injection
for namespace in ${NAMESPACES[@]}; do
  kubectl label namespace $namespace istio-injection=enabled
done
If successful, the results should look similar to the output, below.
The cluster will contain a pool of three minimally-sized VMs, the Kubernetes nodes.
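A couple of quick kubectl checks will confirm the cluster is ready; a sketch, whose exact output will vary with your project and zones:

# The three nodes, one per zone, should report a Ready status
kubectl get nodes -o wide

# The dev, test, and uat Namespaces should carry the sidecar injection label
kubectl get namespaces -L istio-injection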
Deploying Resources
The Istio Gateway and three VirtualService resources are the primary resources responsible for routing the traffic from the ingress gateway to the Services within the multiple Namespaces. Both resource types are part of Istio’s newer v1alpha3 routing API (gist).
- Lines 9–16: Port config that only accepts HTTPS traffic on port 443 using TLS;
- Lines 18–20: The three subdomains for which the Gateway accepts traffic;
- Lines 28, 63, 98: The single subdomain each of the three VirtualServices matches and routes;
- Lines 39, 47, 65, 74, 82, 90, 109, 117, 125: Routing to FQDN of Storefront API Services within the three Namespaces;
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: storefront-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - dev.api.storefront-demo.com
    - test.api.storefront-demo.com
    - uat.api.storefront-demo.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront-dev
spec:
  hosts:
  - dev.api.storefront-demo.com
  gateways:
  - storefront-gateway
  http:
  - match:
    - uri:
        prefix: /accounts
    route:
    - destination:
        port:
          number: 8080
        host: accounts.dev.svc.cluster.local
  - match:
    - uri:
        prefix: /fulfillment
    route:
    - destination:
        port:
          number: 8080
        host: fulfillment.dev.svc.cluster.local
  - match:
    - uri:
        prefix: /orders
    route:
    - destination:
        port:
          number: 8080
        host: orders.dev.svc.cluster.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront-test
spec:
  hosts:
  - test.api.storefront-demo.com
  gateways:
  - storefront-gateway
  http:
  - match:
    - uri:
        prefix: /accounts
    route:
    - destination:
        port:
          number: 8080
        host: accounts.test.svc.cluster.local
  - match:
    - uri:
        prefix: /fulfillment
    route:
    - destination:
        port:
          number: 8080
        host: fulfillment.test.svc.cluster.local
  - match:
    - uri:
        prefix: /orders
    route:
    - destination:
        port:
          number: 8080
        host: orders.test.svc.cluster.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront-uat
spec:
  hosts:
  - uat.api.storefront-demo.com
  gateways:
  - storefront-gateway
  http:
  - match:
    - uri:
        prefix: /accounts
    route:
    - destination:
        port:
          number: 8080
        host: accounts.uat.svc.cluster.local
  - match:
    - uri:
        prefix: /fulfillment
    route:
    - destination:
        port:
          number: 8080
        host: fulfillment.uat.svc.cluster.local
  - match:
    - uri:
        prefix: /orders
    route:
    - destination:
        port:
          number: 8080
        host: orders.uat.svc.cluster.local
Next, deploy the Istio and Kubernetes resources to the new GKE cluster. For the sake of brevity, we will deploy the same number of instances and the same version of each of the three Storefront API services (Accounts, Orders, Fulfillment) to each of the three non-prod environments (dev, test, uat). In reality, you would have varying numbers of instances of each service, and each environment would contain progressive versions of each service, as part of the SDLC of each microservice (gist).
- Lines 13–14: Deploy the SSL/TLS certificate and the private key;
- Line 17: Deploy the Istio Gateway and three VirtualService resources;
- Lines 20–22: Deploy the Istio Authentication Policy resources to each Namespace;
- Lines 26–37: Deploy the same set of resources to the dev, test, and uat Namespaces;
#!/bin/bash
#
# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License
# purpose: Deploy Kubernetes/Istio resources

# Constants - CHANGE ME!
readonly CERT_PATH=~/Documents/Articles/gke-kafka/sslforfree_non_prod
readonly NAMESPACES=( 'dev' 'test' 'uat' )

# Kubernetes Secret to hold the server's certificate and private key
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --key $CERT_PATH/private.key --cert $CERT_PATH/certificate.crt

# Istio Gateway and three VirtualService resources
kubectl apply -f ./resources/other/istio-gateway.yaml

# End-user auth applied per environment
kubectl apply -f ./resources/other/auth-policy-dev.yaml
kubectl apply -f ./resources/other/auth-policy-test.yaml
kubectl apply -f ./resources/other/auth-policy-uat.yaml

# Loop through each non-prod Namespace (environment)
# Re-use the same resources (incl. credentials) for all environments, just for the demo
for namespace in ${NAMESPACES[@]}; do
  kubectl apply -n $namespace -f ./resources/config/confluent-cloud-kafka-configmap.yaml
  kubectl apply -n $namespace -f ./resources/config/mongodb-atlas-secret.yaml
  kubectl apply -n $namespace -f ./resources/config/confluent-cloud-kafka-secret.yaml
  kubectl apply -n $namespace -f ./resources/other/mongodb-atlas-external-mesh.yaml
  kubectl apply -n $namespace -f ./resources/other/confluent-cloud-external-mesh.yaml
  kubectl apply -n $namespace -f ./resources/services/accounts.yaml
  kubectl apply -n $namespace -f ./resources/services/fulfillment.yaml
  kubectl apply -n $namespace -f ./resources/services/orders.yaml
done
The deployed Storefront API Services should look as follows.
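To verify for yourself what landed in each environment, a short kubectl sketch:

# Each Pod should show 2/2 containers: the service plus its injected Istio sidecar
for namespace in dev test uat; do
  kubectl get deployments,pods,services -n $namespace
done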
Google Cloud DNS
Next, we need to enable DNS access to the GKE cluster using Google Cloud DNS. According to Google, Cloud DNS is a scalable, reliable and managed authoritative Domain Name System (DNS) service running on the same infrastructure as Google. It has low latency, high availability, and is a cost-effective way to make your applications and services available to your users.
Whenever a new GKE cluster is created, a new Network Load Balancer is also created. By default, the load balancer’s front-end is an external IP address.
Using a forwarding rule, traffic directed at the external IP address is redirected to the load balancer’s back-end. The load balancer’s back-end is comprised of three VM instances, which are the three Kubernetes nodes in the GKE cluster.
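The forwarding rule and its external IP address are easy to inspect from the command line; a sketch, using the project and region from the earlier scripts:

# List the forwarding rule fronting the GKE cluster's new load balancer
gcloud compute forwarding-rules list \
  --filter "region:(us-central1)" \
  --project gke-confluent-atlas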
If you are following along with this post’s demonstration, we will assume you have a domain registered and configured with Google Cloud DNS. I am using the storefront-demo.com domain, which I have used in the last three posts to demonstrate Istio and GKE.
Google Cloud DNS has a fully functional web console, part of the Google Cloud Console. However, using the Cloud DNS web console is impractical in a DevOps CI/CD workflow, where Kubernetes clusters, Namespaces, and Workloads are ephemeral. Therefore, we will use the following script. Within the script, we reset the IP address associated with the A records for each of the non-prod subdomains of the storefront-demo.com domain (gist).
- Lines 23–25: Find the previous load balancer’s front-end IP address;
- Lines 27–29: Find the new load balancer’s front-end IP address;
- Line 35: Start the Cloud DNS transaction;
- Lines 37–47: Add the DNS record changes to the transaction;
- Line 49: Execute the Cloud DNS transaction;
#!/bin/bash
#
# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License
# purpose: Update Cloud DNS A Records

# Constants - CHANGE ME!
readonly PROJECT='gke-confluent-atlas'
readonly DOMAIN='storefront-demo.com'
readonly ZONE='storefront-demo-com-zone'
readonly REGION='us-central1'
readonly TTL=300
readonly RECORDS=('dev' 'test' 'uat')

# Make sure any old load balancers were removed
if [ $(gcloud compute forwarding-rules list --filter "region:($REGION)" | wc -l | awk '{$1=$1};1') -gt 2 ]; then
  echo "More than one load balancer detected, exiting script."
  exit 1
fi

# Get load balancer IP address from first record
readonly OLD_IP=$(gcloud dns record-sets list \
  --filter "name=${RECORDS[0]}.api.${DOMAIN}." --zone $ZONE \
  | awk 'NR==2 {print $4}')

readonly NEW_IP=$(gcloud compute forwarding-rules list \
  --filter "region:($REGION)" \
  | awk 'NR==2 {print $3}')

echo "Old LB IP Address: ${OLD_IP}"
echo "New LB IP Address: ${NEW_IP}"

# Update DNS records
gcloud dns record-sets transaction start --zone $ZONE

for record in ${RECORDS[@]}; do
  echo "${record}.api.${DOMAIN}."
  gcloud dns record-sets transaction remove \
    --name "${record}.api.${DOMAIN}." --ttl $TTL \
    --type A --zone $ZONE "${OLD_IP}"
  gcloud dns record-sets transaction add \
    --name "${record}.api.${DOMAIN}." --ttl $TTL \
    --type A --zone $ZONE "${NEW_IP}"
done

gcloud dns record-sets transaction execute --zone $ZONE
The outcome of the script is shown below. Note how changes are executed as part of a transaction, by automatically creating a transaction.yaml file. The file contains the six A record changes, three additions and three deletions, along with an update to the zone’s SOA record serial number. The command executes the transaction and then deletes the transaction.yaml file.
> sh ./part3_set_cloud_dns.sh
Old LB IP Address: 35.193.208.115
New LB IP Address: 35.238.196.231
Transaction started [transaction.yaml].
dev.api.storefront-demo.com.
Record removal appended to transaction at [transaction.yaml].
Record addition appended to transaction at [transaction.yaml].
test.api.storefront-demo.com.
Record removal appended to transaction at [transaction.yaml].
Record addition appended to transaction at [transaction.yaml].
uat.api.storefront-demo.com.
Record removal appended to transaction at [transaction.yaml].
Record addition appended to transaction at [transaction.yaml].
Executed transaction [transaction.yaml] for managed-zone [storefront-demo-com-zone].
Created [https://www.googleapis.com/dns/v1/projects/gke-confluent-atlas/managedZones/storefront-demo-com-zone/changes/53].
ID  START_TIME                STATUS
55  2019-01-16T04:54:14.984Z  pending
Based on my own domain and cluster details, the transaction.yaml file looks as follows. Again, note the six A record changes, three additions followed by three deletions, each set accompanied by an SOA record serial update (gist).
---
additions:
- kind: dns#resourceRecordSet
  name: storefront-demo.com.
  rrdatas:
  - ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 25 21600 3600 259200 300
  ttl: 21600
  type: SOA
- kind: dns#resourceRecordSet
  name: dev.api.storefront-demo.com.
  rrdatas:
  - 35.238.196.231
  ttl: 300
  type: A
- kind: dns#resourceRecordSet
  name: test.api.storefront-demo.com.
  rrdatas:
  - 35.238.196.231
  ttl: 300
  type: A
- kind: dns#resourceRecordSet
  name: uat.api.storefront-demo.com.
  rrdatas:
  - 35.238.196.231
  ttl: 300
  type: A
deletions:
- kind: dns#resourceRecordSet
  name: storefront-demo.com.
  rrdatas:
  - ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 24 21600 3600 259200 300
  ttl: 21600
  type: SOA
- kind: dns#resourceRecordSet
  name: dev.api.storefront-demo.com.
  rrdatas:
  - 35.193.208.115
  ttl: 300
  type: A
- kind: dns#resourceRecordSet
  name: test.api.storefront-demo.com.
  rrdatas:
  - 35.193.208.115
  ttl: 300
  type: A
- kind: dns#resourceRecordSet
  name: uat.api.storefront-demo.com.
  rrdatas:
  - 35.193.208.115
  ttl: 300
  type: A
Confirm DNS Changes
Use the dig command to confirm the DNS records are now correct, and that DNS propagation has occurred. The IP address returned by dig should be the external IP address assigned to the front-end of the Google Cloud Load Balancer.
> dig dev.api.storefront-demo.com +short
35.238.196.231
Or, query all three records at once.
printf "dev.api.storefront-demo.com\ntest.api.storefront-demo.com\nuat.api.storefront-demo.com" \
  > records.txt && dig -f records.txt +short

35.238.196.231
35.238.196.231
35.238.196.231
Optionally, get more verbose output by removing the +short option.
> dig +nocmd dev.api.storefront-demo.com

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30763
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;dev.api.storefront-demo.com.  IN  A

;; ANSWER SECTION:
dev.api.storefront-demo.com. 299 IN A 35.238.196.231

;; Query time: 27 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Wed Jan 16 18:00:49 EST 2019
;; MSG SIZE  rcvd: 72
The resulting records in the Google Cloud DNS management console should look as follows.
JWT-based Authentication
As discussed in the previous post, Istio End-User Authentication for Kubernetes using JSON Web Tokens (JWT) and Auth0, it is typical to restrict access to the Kubernetes cluster, to Namespaces within the cluster, or to Services running within Namespaces, to specific end-users, whether they are humans or other applications. In that previous post, we saw an example of applying a machine-to-machine (M2M) Istio Authentication Policy to only the uat Namespace. This scenario is common when you want to grant outside test teams access to resources in a non-production environment, such as UAT, through an external application. To simulate this scenario, we will apply the following Istio Authentication Policy to the uat Namespace (gist).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: uat
spec:
  peers:
  - mtls: {}
  origins:
  - jwt:
      audiences:
      - "storefront-api-uat"
      issuer: "https://storefront-demo.auth0.com/"
      jwksUri: "https://storefront-demo.auth0.com/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
For the dev and test Namespaces, we will apply an additional, different Istio Authentication Policy. This policy will protect against the possibility of dev and test M2M API consumers interfering with uat M2M API consumers and vice-versa. Below is the dev and test version of the Policy (gist).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: dev
spec:
  peers:
  - mtls: {}
  origins:
  - jwt:
      audiences:
      - "storefront-api-dev-test"
      issuer: "https://storefront-demo.auth0.com/"
      jwksUri: "https://storefront-demo.auth0.com/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
Testing Authentication
Using Postman, with the ‘Bearer Token’ type authentication method, as detailed in the previous post, a call to a Storefront API resource in the uat Namespace should succeed. This also confirms DNS and HTTPS are working properly.
The dev and test Namespaces require different authentication. Trying to use no authentication, or authenticating as a UAT API consumer, will result in a 401 Unauthorized HTTP status, along with the Origin authentication failed. error message.
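Outside of Postman, the same authentication flow can be scripted with curl and jq. Below is a minimal sketch using Auth0's client credentials grant; the client ID and secret placeholders are assumptions, as is the choice of the /accounts/actuator/health endpoint:

# Request a JWT for the UAT audience from Auth0 (hypothetical client credentials)
TOKEN=$(curl -s --request POST \
  --url https://storefront-demo.auth0.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{"client_id":"<your_client_id>","client_secret":"<your_client_secret>","audience":"storefront-api-uat","grant_type":"client_credentials"}' \
  | jq -r '.access_token')

# With a valid token, the call should succeed; without one, expect 401 Unauthorized
curl -i -H "Authorization: Bearer ${TOKEN}" \
  https://uat.api.storefront-demo.com/accounts/actuator/health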
Conclusion
In this brief post, we demonstrated how to create a GKE cluster with Istio 1.0.x, containing three virtual clusters, or Namespaces. Each Namespace represents an environment, which is part of an application’s SDLC. We enforced HTTP over TLS (HTTPS) using a wildcard SSL/TLS certificate. We also enforced end-user authentication using JWT-based OAuth 2.0 with Auth0. Lastly, we provided user-friendly DNS routing to each environment, using Google Cloud DNS. Short of a fully managed API Gateway, like Apigee, and automating the execution of the scripts with Jenkins or Spinnaker, this cluster is ready to provide a functional path to Production for developing our Storefront API.
All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.
Securing Your Istio Ingress Gateway with HTTPS
Posted by Gary A. Stafford in Bash Scripting, Cloud, Enterprise Software Development, GCP, Software Development on January 3, 2019
In the last post, Building a Microservices Platform with Confluent Cloud, MongoDB Atlas, Istio, and Google Kubernetes Engine, we built and deployed a microservice-based, cloud-native API to Google Kubernetes Engine (GKE), with Istio 1.0, on Google Cloud Platform (GCP). For brevity, we neglected a few key API features, required in Production, including HTTPS, OAuth for authentication, request quotas, request throttling, and the integration of a full lifecycle API management tool, like Google Apigee.
In this brief post, we will revisit the previous post’s project. We will disable HTTP, and secure the GKE cluster with HTTPS, using simple TLS, as opposed to mutual TLS authentication (mTLS). This post assumes you have created the GKE cluster and deployed the Storefront API and its associated resources, as explained in the previous post.
What is HTTPS?
According to Wikipedia, Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol (HTTP) for securing communications over a computer network. In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS), or, formerly, its predecessor, Secure Sockets Layer (SSL). The protocol is therefore also often referred to as HTTP over TLS, or HTTP over SSL.
Further, according to Wikipedia, the principal motivation for HTTPS is authentication of the accessed website and protection of the privacy and integrity of the exchanged data while in transit. It protects against man-in-the-middle attacks. The bidirectional encryption of communications between a client and server provides a reasonable assurance that one is communicating without interference by attackers with the website that one intended to communicate with, as opposed to an impostor.
Public Key Infrastructure
According to Comodo, both the TLS and SSL protocols use what is known as an asymmetric Public Key Infrastructure (PKI) system. An asymmetric system uses two keys to encrypt communications, a public key and a private key. Anything encrypted with the public key can only be decrypted by the private key and vice-versa.
Again, according to Wikipedia, a PKI is an arrangement that binds public keys with respective identities of entities, like people and organizations. The binding is established through a process of registration and issuance of certificates at and by a certificate authority (CA).
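The asymmetric relationship is easy to demonstrate with openssl; an illustrative sketch, separate from this post's workflow (note rsautl is deprecated in newer OpenSSL releases):

# Generate a private key, then derive the matching public key
openssl genrsa -out demo-private.key 2048
openssl rsa -in demo-private.key -pubout -out demo-public.key

# Anything encrypted with the public key can only be decrypted with the private key
echo 'secret message' | openssl rsautl -encrypt -pubin -inkey demo-public.key -out message.enc
openssl rsautl -decrypt -inkey demo-private.key -in message.enc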
SSL/TLS Digital Certificate
Again, according to Comodo, when you request an HTTPS connection to a webpage, the website will initially send its SSL certificate to your browser. This certificate contains the public key needed to begin the secure session. Based on this initial exchange, your browser and the website then initiate the SSL handshake (actually, TLS handshake). The handshake involves the generation of shared secrets to establish a uniquely secure connection between yourself and the website. When a trusted SSL digital certificate is used during an HTTPS connection, users will see the padlock icon in the browser’s address bar.
Registered Domain
In order to secure an SSL digital certificate, required to enable HTTPS with the GKE cluster, we must first have a registered domain name. For the last post, and this post, I am using my own personal domain, storefront-demo.com. The domain’s primary A record (‘@’) and all sub-domain A records, such as api.dev, all resolve to the external IP address on the front-end of the GCP load balancer.
For DNS hosting, I happen to be using Azure DNS to host the domain, storefront-demo.com. All DNS hosting services basically work the same way, whether you choose Azure, AWS, GCP, or another third-party provider.
Let’s Encrypt
If you have used Let’s Encrypt before, then you know how easy it is to get free SSL/TLS Certificates. Let’s Encrypt is the first free, automated, and open certificate authority (CA) brought to you by the non-profit Internet Security Research Group (ISRG).
According to Let’s Encrypt, to enable HTTPS on your website, you need to get a certificate from a Certificate Authority (CA); Let’s Encrypt is a CA. In order to get a certificate for your website’s domain from Let’s Encrypt, you have to demonstrate control over the domain. With Let’s Encrypt, you do this using software that uses the ACME protocol, which typically runs on your web host. If you have generated certificates with Let’s Encrypt, you also know that domain validation, by installing the Certbot ACME client, can be a bit daunting, depending on your level of access and technical expertise.
SSL For Free
This is where SSL For Free comes in. SSL For Free acts as a proxy of sorts to Let’s Encrypt. SSL For Free generates certificates using their ACME server by using domain validation. Private Keys are generated in your browser and never transmitted.
SSL For Free offers three domain validation methods:
- Automatic FTP Verification: Enter FTP information to automatically verify the domain;
- Manual Verification: Upload verification files manually to your domain to verify ownership;
- Manual Verification (DNS): Add TXT records to your DNS server;
The third domain validation method, manual verification using DNS, is extremely easy if you have access to your domain’s DNS recordset.
SSL For Free provides TXT records for each domain you are adding to the certificate. Below, I am adding a single domain to the certificate.
Add the TXT records to your domain’s recordset. Shown below is an example of a single TXT record that has been added to my recordset using the Azure DNS service.
SSL For Free then uses the TXT record to validate your domain is actually yours.
With the TXT record in place and validation successful, you can download a ZIPped package containing the certificate, private key, and CA bundle. The CA bundle contains the intermediate and root certificates.
Decoding PEM Encoded SSL Certificate
Using a tool like SSL Shopper’s Certificate Decoder, we can decode our Privacy-Enhanced Mail (PEM) encoded SSL certificates and view all of the certificate’s information. Decoding the information contained in my certificate.crt, I see the following.
Certificate Information:
Common Name: api.dev.storefront-demo.com
Subject Alternative Names: api.dev.storefront-demo.com
Valid From: December 26, 2018
Valid To: March 26, 2019
Issuer: Let's Encrypt Authority X3, Let's Encrypt
Serial Number: 03a5ec86bf79de65fb679ee7741ba07df1e4
Decoding the information contained in my ca_bundle.crt, I see the following.
Certificate Information:
Common Name: Let's Encrypt Authority X3
Organization: Let's Encrypt
Country: US
Valid From: March 17, 2016
Valid To: March 17, 2021
Issuer: DST Root CA X3, Digital Signature Trust Co.
Serial Number: 0a0141420000015385736a0b85eca708
The Let’s Encrypt intermediate certificate is also cross-signed by another certificate authority, IdenTrust, whose root is already trusted in all major browsers. IdenTrust cross-signs the Let’s Encrypt intermediate certificate using their DST Root CA X3, hence the issuer shown above.
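You can confirm the chain of trust locally by validating the certificate against the downloaded CA bundle; a quick sketch, assuming the file names from the SSL For Free package:

# Should print 'certificate.crt: OK' if the chain is intact
openssl verify -CAfile ca_bundle.crt certificate.crt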
Configure Istio Ingress Gateway
Unzip the sslforfree.zip package and place the individual files in a location you have access to from the command line.
unzip -l ~/Downloads/sslforfree.zip
Archive:  /Users/garystafford/Downloads/sslforfree.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
     1943  12-26-2018 18:35   certificate.crt
     1707  12-26-2018 18:35   private.key
     1646  12-26-2018 18:35   ca_bundle.crt
---------                     -------
     5296                     3 files
Following the process outlined in the Istio documentation, Securing Gateways with HTTPS, run the following command. This will place the istio-ingressgateway-certs Secret in the istio-system namespace, on the GKE cluster.
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --key path_to_files/sslforfree/private.key \
  --cert path_to_files/sslforfree/certificate.crt
Modify the existing Istio Gateway from the previous project, istio-gateway.yaml. Remove the HTTP port configuration item and replace it with the HTTPS protocol item (gist). Redeploy the Istio Gateway to the GKE cluster.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: storefront-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
# - port:
#     number: 80
#     name: http
#     protocol: HTTP
#   hosts:
#   - api.dev.storefront-demo.com
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - api.dev.storefront-demo.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront-dev
spec:
  hosts:
  - api.dev.storefront-demo.com
  gateways:
  - storefront-gateway
  http:
  - match:
    - uri:
        prefix: /accounts
    route:
    - destination:
        port:
          number: 8080
        host: accounts.dev.svc.cluster.local
  - match:
    - uri:
        prefix: /fulfillment
    route:
    - destination:
        port:
          number: 8080
        host: fulfillment.dev.svc.cluster.local
  - match:
    - uri:
        prefix: /orders
    route:
    - destination:
        port:
          number: 8080
        host: orders.dev.svc.cluster.local
By deploying the new istio-ingressgateway-certs Secret and redeploying the Gateway, the certificate and private key were deployed to the /etc/istio/ingressgateway-certs/ directory of the istio-proxy container, running on the istio-ingressgateway Pod. To confirm both the certificate and private key were deployed correctly, run the following command.
kubectl exec -it -n istio-system \
  $(kubectl -n istio-system get pods \
    -l istio=ingressgateway \
    -o jsonpath='{.items[0].metadata.name}') \
  -- ls -l /etc/istio/ingressgateway-certs/

lrwxrwxrwx 1 root root 14 Jan  2 17:53 tls.crt -> ..data/tls.crt
lrwxrwxrwx 1 root root 14 Jan  2 17:53 tls.key -> ..data/tls.key
That’s it. We should now have simple TLS enabled on the Istio Gateway, providing bidirectional encryption of communications between a client (Storefront API consumer) and server (Storefront API running on the GKE cluster). Users accessing the API will now have to use HTTPS.
Confirm HTTPS is Working
After completing the deployment, as outlined in the previous post, test the Storefront API by using HTTP, first. Since we removed the HTTP port configuration item from the Istio Gateway, the HTTP request should fail with a connection refused error. Insecure traffic is no longer allowed by the Storefront API.
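The same negative test can be run from the command line; a sketch, where the exact error wording will vary by curl version:

# With the HTTP server block removed, nothing listens on port 80; expect a connection refused error
curl -I http://api.dev.storefront-demo.com/accounts/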
Now try switching from HTTP to HTTPS. The page should be displayed and the black lock icon should appear in the browser’s address bar. Clicking on the lock icon, we will see the SSL certificate, used by the GKE cluster is valid.
By clicking on the valid certificate indicator, we may observe more details about the SSL certificate, used to secure the Storefront API. Observe the certificate is issued by Let’s Encrypt Authority X3. It is valid for 90 days from its time of issuance. Let’s Encrypt only issues certificates with a 90-day lifetime. Observe the certificate is signed using SHA-256 with RSA (Rivest–Shamir–Adleman).
In Chrome, we can also use the Developer Tools Security tab to inspect the certificate. The certificate is recognized as valid and trusted. Also important, note the connection to this Storefront API is encrypted and authenticated using TLS 1.2 (a strong protocol), ECDHE_RSA with X25519 (a strong key exchange), and AES_128_GCM (a strong cipher). According to How’s My SSL?, TLS 1.2 is the latest version of TLS. The TLS 1.2 protocol provides access to advanced cipher suites that support elliptical curve cryptography and AEAD block cipher modes. TLS 1.2 is an improvement on previous TLS 1.1, 1.0, and SSLv3 or earlier.
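To inspect the negotiated protocol and cipher outside of Chrome, openssl's s_client is handy; a sketch, noting the -brief flag requires OpenSSL 1.1.0 or newer:

# Print a short summary of the negotiated TLS session parameters
echo | openssl s_client -brief -connect api.dev.storefront-demo.com:443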
Lastly, the best way to really understand what is happening with HTTPS, the Storefront API, and Istio is to verbosely curl an API endpoint.
curl -Iv https://api.dev.storefront-demo.com/accounts/
Using the above curl command, we can see exactly how the client successfully verifies the server, negotiates a secure HTTP/2 connection (HTTP/2 over TLS 1.2), and makes a request (gist).
- Line 3: DNS resolution of the URL to the external IP address of the GCP load-balancer
- Line 3: HTTPS traffic is routed to TCP port 443
- Lines 4–5: Application-Layer Protocol Negotiation (ALPN) starts to occur with the server
- Lines 7–9: The certificate verify locations are set
- Lines 10–20: TLS handshake is performed, successfully, using the TLS 1.2 protocol
- Line 20: ChaCha20 is the stream cipher and Poly1305 is the authenticator in the negotiated ChaCha20-Poly1305 cipher suite
- Lines 22–27: SSL certificate details
- Line 28: Certificate verified
- Lines 29–38: Establishing the HTTP/2 connection with the server
- Lines 33–36: Request headers
- Lines 39–46: Response headers containing the expected 204 HTTP return code
* Trying 35.226.121.90...
* TCP_NODELAY set
* Connected to api.dev.storefront-demo.com (35.226.121.90) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
    CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=api.dev.storefront-demo.com
*  start date: Dec 26 22:35:31 2018 GMT
*  expire date: Mar 26 22:35:31 2019 GMT
*  subjectAltName: host "api.dev.storefront-demo.com" matched cert's "api.dev.storefront-demo.com"
*  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7ff997006600)
> HEAD /accounts/ HTTP/2
> Host: api.dev.storefront-demo.com
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 204
HTTP/2 204
< date: Fri, 04 Jan 2019 03:42:14 GMT
date: Fri, 04 Jan 2019 03:42:14 GMT
< x-envoy-upstream-service-time: 23
x-envoy-upstream-service-time: 23
< server: envoy
server: envoy
<
* Connection #0 to host api.dev.storefront-demo.com left intact
Mutual TLS
Istio also supports mutual authentication using the TLS protocol, known as mutual TLS authentication (mTLS), between external clients and the gateway, as outlined in the Istio 1.0 documentation. According to Wikipedia, mutual authentication or two-way authentication refers to two parties authenticating each other at the same time. Mutual authentication is a default mode of authentication in some protocols (IKE, SSH), but optional in TLS.
Again, according to Wikipedia, by default, TLS only proves the identity of the server to the client using X.509 certificates. The authentication of the client to the server is left to the application layer. TLS also offers client-to-server authentication using client-side X.509 authentication. As it requires provisioning of the certificates to the clients and involves a less user-friendly experience, it is rarely used in end-user applications. Mutual TLS is much more widespread in B2B applications, where a limited number of programmatic clients connect to specific web services. The operational burden is limited, and security requirements are usually much higher, as compared to consumer environments.
This form of mutual authentication would be beneficial if we had external applications or other services outside our GKE cluster, consuming our API. Using mTLS, we could further enhance the security of those types of interactions.
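In such a B2B scenario, each programmatic client would present its own certificate when connecting; a hypothetical sketch, where client.crt and client.key would be credentials issued to the consuming application:

# Present a client certificate and key, in addition to verifying the server's certificate
curl -v https://api.dev.storefront-demo.com/accounts/ \
  --cert client.crt --key client.key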
All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.