Building a Microservices Platform with Confluent Cloud, MongoDB Atlas, Istio, and Google Kubernetes Engine
Posted by Gary A. Stafford in Bash Scripting, Cloud, DevOps, Enterprise Software Development, GCP, Java Development, Python, Software Development on December 28, 2018
Leading SaaS providers have sufficiently matured the integration capabilities of their product offerings to a point where it is now reasonable for enterprises to architect multi-vendor, single- and multi-cloud Production platforms, without re-engineering existing cloud-native applications. In previous posts, we have integrated other SaaS products, including MongoDB Atlas (fully-managed MongoDB-as-a-service), ElephantSQL (fully-managed PostgreSQL-as-a-service), and CloudAMQP (RabbitMQ-as-a-service), into cloud-native applications on Azure, AWS, GCP, and PCF.
In this post, we will build and deploy an existing, Spring Framework, microservice-based, cloud-native API to Google Kubernetes Engine (GKE), replete with Istio 1.0, on Google Cloud Platform (GCP). The API will rely on Confluent Cloud to provide a fully-managed, Kafka-based messaging-as-a-service (MaaS). Similarly, the API will rely on MongoDB Atlas to provide a fully-managed, MongoDB-based Database-as-a-service (DBaaS).
Background
In a previous two-part post, Using Eventual Consistency and Spring for Kafka to Manage a Distributed Data Model: Part 1 and Part 2, we examined the role of Apache Kafka in an event-driven, eventually consistent, distributed system architecture. The system, an online storefront RESTful API simulation, was composed of multiple, Java Spring Boot microservices, each with their own MongoDB database. The microservices used a publish/subscribe model to communicate with each other using Kafka-based messaging. The Spring services were built using the Spring for Apache Kafka and Spring Data MongoDB projects.
Given the use case of placing an order through the Storefront API, we examined the interactions of three microservices: the Accounts, Fulfillment, and Orders services. We examined how the three services used Kafka to communicate state changes to each other in a fully-decoupled manner.
The Storefront API’s microservices were managed behind an API Gateway, Netflix’s Zuul. Service discovery and load balancing were handled by Netflix’s Eureka. Both Zuul and Eureka are part of the Spring Cloud Netflix project. In that post, the entire containerized system was deployed to Docker Swarm.
Developing the services, not operationalizing the platform, was the primary objective of the previous post.
Featured Technologies
The following technologies are featured prominently in this post.
Confluent Cloud
In May 2018, Google announced a partnership with Confluent to provide Confluent Cloud on GCP, a managed Apache Kafka solution for the Google Cloud Platform. Confluent, founded by the creators of Kafka, Jay Kreps, Neha Narkhede, and Jun Rao, is known for its commercial, Kafka-based streaming platform for the Enterprise.
Confluent Cloud is a fully-managed, cloud-based streaming service based on Apache Kafka. Confluent Cloud delivers a low-latency, resilient, scalable streaming service, deployable in minutes. Confluent deploys, upgrades, and maintains your Kafka clusters. Confluent Cloud is currently available on both AWS and GCP.
Confluent Cloud offers two plans, Professional and Enterprise. The Professional plan is optimized for projects under development, and for smaller organizations and applications. Professional plan rates for Confluent Cloud start at $0.55/hour. The Enterprise plan adds full enterprise capabilities such as service-level agreements (SLAs) with a 99.95% uptime and virtual private cloud (VPC) peering. The limitations and supported features of both plans are detailed here.
MongoDB Atlas
Similar to Confluent Cloud, MongoDB Atlas is a fully-managed MongoDB-as-a-Service, available on AWS, Azure, and GCP. Atlas, a mature SaaS product, offers high-availability, uptime SLAs, elastic scalability, cross-region replication, enterprise-grade security, LDAP integration, BI Connector, and much more.
MongoDB Atlas currently offers four pricing plans, Free, Basic, Pro, and Enterprise. Plans range from the smallest, M0-sized MongoDB cluster, with shared RAM and 512 MB storage, up to the massive M400 MongoDB cluster, with 488 GB of RAM and 3 TB of storage.
MongoDB Atlas has been featured in several past posts, including Deploying and Configuring Istio on Google Kubernetes Engine (GKE) and Developing Applications for the Cloud with Azure App Services and MongoDB Atlas.
Kubernetes Engine
According to Google, Google Kubernetes Engine (GKE) provides a fully-managed, production-ready Kubernetes environment for deploying, managing, and scaling your containerized applications using Google infrastructure. GKE consists of multiple Google Compute Engine instances, grouped together to form a cluster.
A forerunner to other managed Kubernetes platforms, like EKS (AWS), AKS (Azure), PKS (Pivotal), and IBM Cloud Kubernetes Service, GKE launched publicly in 2015. GKE was built on Google’s experience of running hyper-scale services like Gmail and YouTube in containers for over 12 years.
GKE’s pricing is based on a pay-as-you-go, per-second-billing plan, with no up-front or termination fees, similar to Confluent Cloud and MongoDB Atlas. Cluster sizes range from 1 – 1,000 nodes. Node machine types may be optimized for standard workloads, CPU, memory, GPU, or high-availability. Compute power ranges from 1 – 96 vCPUs and memory from 1 – 624 GB of RAM.
Demonstration
In this post, we will deploy the three Storefront API microservices to a GKE cluster on GCP. Confluent Cloud on GCP will replace the previous Docker-based Kafka implementation. Similarly, MongoDB Atlas will replace the previous Docker-based MongoDB implementation.
Kubernetes and Istio 1.0 will replace Netflix’s Zuul and Eureka for API management, load-balancing, routing, and service discovery. Google Stackdriver will provide logging and monitoring. Docker Images for the services will be stored in Google Container Registry. Although not fully operationalized, the Storefront API will be closer to a Production-like platform than previously demonstrated on Docker Swarm.
For brevity, we will not enable standard API security features like HTTPS, OAuth for authentication, and request quotas and throttling, all of which are essential in Production. Nor will we integrate a full lifecycle API management tool, like Google Apigee.
Source Code
The source code for this demonstration is contained in four separate GitHub repositories: storefront-kafka-docker, storefront-demo-accounts, storefront-demo-orders, and storefront-demo-fulfillment. However, since the Docker Images for the three storefront services are available on Docker Hub, it is only necessary to clone the storefront-kafka-docker project. This project contains all the code to deploy and configure the GKE cluster and Kubernetes resources (gist).
git clone --branch master --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/storefront-kafka-docker.git

# optional repositories
git clone --branch gke --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/storefront-demo-accounts.git

git clone --branch gke --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/storefront-demo-orders.git

git clone --branch gke --single-branch --depth 1 --no-tags \
  https://github.com/garystafford/storefront-demo-fulfillment.git
Source code samples in this post are displayed as GitHub Gists, which may not display correctly on all mobile and social media browsers.
Setup Process
The setup of the Storefront API platform is divided into a few logical steps:
- Create the MongoDB Atlas cluster;
- Create the Confluent Cloud Kafka cluster;
- Create Kafka topics;
- Modify the Kubernetes resources;
- Modify the microservices to support Confluent Cloud configuration;
- Create the GKE cluster with Istio on GCP;
- Apply the Kubernetes resources to the GKE cluster;
- Test that the Storefront API, Kafka, and MongoDB are functioning properly.
MongoDB Atlas Cluster
This post assumes you already have a MongoDB Atlas account and an existing project created. MongoDB Atlas accounts are free to set up if you do not already have one. Account creation does require the use of a Credit Card.
For minimal latency, we will be creating the MongoDB Atlas, Confluent Cloud Kafka, and GKE clusters all within the Google Cloud Platform’s us-central1 Region. Available GCP Regions and Zones for MongoDB Atlas, Confluent Cloud, and GKE vary, based on multiple factors.
For this demo, I suggest creating a free, M0-sized MongoDB cluster. The M0-sized, 3-data-node cluster, with shared RAM and 512 MB of storage, currently running MongoDB 4.0.4, is fine for individual development. The us-central1 Region is the only available US Region for the free-tier M0 cluster on GCP. An M0-sized Atlas cluster may take 7–10 minutes to provision.
MongoDB Atlas’ Web-based management console provides convenient links to cluster details, metrics, alerts, and documentation.
Once the cluster is ready, you can review details about the cluster and each individual cluster node.
In addition to the account owner, create a demo_user account. This account will be used to authenticate and connect with the MongoDB databases from the storefront services. For this demo, we will use the same, single user account for all three services. In Production, you would most likely have individual users for each service.
Again, for security purposes, Atlas requires you to whitelist the IP address or CIDR block from which the storefront services will connect to the cluster. For now, open the access to your specific IP address using whatsmyip.com, or much less-securely, to all IP addresses (0.0.0.0/0). Once the GKE cluster and external static IP addresses are created, make sure to come back and update this value; do not leave this wide open to the Internet.
The Java Spring Boot storefront services use a Spring Profile, gke. According to Spring, Spring Profiles provide a way to segregate parts of your application configuration and make it available only in certain environments. The gke Spring Profile’s configuration values may be set in a number of ways. For this demo, the majority of the values will be set using Kubernetes Deployment, ConfigMap, and Secret resources, shown later.

The first two Spring configuration values we will need are the MongoDB Atlas cluster’s connection string and the demo_user account password. Note both of these for later use.
Confluent Cloud Kafka Cluster
Similar to MongoDB Atlas, this post assumes you already have a Confluent Cloud account and an existing project. It is free to set up a Professional account and a new project if you do not already have one. Account creation does require the use of a Credit Card.
The Confluent Cloud web-based management console is shown below. Experienced users of other SaaS platforms may find the Confluent Cloud web-based console a bit sparse on features. In my opinion, the console lacks some necessary features, like cluster observability, individual Kafka topic management, detailed billing history (always says $0?), and persistent history of cluster activities, which survives cluster deletion. It seems like Confluent prefers users to download and configure their Confluent Control Center to get the functionality you might normally expect from a web-based SaaS management tool.
As explained earlier, for minimal latency, I suggest creating the MongoDB Atlas cluster, Confluent Cloud Kafka cluster, and the GKE cluster, all on the Google Cloud Platform’s us-central1 Region. For this demo, choose the smallest cluster size available on GCP, in the us-central1 Region, with 1 MB/s R/W throughput and 500 MB of storage. As shown below, the cost will be approximately $0.55/hour. Don’t forget to delete this cluster when you are done with the demonstration, or you will continue to be charged.
Cluster creation of the minimally-sized Confluent Cloud cluster is pretty quick.
Once the cluster is ready, Confluent provides instructions on how to interact with the cluster via the Confluent Cloud CLI. Install the Confluent Cloud CLI, locally, for use later.
As explained earlier, the Java Spring Boot storefront services use a Spring Profile, gke. Like MongoDB Atlas, the Confluent Cloud Kafka cluster configuration values will be set using Kubernetes ConfigMap and Secret resources, shown later. There are several Confluent Cloud Java configuration values shown in the Client Config Java tab; we will need these for later use.
SASL and JAAS
Some users may not be familiar with the terms, SASL and JAAS. According to Wikipedia, Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. According to Confluent, Kafka brokers support client authentication via SASL. SASL authentication can be enabled concurrently with SSL encryption (SSL client authentication will be disabled).
There are numerous SASL mechanisms. The PLAIN SASL mechanism (SASL/PLAIN), used by Confluent, is a simple username/password authentication mechanism that is typically used with TLS for encryption to implement secure authentication. Kafka supports a default implementation for SASL/PLAIN which can be extended for production use. The SASL/PLAIN mechanism should only be used with SSL as a transport layer to ensure that clear passwords are not transmitted on the wire without encryption.
According to Wikipedia, Java Authentication and Authorization Service (JAAS) is the Java implementation of the standard Pluggable Authentication Module (PAM) information security framework. According to Confluent, Kafka uses the JAAS for SASL configuration. You must provide JAAS configurations for all SASL authentication mechanisms.
Cluster Authentication
Similar to MongoDB Atlas, we need to authenticate with the Confluent Cloud cluster from the storefront services. The authentication to Confluent Cloud is done with an API Key. Create a new API Key, and note the Key and Secret; these two additional pieces of configuration will be needed later.
Confluent Cloud API Keys can be created and deleted as necessary. For security in Production, API Keys should be created for each service and regularly rotated.
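As a point of reference, the sasl.jaas.config value used with SASL/PLAIN generally takes the form shown below, with the API Key as the JAAS username and the API Secret as the password. The key and secret here are placeholders, not real credentials; this is only a sketch of the format, expressed in Bash so the same value can be Base64-encoded later for the Kubernetes Secret.

# Placeholder API Key and Secret -- substitute the values created above
SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="<your_api_key>" password="<your_api_secret>";'

# The same value will later be Base64-encoded for the confluent-cloud-kafka Kubernetes Secret
echo -n "$SASL_JAAS_CONFIG" | base64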
Kafka Topics
With the cluster created, create the storefront service’s three Kafka topics manually, using Confluent Cloud’s ccloud CLI tool. First, configure the Confluent Cloud CLI using the ccloud init command, supplying your new cluster’s Bootstrap Servers address, API Key, and API Secret. The instructions are shown in the cluster’s Client Config tab of the Confluent Cloud web-based management interface.

Create the storefront service’s three Kafka topics using the ccloud topic create command. Use the list command to confirm they are created.
# manually create kafka topics
ccloud topic create accounts.customer.change
ccloud topic create fulfillment.order.change
ccloud topic create orders.order.fulfill

# list kafka topics
ccloud topic list
accounts.customer.change
fulfillment.order.change
orders.order.fulfill
Another useful ccloud command, topic describe, displays topic replication details. The new topics will have a replication factor of 3 and a partition count of 12. Adding the --verbose flag to the command, ccloud --verbose topic describe, displays low-level topic and cluster configuration details, as well as a log of all topic-related activities.
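For example, to inspect the accounts topic created above, the two variations of the command look like the following; the other two topics work the same way, assuming the same ccloud CLI configuration created with ccloud init.

# replication details for a single topic
ccloud topic describe accounts.customer.change

# verbose output: low-level topic and cluster configuration, plus topic activity
ccloud --verbose topic describe accounts.customer.change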
Kubernetes Resources
The deployment of the three storefront microservices to the dev
Namespace will minimally require the following Kubernetes configuration resources.
- (1) Kubernetes Namespace;
- (3) Kubernetes Deployments;
- (3) Kubernetes Services;
- (1) Kubernetes ConfigMap;
- (2) Kubernetes Secrets;
- (1) Istio 1.0 Gateway;
- (1) Istio 1.0 VirtualService;
- (2) Istio 1.0 ServiceEntry;
The Istio networking.istio.io v1alpha3 API introduced the last three configuration resources in the list, to control traffic routing into, within, and out of the mesh. There are a total of four new networking.istio.io v1alpha3 API routing resources: Gateway, VirtualService, DestinationRule, and ServiceEntry.
Creating and managing such a large number of resources is a common complaint regarding the complexity of Kubernetes. Imagine the resource sprawl when you have dozens of microservices replicated across several namespaces. Fortunately, all resource files for this post are included in the storefront-kafka-docker project’s gke directory.
To follow along with the demo, you will need to make minor modifications to a few of these resources, including the Istio Gateway, Istio VirtualService, two Istio ServiceEntry resources, and two Kubernetes Secret resources.
Istio Gateway & VirtualService
Both the Istio Gateway and VirtualService configuration resources are contained in a single file, istio-gateway.yaml. For the demo, I am using a personal domain, storefront-demo.com, along with the sub-domain, api.dev, to host the Storefront API. The domain’s primary A record (‘@’) and sub-domain A record are both associated with the external IP address on the frontend of the load balancer. In the file, this host is configured for the Gateway and VirtualService resources. You can choose to replace the host with your own domain, or simply remove the host blocks altogether on lines 13–14 and 21–22. Removing the host blocks, you would then use the external IP address on the frontend of the load balancer (explained later in the post) to access the Storefront API (gist).
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: storefront-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - api.dev.storefront-demo.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront-dev
spec:
  hosts:
  - api.dev.storefront-demo.com
  gateways:
  - storefront-gateway
  http:
  - match:
    - uri:
        prefix: /accounts
    route:
    - destination:
        port:
          number: 8080
        host: accounts.dev.svc.cluster.local
  - match:
    - uri:
        prefix: /fulfillment
    route:
    - destination:
        port:
          number: 8080
        host: fulfillment.dev.svc.cluster.local
  - match:
    - uri:
        prefix: /orders
    route:
    - destination:
        port:
          number: 8080
        host: orders.dev.svc.cluster.local
Istio ServiceEntry
There are two Istio ServiceEntry configuration resources. Both ServiceEntry resources control egress traffic from the Storefront API services, and both of their ServiceEntry location items are set to MESH_EXTERNAL. The first ServiceEntry, mongodb-atlas-external-mesh.yaml, defines MongoDB Atlas cluster egress traffic from the Storefront API (gist).
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mongdb-atlas-external-mesh
spec:
  hosts:
  - <your_atlas_url.gcp.mongodb.net>
  ports:
  - name: mongo
    number: 27017
    protocol: MONGO
  location: MESH_EXTERNAL
  resolution: NONE
The other ServiceEntry, confluent-cloud-external-mesh.yaml, defines Confluent Cloud Kafka cluster egress traffic from the Storefront API (gist).
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: confluent-cloud-external-mesh
spec:
  hosts:
  - <your_cluster_url.us-central1.gcp.confluent.cloud>
  ports:
  - name: kafka
    number: 9092
    protocol: TLS
  location: MESH_EXTERNAL
  resolution: NONE
Both need to have their hosts items replaced with the appropriate Atlas and Confluent Cloud URLs.
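One way to make those substitutions is with sed, shown below as a rough sketch. The two host values are hypothetical placeholders, and the commands assume GNU sed and the file paths used in the storefront-kafka-docker project.

# Hypothetical hosts -- replace with your actual Atlas and Confluent Cloud endpoints
ATLAS_HOST='your-cluster.gcp.mongodb.net'
KAFKA_HOST='your-cluster.us-central1.gcp.confluent.cloud'

# GNU sed; on macOS use sed -i '' instead of sed -i
sed -i "s|<your_atlas_url.gcp.mongodb.net>|${ATLAS_HOST}|" ./resources/other/mongodb-atlas-external-mesh.yaml
sed -i "s|<your_cluster_url.us-central1.gcp.confluent.cloud>|${KAFKA_HOST}|" ./resources/other/confluent-cloud-external-mesh.yaml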
Inspecting Istio Resources
The easiest way to view Istio resources is from the command line using the istioctl
and kubectl
CLI tools.
istioctl get gateway
istioctl get virtualservices
istioctl get serviceentry

kubectl describe gateway
kubectl describe virtualservices
kubectl describe serviceentry
Multiple Namespaces
In this demo, we are only deploying to a single Kubernetes Namespace, dev. However, Istio will also support routing traffic to multiple Namespaces. For example, a typical non-prod Kubernetes cluster might support dev, test, and uat, each associated with a different sub-domain. One way to support multiple Namespaces with Istio 1.0 is to add each host to the Istio Gateway (lines 14–16, below), then create a separate Istio VirtualService for each Namespace. All the VirtualServices are associated with the single Gateway. In each VirtualService, each service’s host address is the fully qualified domain name (FQDN) of the service. Part of the FQDN is the Namespace, which we change for each VirtualService (gist).
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: storefront-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - api.dev.storefront-demo.com
    - api.test.storefront-demo.com
    - api.uat.storefront-demo.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront-dev
spec:
  hosts:
  - api.dev.storefront-demo.com
  gateways:
  - storefront-gateway
  http:
  - match:
    - uri:
        prefix: /accounts
    route:
    - destination:
        port:
          number: 8080
        host: accounts.dev.svc.cluster.local
  - match:
    - uri:
        prefix: /fulfillment
    route:
    - destination:
        port:
          number: 8080
        host: fulfillment.dev.svc.cluster.local
  - match:
    - uri:
        prefix: /orders
    route:
    - destination:
        port:
          number: 8080
        host: orders.dev.svc.cluster.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront-test
spec:
  hosts:
  - api.test.storefront-demo.com
  gateways:
  - storefront-gateway
  http:
  - match:
    - uri:
        prefix: /accounts
    route:
    - destination:
        port:
          number: 8080
        host: accounts.test.svc.cluster.local
  - match:
    - uri:
        prefix: /fulfillment
    route:
    - destination:
        port:
          number: 8080
        host: fulfillment.test.svc.cluster.local
  - match:
    - uri:
        prefix: /orders
    route:
    - destination:
        port:
          number: 8080
        host: orders.test.svc.cluster.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront-uat
spec:
  hosts:
  - api.uat.storefront-demo.com
  gateways:
  - storefront-gateway
  http:
  - match:
    - uri:
        prefix: /accounts
    route:
    - destination:
        port:
          number: 8080
        host: accounts.uat.svc.cluster.local
  - match:
    - uri:
        prefix: /fulfillment
    route:
    - destination:
        port:
          number: 8080
        host: fulfillment.uat.svc.cluster.local
  - match:
    - uri:
        prefix: /orders
    route:
    - destination:
        port:
          number: 8080
        host: orders.uat.svc.cluster.local
MongoDB Atlas Secret
There is one Kubernetes Secret for the sensitive MongoDB configuration and one Secret for the sensitive Confluent Cloud configuration. The Kubernetes Secret object type is intended to hold sensitive information, such as passwords, OAuth tokens, and SSH keys.
The mongodb-atlas-secret.yaml file contains the MongoDB Atlas cluster connection strings, each with the demo_user username and password, one for each of the storefront services’ databases (gist).
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-atlas
  namespace: dev
type: Opaque
data:
  mongodb.uri.accounts: your_base64_encoded_value
  mongodb.uri.fulfillment: your_base64_encoded_value
  mongodb.uri.orders: your_base64_encoded_value
Kubernetes Secrets are Base64 encoded. The easiest way to encode the secret values is using the Linux base64
program. The base64
program encodes and decodes Base64 data, as specified in RFC 4648. Pass each MongoDB URI string to the base64
program using echo -n
.
MONGODB_URI=mongodb+srv://demo_user:your_password@your_cluster_address/accounts?retryWrites=true
echo -n $MONGODB_URI | base64

bW9uZ29kYitzcnY6Ly9kZW1vX3VzZXI6eW91cl9wYXNzd29yZEB5b3VyX2NsdXN0ZXJfYWRkcmVzcy9hY2NvdW50cz9yZXRyeVdyaXRlcz10cnVl
Repeat this process for the three MongoDB connection strings.
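A short loop is one way to generate all three encoded values at once. The cluster address and password below are placeholders, and the database names match the keys in the Secret’s data map; this is only a sketch of the same encoding step shown above.

# Placeholders -- substitute your own Atlas cluster address and demo_user password
CLUSTER_ADDRESS='your_cluster_address'
DEMO_USER_PASSWORD='your_password'

for DB in accounts fulfillment orders; do
  URI="mongodb+srv://demo_user:${DEMO_USER_PASSWORD}@${CLUSTER_ADDRESS}/${DB}?retryWrites=true"
  # tr removes the line wrapping some base64 implementations add to long values
  echo "mongodb.uri.${DB}: $(echo -n "${URI}" | base64 | tr -d '\n')"
done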
Confluent Cloud Secret
The confluent-cloud-kafka-secret.yaml file contains two data fields in the Secret’s data map, bootstrap.servers
and sasl.jaas.config
. These configuration items were both listed in the Client Config Java tab of the Confluent Cloud web-based management console, as shown previously. The sasl.jaas.config
data field requires the Confluent Cloud cluster API Key and Secret you created earlier. Again, use the base64 encoding process for these two data fields (gist).
apiVersion: v1
kind: Secret
metadata:
  name: confluent-cloud-kafka
  namespace: dev
type: Opaque
data:
  bootstrap.servers: your_base64_encoded_value
  sasl.jaas.config: your_base64_encoded_value
Confluent Cloud ConfigMap
The remaining five Confluent Cloud Kafka cluster configuration values are not sensitive, and therefore, may be placed in a Kubernetes ConfigMap, confluent-cloud-kafka-configmap.yaml (gist).
apiVersion: v1
kind: ConfigMap
metadata:
  name: confluent-cloud-kafka
data:
  ssl.endpoint.identification.algorithm: "https"
  sasl.mechanism: "PLAIN"
  request.timeout.ms: "20000"
  retry.backoff.ms: "500"
  security.protocol: "SASL_SSL"
Accounts Deployment Resource
To see how the services consume the ConfigMap and Secret values, review the Accounts Deployment resource, shown below. Note the environment variables section, on lines 44–90, is a mix of hard-coded values and values referenced from the ConfigMap and two Secrets, shown above (gist).
apiVersion: v1
kind: Service
metadata:
  name: accounts
  labels:
    app: accounts
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: accounts
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: accounts
  labels:
    app: accounts
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: accounts
  template:
    metadata:
      labels:
        app: accounts
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: accounts
        image: garystafford/storefront-accounts:gke-2.2.0
        resources:
          requests:
            memory: "250M"
            cpu: "100m"
          limits:
            memory: "400M"
            cpu: "250m"
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "gke"
        - name: SERVER_SERVLET_CONTEXT-PATH
          value: "/accounts"
        - name: LOGGING_LEVEL_ROOT
          value: "INFO"
        - name: SPRING_DATA_MONGODB_URI
          valueFrom:
            secretKeyRef:
              name: mongodb-atlas
              key: mongodb.uri.accounts
        - name: SPRING_KAFKA_BOOTSTRAP-SERVERS
          valueFrom:
            secretKeyRef:
              name: confluent-cloud-kafka
              key: bootstrap.servers
        - name: SPRING_KAFKA_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
          valueFrom:
            configMapKeyRef:
              name: confluent-cloud-kafka
              key: ssl.endpoint.identification.algorithm
        - name: SPRING_KAFKA_PROPERTIES_SASL_MECHANISM
          valueFrom:
            configMapKeyRef:
              name: confluent-cloud-kafka
              key: sasl.mechanism
        - name: SPRING_KAFKA_PROPERTIES_REQUEST_TIMEOUT_MS
          valueFrom:
            configMapKeyRef:
              name: confluent-cloud-kafka
              key: request.timeout.ms
        - name: SPRING_KAFKA_PROPERTIES_RETRY_BACKOFF_MS
          valueFrom:
            configMapKeyRef:
              name: confluent-cloud-kafka
              key: retry.backoff.ms
        - name: SPRING_KAFKA_PROPERTIES_SASL_JAAS_CONFIG
          valueFrom:
            secretKeyRef:
              name: confluent-cloud-kafka
              key: sasl.jaas.config
        - name: SPRING_KAFKA_PROPERTIES_SECURITY_PROTOCOL
          valueFrom:
            configMapKeyRef:
              name: confluent-cloud-kafka
              key: security.protocol
        ports:
        - containerPort: 8080
        imagePullPolicy: IfNotPresent
Modify Microservices for Confluent Cloud
As explained earlier, Confluent Cloud’s Kafka cluster requires some very specific configuration, based largely on the security features of Confluent Cloud. Connecting to Confluent Cloud requires some minor modifications to the existing storefront service source code. The changes are identical for all three services. To understand the service’s code, I suggest reviewing the previous post, Using Eventual Consistency and Spring for Kafka to Manage a Distributed Data Model: Part 1. Note the following changes are already made to the source code in the gke git branch and are not necessary for this demo.
The previous Kafka SenderConfig and ReceiverConfig Java classes have been converted to Java interfaces. There are four new classes, SenderConfigConfluent, SenderConfigNonConfluent, ReceiverConfigConfluent, and ReceiverConfigNonConfluent, each of which implements one of the new interfaces. The new classes carry a Spring Boot Profile class-level annotation. One set of Sender and Receiver classes is assigned the @Profile("gke") annotation, and the other, the @Profile("!gke") annotation. When the services start, one of the two class implementations is loaded, depending on whether the Active Spring Profile is gke or not. To understand the changes better, examine the Account service’s SenderConfigConfluent.java file (gist).
Line 20: Designates this class as belonging to the gke Spring Profile.
Line 23: The class now implements an interface.
Lines 25–44: Reference the Confluent Cloud Kafka cluster configuration. The values for these variables will come from the Kubernetes ConfigMap and Secret, described previously, when the services are deployed to GKE.
Lines 53–58: Additional properties that have been added to the Kafka Sender configuration properties, specifically for Confluent Cloud.
package com.storefront.config;

import com.storefront.kafka.Sender;
import com.storefront.model.CustomerChangeEvent;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

import java.util.HashMap;
import java.util.Map;

@Profile("gke")
@Configuration
@EnableKafka
public class SenderConfigConfluent implements SenderConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${spring.kafka.properties.ssl.endpoint.identification.algorithm}")
    private String sslEndpointIdentificationAlgorithm;

    @Value("${spring.kafka.properties.sasl.mechanism}")
    private String saslMechanism;

    @Value("${spring.kafka.properties.request.timeout.ms}")
    private String requestTimeoutMs;

    @Value("${spring.kafka.properties.retry.backoff.ms}")
    private String retryBackoffMs;

    @Value("${spring.kafka.properties.security.protocol}")
    private String securityProtocol;

    @Value("${spring.kafka.properties.sasl.jaas.config}")
    private String saslJaasConfig;

    @Override
    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        props.put("ssl.endpoint.identification.algorithm", sslEndpointIdentificationAlgorithm);
        props.put("sasl.mechanism", saslMechanism);
        props.put("request.timeout.ms", requestTimeoutMs);
        props.put("retry.backoff.ms", retryBackoffMs);
        props.put("security.protocol", securityProtocol);
        props.put("sasl.jaas.config", saslJaasConfig);
        return props;
    }

    @Override
    @Bean
    public ProducerFactory<String, CustomerChangeEvent> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Override
    @Bean
    public KafkaTemplate<String, CustomerChangeEvent> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Override
    @Bean
    public Sender sender() {
        return new Sender();
    }
}
Once the code changes were completed and tested, the Docker Image for each service was rebuilt and uploaded to Docker Hub for public access. When recreating the images, the version of the Java Docker base image was upgraded from the previous post to Alpine OpenJDK 12 (openjdk:12-jdk-alpine).
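As a rough sketch, rebuilding and pushing one of the images might look like the following, assuming a Dockerfile at the root of each service project and your own Docker Hub account in place of mine.

# Rebuild and push the Accounts service image (repeat for orders and fulfillment)
cd storefront-demo-accounts
docker build -t your-dockerhub-account/storefront-accounts:gke-2.2.0 .
docker push your-dockerhub-account/storefront-accounts:gke-2.2.0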
Google Kubernetes Engine (GKE) with Istio
Having created the MongoDB Atlas and Confluent Cloud clusters, built the Kubernetes and Istio resources, modified the service’s source code, and pushed the new Docker Images to Docker Hub, the GKE cluster may now be built.
For the sake of brevity, we will manually create the cluster and deploy the resources, using the Google Cloud SDK gcloud and Kubernetes kubectl CLI tools, as opposed to automating with CI/CD tools, like Jenkins or Spinnaker. For this demonstration, I suggest a minimally-sized, two-node GKE cluster using n1-standard-2 machine-type instances. The latest available releases at the time of this post were Kubernetes 1.11.5-gke.5 and Istio 1.0.3 (Istio on GKE is still considered beta). Note Kubernetes and Istio are evolving rapidly, thus the configuration flags often change with newer versions. Check the GKE Clusters tab for the latest clusters create command format (gist).
#!/bin/bash
#
# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License
# purpose: Create non-prod Kubernetes cluster on GKE

# Constants - CHANGE ME!
readonly NAMESPACE='dev'
readonly PROJECT='gke-confluent-atlas'
readonly CLUSTER='storefront-api'
readonly REGION='us-central1'
readonly ZONE='us-central1-a'

# Create GKE cluster (time in foreground)
time \
  gcloud beta container \
  --project $PROJECT clusters create $CLUSTER \
  --zone $ZONE \
  --username "admin" \
  --cluster-version "1.11.5-gke.5" \
  --machine-type "n1-standard-2" \
  --image-type "COS" \
  --disk-type "pd-standard" \
  --disk-size "100" \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --num-nodes "2" \
  --enable-stackdriver-kubernetes \
  --enable-ip-alias \
  --network "projects/$PROJECT/global/networks/default" \
  --subnetwork "projects/$PROJECT/regions/$REGION/subnetworks/default" \
  --default-max-pods-per-node "110" \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,Istio \
  --istio-config auth=MTLS_PERMISSIVE \
  --issue-client-certificate \
  --metadata disable-legacy-endpoints=true \
  --enable-autoupgrade \
  --enable-autorepair

# Get cluster creds
gcloud container clusters get-credentials $CLUSTER \
  --zone $ZONE --project $PROJECT
kubectl config current-context

# Create dev Namespace
kubectl apply -f ./resources/other/namespaces.yaml

# Enable Istio automatic sidecar injection in Dev Namespace
kubectl label namespace $NAMESPACE istio-injection=enabled
Executing these commands successfully will build the cluster and the dev
Namespace, into which all the resources will be deployed. The two-node cluster creation process takes about three minutes on average.
We can also observe the new GKE cluster from the GKE Clusters Details tab.
Creating the GKE cluster also creates several other GCP resources, including a TCP load balancer and three external IP addresses. Shown below in the VPC network External IP addresses tab, there is one IP address associated with each of the GKE cluster’s two VM instances, and one IP address associated with the frontend of the load balancer.
While the TCP load balancer’s frontend is associated with the external IP address, the load balancer’s backend is a target pool, containing the two GKE cluster node machine instances.
A forwarding rule associates the load balancer’s frontend IP address with the backend target pool. External requests to the frontend IP address will be routed to the GKE cluster. From there, requests will be routed by Kubernetes and Istio to the individual storefront service Pods, and through the Istio sidecar (Envoy) proxies. There is an Istio sidecar proxy deployed to each Storefront service Pod.
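One way to look up that frontend IP address from the command line is sketched below; the Service name assumes the Istio add-on’s default istio-ingressgateway Service in the istio-system Namespace.

# External IP address on the frontend of the load balancer (Istio ingress gateway)
kubectl get service istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# GCP's view of the same networking resources
gcloud compute forwarding-rules list
gcloud compute addresses list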
Below, we see the details of the load balancer’s target pool, containing the two GKE cluster’s VMs.
As shown at the start of the post, a simplified view of the GCP/GKE network routing looks as follows. For brevity, firewall rules and routes are not illustrated in the diagram.
Apply Kubernetes Resources
Again, using kubectl, deploy the three services and associated Kubernetes and Istio resources. Note the Istio Gateway and VirtualService(s) are not deployed to the dev
Namespace since their role is to control ingress and route traffic to the dev
Namespace and the services within it (gist).
#!/bin/bash
#
# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License
# purpose: Deploy Kubernetes/Istio resources

# Constants - CHANGE ME!
readonly NAMESPACE='dev'
readonly PROJECT='gke-confluent-atlas'
readonly CLUSTER='storefront-api'
readonly REGION='us-central1'
readonly ZONE='us-central1-a'

kubectl apply -f ./resources/other/istio-gateway.yaml
kubectl apply -n $NAMESPACE -f ./resources/other/mongodb-atlas-external-mesh.yaml
kubectl apply -n $NAMESPACE -f ./resources/other/confluent-cloud-external-mesh.yaml
kubectl apply -n $NAMESPACE -f ./resources/config/confluent-cloud-kafka-configmap.yaml
kubectl apply -f ./resources/config/mongodb-atlas-secret.yaml
kubectl apply -f ./resources/config/confluent-cloud-kafka-secret.yaml
kubectl apply -n $NAMESPACE -f ./resources/services/accounts.yaml
kubectl apply -n $NAMESPACE -f ./resources/services/fulfillment.yaml
kubectl apply -n $NAMESPACE -f ./resources/services/orders.yaml
Once these commands complete successfully, on the Workloads tab, we should observe two Pods of each of the three storefront service Kubernetes Deployments deployed to the dev
Namespace, all six Pods with a Status of ‘OK’. A Deployment controller provides declarative updates for Pods and ReplicaSets.
On the Services tab, we should observe the three storefront service’s Kubernetes Services. A Service in Kubernetes is a REST object.
On the Configuration Tab, we should observe the Kubernetes ConfigMap and two Secrets also deployed to the dev Environment.
Below, we see the confluent-cloud-kafka ConfigMap resource with its data map of Confluent Cloud configuration.
Below, we see the confluent-cloud-kafka Secret with its data map of sensitive Confluent Cloud configuration.
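The same resources can also be confirmed from the command line with kubectl; a quick spot check might look like this.

# Deployments, Pods, and Services in the dev Namespace
kubectl get deployments,pods,services -n dev

# ConfigMap and Secrets holding the Atlas and Confluent Cloud configuration
kubectl get configmap confluent-cloud-kafka -n dev -o yaml
kubectl get secrets -n dev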
Test the Storefront API
If you recall from part two of the previous post, there is a set of seven Storefront API endpoints that can be called to create sample data and test the API. The HTTP GET Requests hit each service, generate test data, populate the three MongoDB databases, and produce and consume Kafka messages across all three topics. Making these requests is the easiest way to confirm the Storefront API is working properly.
- Sample Customer: accounts/customers/sample
- Sample Orders: orders/customers/sample/orders
- Sample Fulfillment Requests: orders/customers/sample/fulfill
- Sample Processed Order Event: fulfillment/fulfillment/sample/process
- Sample Shipped Order Event: fulfillment/fulfillment/sample/ship
- Sample In-Transit Order Event: fulfillment/fulfillment/sample/in-transit
- Sample Received Order Event: fulfillment/fulfillment/sample/receive
There are a wide variety of tools available to interact with the Storefront API. The project includes a simple Python script, sample_data.py, which will make HTTP GET requests to each of the above endpoints, after confirming their health, and return a success message.
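Equivalent requests can also be made with curl, as sketched below. The host assumes the api.dev.storefront-demo.com sub-domain configured in the Istio Gateway; substitute the load balancer’s frontend IP address if you removed the host blocks.

# Base URL for the Storefront API -- substitute your own domain or external IP address
API_HOST='http://api.dev.storefront-demo.com'

curl -s "${API_HOST}/accounts/customers/sample"
curl -s "${API_HOST}/orders/customers/sample/orders"
curl -s "${API_HOST}/orders/customers/sample/fulfill"
curl -s "${API_HOST}/fulfillment/fulfillment/sample/process"
curl -s "${API_HOST}/fulfillment/fulfillment/sample/ship"
curl -s "${API_HOST}/fulfillment/fulfillment/sample/in-transit"
curl -s "${API_HOST}/fulfillment/fulfillment/sample/receive"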
Postman
Postman, my personal favorite, is also an excellent tool to explore the Storefront API resources. I have the above set of the HTTP GET requests saved in a Postman Collection. Using Postman, below, we see the response from an HTTP GET request to the /accounts/customers
endpoint.
Postman also allows us to create integration tests and run Collections of Requests in batches using Postman’s Collection Runner. To test the Storefront API, below, I used Collection Runner to run a single series of integration tests, intended to confirm the API’s functionality, by checking for expected HTTP response codes and expected values in the response payloads. Postman also shows the response times from the Storefront API. Since this platform was not built to meet Production SLAs, measuring response times is less critical in the Development environment.
Google Stackdriver
If you recall, the GKE cluster had the Stackdriver Kubernetes option enabled, which gives us, amongst other observability features, access to all cluster, node, pod, and container logs. To confirm data is flowing to the MongoDB databases and Kafka topics, we can check the logs from any of the containers. Below we see the logs from the two Accounts Pod containers. Observe the AfterSaveListener
handler firing on an onAfterSave
event, which sends a CustomerChangeEvent
payload to the accounts.customer.change
Kafka topic, without error. These entries confirm that both Atlas and Confluent Cloud are reachable by the GKE-based workloads, and appear to be functioning properly.
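The same container logs can also be tailed directly with kubectl; the label and container name below follow the Accounts Deployment resource shown earlier.

# Tail the accounts application containers (not the Istio sidecars) in the dev Namespace
kubectl logs -n dev -l app=accounts -c accounts --tail=50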
MongoDB Atlas Collection View
Review the MongoDB Atlas Clusters Collections tab. In this Development environment, the MongoDB databases and collections are created the first time a service tries to connect to them. In Production, the databases would be created and secured in advance of deploying resources. Once the sample data requests are completed successfully, you should now observe the three Storefront API databases, each with collections of documents.
MongoDB Compass
In addition to the Atlas web-based management console, MongoDB Compass is an excellent desktop tool to explore and manage MongoDB databases. Compass is available for Mac, Linux, and Windows. One of the many great features of Compass is the ability to visualize collection schemas and interactively filter documents. Below we see the fulfillment.requests
collection schema.
Confluent Control Center
Confluent Control Center is a downloadable, web browser-based tool for managing and monitoring Apache Kafka, including your Confluent Cloud clusters. Confluent Control Center provides rich functionality for building and monitoring production data pipelines and streaming applications. Confluent offers a free 30-day trial of Confluent Control Center. Since Control Center is provided at an additional fee, and I found it difficult to configure for Confluent Cloud clusters based on Confluent’s documentation, I chose not to cover it in detail in this post.
Tear Down Cluster
Delete your Confluent Cloud and MongoDB clusters using their web-based management consoles. To delete the GKE cluster and all deployed Kubernetes resources, use the cluster delete
command. Also, double-check that the external IP addresses and load balancer, associated with the cluster, were also deleted as part of the cluster deletion (gist).
#!/bin/bash
#
# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License
# purpose: Tear down GKE cluster and associated resources

# Constants - CHANGE ME!
readonly PROJECT='gke-confluent-atlas'
readonly CLUSTER='storefront-api'
readonly REGION='us-central1'
readonly ZONE='us-central1-a'

# Delete GKE cluster (time in foreground)
time yes | gcloud beta container clusters delete $CLUSTER --zone $ZONE

# Confirm network resources are also deleted
gcloud compute forwarding-rules list
gcloud compute target-pools list
gcloud compute firewall-rules list

# In case target-pool associated with Cluster is not deleted
yes | gcloud compute target-pools delete \
  $(gcloud compute target-pools list \
  --filter="region:($REGION)" --project $PROJECT \
  | awk 'NR==2 {print $1}')
Conclusion
In this post, we have seen how easy it is to integrate Cloud-based DBaaS and MaaS products with the managed Kubernetes services from GCP, AWS, and Azure. As this post demonstrated, leading SaaS providers have sufficiently matured the integration capabilities of their product offerings to a point where it is now reasonable for enterprises to architect multi-vendor, single- and multi-cloud Production platforms, without re-engineering existing cloud-native applications.
In future posts, we will revisit this Storefront API example, further demonstrating how to enable HTTPS (Securing Your Istio Ingress Gateway with HTTPS) and end-user authentication (Istio End-User Authentication for Kubernetes using JSON Web Tokens (JWT) and Auth0).
All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.
Developing Applications for the Cloud with Azure App Services and MongoDB Atlas
Posted by Gary A. Stafford in .NET Development, Azure, Cloud, Software Development on November 1, 2017
Shift Left Cloud
The continued growth of compute services from leading Cloud Service Providers (CSPs) like Microsoft, Amazon, and Google is transforming the architecture of modern software applications, as well as the software development lifecycle (SDLC). Self-service access to fully-managed, reasonably-priced, secure compute has significantly increased developer productivity. At the same time, cloud-based access to cutting-edge technologies, like Artificial Intelligence (AI), Internet of Things (IoT), Machine Learning, and Data Analytics, has accelerated the capabilities of modern applications. Finally, as CSPs become increasingly platform agnostic, Developers are no longer limited to a single technology stack or operating system. Today, Developers are solving complex problems with multi-cloud, multi-OS polyglot solutions.
Developers now leverage the Cloud from the very start of the software development process; shift left Cloud, if you will*. Developers are no longer limited to building and testing software on local workstations or on-premise servers, then throwing it over the wall to Operations for deployment to the Cloud. Developers using Azure, AWS, and GCP, develop, build, test, and deploy their code directly to the Cloud. Existing organizations are rapidly moving development environments from on-premise to the Cloud. New organizations are born in the Cloud, without the burden of legacy on-premise data-centers and servers under desks to manage.
Example Application
To demonstrate the ease of developing a modern application for the Cloud, let’s explore a simple API-based, NoSQL-backed web application. The application, The .NET Diner, simulates a rudimentary restaurant menu ordering interface. It consists of a single-page application (SPA) and two microservices backed by MongoDB. For simplicity, there is no API Gateway between the UI and the two services, as would normally be in place. An earlier version of this application was used in two previous posts, including Cloud-based Continuous Integration and Deployment for .NET Development.
The original restaurant order application was written with JQuery and RESTful .NET WCF Services. The new application, used in this post, has been completely re-written and modernized. The web-based user interface (UI) is written with Google’s Angular 4 framework using TypeScript. The UI relies on a microservices-based API, built with C# using Microsoft’s Web API 2 and .NET 4.7. The services rely on MongoDB for data persistence.
All code for this project is available on GitHub within two projects, one for the Angular UI and another for the C# services. The entire application can easily be built and run locally on Windows using MongoDB Community Edition. Alternately, to run the application in the Cloud, you will require an Azure and MongoDB Atlas account.
This post is primarily about the development experience. For brevity, the post will not delve into security, DevOps practices for CI/CD, and the complexities of staging and releasing code to Production.
Cross-Platform Development
The API, consisting of a set of C# microservices, was developed with Microsoft Visual Studio Community 2017 on Windows 10. Visual Studio touts itself as a full-featured Integrated Development Environment (IDE) for Android, iOS, Windows, web, and cloud. Visual Studio is easily integrated with Azure, AWS, and Google, through the use of Extensions. Visual Studio is an ideal IDE for cloud-centric application development.
To simulate a typical mixed-platform development environment, the Angular UI front-end was developed with JetBrains WebStorm 2017 on Mac OS X. WebStorm touts itself as a powerful IDE for modern JavaScript development.
Other tools used to develop the application include Git and GitHub for source code, MongoDB Community Edition for local database development, and Postman for API development and testing, both locally and on Azure. All the development tools used in the post are cross-platform. Versions of WebStorm, Visual Studio, MongoDB, Postman, Git, Node.js, npm, and Bash are all available for Mac, Windows, and Linux. Cross-platform flexibility is key when developing modern multi-OS polyglot applications.
Postman
Postman was used to build, test, and document the application’s API. Postman is an excellent choice for developing RESTful APIs. With Postman, you define Collections of HTTP requests for each of your APIs. You then define Environments, such as Development, Test, and Production, against which you will execute the Collections of HTTP requests. Each environment consists of environment-specific variables. Those variables can be used to define API URLs and as request parameters.
Not only can you define static variables, Postman’s runtime, built on Node.js, is scriptable using JavaScript, allowing you to programmatically set dynamic variables, based on the results of HTTP requests and responses, as demonstrated below.
Postman also allows you to write and run automated API integration tests, as well as perform load testing, as shown below.
Azure App Services
The Angular browser-based UI and the C# microservices will be deployed to Azure using the Azure App Service. Azure App Service is nearly identical to AWS Elastic Beanstalk and Google App Engine. According to Microsoft, Azure App Service allows Developers to quickly build, deploy, and scale enterprise-grade web, mobile, and API apps, running on Windows or Linux, using .NET, .NET Core, Java, Ruby, Node.js, PHP, and Python.
App Service is a fully-managed, turn-key platform. Azure takes care of infrastructure maintenance and load balancing. App Service easily integrates with other Azure services, such as API Management, Queue Storage, Azure Active Directory (AD), Cosmos DB, and Application Insights. Microsoft suggests evaluating the following four criteria when considering Azure App Services:
- You want to deploy a web application that’s accessible through the Internet.
- You want to automatically scale your web application according to demand without needing to redeploy.
- You don’t want to maintain server infrastructure (including software updates).
- You don’t need any machine-level customizations on the servers that host your web application.
There are currently four types of Azure App Services, which are Web Apps, Web Apps for Containers, Mobile Apps, and API Apps. The application in this post will use the Azure Web Apps for the Angular browser-based UI and Azure API Apps for the C# microservices.
MongoDB Atlas
Each of the C# microservices has a separate MongoDB database. In the Cloud, the services use MongoDB Atlas, a secure, highly-available, and scalable cloud-hosted MongoDB service. Cloud-based databases, like Atlas, are often referred to as Database as a Service (DBaaS). Atlas is a Cloud-based alternative to traditional on-premise databases, as well as to equivalent CSP-based solutions, such as Amazon DynamoDB, GCP Cloud Bigtable, and Azure Cosmos DB.
Atlas is an example of a SaaS provider that offers a single service or a small set of closely related services, as an alternative to the big CSPs’ equivalent services. Similar providers in this category include CloudAMQP (RabbitMQ as a Service), ClearDB (MySQL DBaaS), Akamai (Content Delivery Network), and Oracle Database Cloud Service (Oracle Database, RAC, and Exadata as a Service). Many of these providers, such as Atlas, are themselves hosted on AWS or other CSPs.
There are three pricing levels for MongoDB Atlas: Free, Essential, and Professional. To follow along with this post, the Free level is sufficient. According to MongoDB, with the Free account level, you get 512 MB of storage with shared RAM, a highly-available 3-node replica set, end-to-end encryption, secure authentication, fully managed upgrades, monitoring and alerts, and a management API. Atlas provides the ability to upgrade your account and CSP specifics at any time.
Once you register for an Atlas account, you will be able to log into Atlas, set up your users, whitelist your IP addresses for security, and obtain necessary connection information. You will need this connection information in the next section to configure the Azure API Apps.
With the Free Atlas tier, you can view detailed Metrics about database cluster activity. However, the Free tier does not include access to Real-Time data insights or the ability to use the Data Explorer to view your data through the Atlas UI.
Azure API Apps
The example application’s API consists of two RESTful microservices built with C#: the RestaurantMenu service and the RestaurantOrder service. Both services are deployed as Azure API Apps. API Apps is a fully-managed platform. Azure performs OS patching, capacity provisioning, server management, and load balancing.
Microsoft Visual Studio has done an excellent job providing Extensions to make cloud integration a breeze. I will be using Visual Studio Tools for Azure in this post. Similar to how you create a Publish Profile for deploying applications to Internet Information Services (IIS), you create a Publish Profile for Azure App Services. Using the step-by-step user interface, you create a Microsoft Azure App Service Web Deploy Publish Profile for each service. To create a new Profile, choose the Microsoft Azure App Service Target.
You must be connected to your Azure account to create the Publish Profile. Give the service an App Name, choose your Subscription, and select or create a Resource Group and an App Service Plan.
The App Service Plan defines the Location and Size for your API App container; these will determine the cost of the compute. I suggest putting the two API Apps and the Web App in the same location, in this case, East US.
The Publish Profile is now available for deploying the services to Azure. No command line interaction is required. The services can be built and published to Azure with a single click from within Visual Studio.
Configuration Management
Azure App Service is highly configurable. For example, each API App requires a different configuration, in each environment, to connect to different instances of MongoDB Atlas databases. For security, sensitive Atlas credentials are not stored in the source code. The Atlas URL and sensitive credentials are stored in App Settings on Azure. For this post, the settings were input directly into the Azure UI, as shown below. You will need to input your own Atlas URL and credentials.
The compiled C# services expect certain environment variables to be present at runtime to connect to MongoDB Atlas. These are provided through Azure’s App Settings. Access to the App Settings in Azure should be tightly controlled through Azure AD and fine-grained Azure role-based access control (RBAC).
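As a minimal sketch of how a service might resolve these values at runtime, the example below reads a hypothetical MongoDbAtlasUri App Setting. For .NET Framework apps, Azure App Settings override matching appSettings keys in web.config at runtime (and are also surfaced as environment variables), so the credentials never need to appear in source control. The setting name and database names are assumptions, not the actual keys used by the services.

```csharp
// Hypothetical example: resolving the MongoDB Atlas connection string from
// Azure App Settings at runtime. The "MongoDbAtlasUri" key is an assumption.
using System.Configuration;
using MongoDB.Driver;

public static class MongoDbFactory
{
    public static IMongoDatabase GetDatabase(string databaseName)
    {
        // In Azure, App Settings override <appSettings> values in web.config,
        // so sensitive Atlas credentials stay out of source control.
        // A typical Atlas URI: mongodb+srv://<user>:<password>@<cluster-host>/
        var connectionString = ConfigurationManager.AppSettings["MongoDbAtlasUri"];

        var client = new MongoClient(connectionString);
        return client.GetDatabase(databaseName);
    }
}
```

A controller could then call MongoDbFactory.GetDatabase("RestaurantMenu") or MongoDbFactory.GetDatabase("RestaurantOrder"), with each environment’s App Settings pointing at a different Atlas cluster or database user.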
CORS
If you want to deploy the application from this post to Azure, there is one code change you will need to make to each service, which deals with Cross-Origin Resource Sharing (CORS). The services are currently configured to only accept traffic from my temporary Angular UI App Service’s URL. You will need to adjust the CORS configuration in the \App_Start\WebApiConfig.cs file in each service to match your own App Service’s new URL.
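For reference, a typical ASP.NET Web API 2 CORS configuration (using the Microsoft.AspNet.WebApi.Cors package) looks something like the sketch below. The origin URL is a placeholder for your own Web App’s URL, and the rest of the author’s WebApiConfig.cs is not reproduced here.

```csharp
// App_Start/WebApiConfig.cs (sketch) - restrict CORS to your own Web App's URL.
using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Replace the origin with the URL of your deployed Angular Web App.
        var cors = new EnableCorsAttribute(
            origins: "https://your-angular-web-app.azurewebsites.net",
            headers: "*",
            methods: "*");
        config.EnableCors(cors);

        // Standard Web API routing.
        config.MapHttpAttributeRoutes();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}
```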
Angular UI Web App
The Angular UI application will be deployed as an Azure Web App, one of four types of Azure App Services, mentioned previously. According to Microsoft, Web Apps allow Developers to program in their favorite languages, including .NET, Java, Node.js, PHP, and Python on Windows or .NET Core, Node.js, PHP or Ruby on Linux. Web Apps is a fully-managed platform. Azure performs OS patching, capacity provisioning, server management, and load balancing.
Using the Azure Portal, setting up a new Web App for the Angular UI is simple.
Provide an App Name, Subscription, Resource Group, OS Type, and select whether or not you want Application Insights enabled for the Web App.
Although an amazing IDE for web development, WebStorm lacks some of the direct integrations with Azure, AWS, and Google available in other IDEs, like Visual Studio. Since the Angular application was developed in WebStorm on a Mac, we will take advantage of Azure App Service’s Continuous Deployment feature.
Azure Web Apps can be deployed automatically from most common source code management platforms, including Visual Studio Team Services (VSTS), GitHub, Bitbucket, OneDrive, and local Git repositories.
For this post, I chose GitHub. To configure deployment from GitHub, select the GitHub Account, Organization, Project, and Branch from which Azure will deploy the Angular Web App.
When you configure GitHub in the Azure Portal, Azure becomes an authorized OAuth App on the GitHub account. Azure creates a Webhook, which fires each time files are pushed (git push) to the dist branch of the GitHub project’s repository.
Using the ng build --dev --env=prod command, the Angular UI application must first be transpiled from TypeScript to JavaScript and bundled for deployment. The ng build command can be run from within WebStorm or from the command line.
The --env=prod flag ensures that the Production environment configuration, containing the correct Azure API endpoints, is transpiled into the build. This configuration is stored in the \src\environments\environment.prod.ts file, shown below. You will need to update these two endpoints to your own endpoints from the two API Apps you previously deployed to Azure.
Optionally, the code can be optimized for Production by replacing the --dev flag with the --prod flag. Amongst other optimizations, the Production version of the code is uglified using UglifyJS. Note the difference in the build files shown below for Production, as compared to the files above for Development.
Since I chose GitHub for deployment to Azure, I used Git to manually push the local build files to the dist branch on GitHub.
Every time the webhook fires, Azure pulls and deploys the new build, overwriting the previously deployed version, as shown below.
To stage new code and not immediately overwrite running code, Azure has a feature called Deployment slots. According to Microsoft, Deployment slots allow Developers to deploy different versions of Web Apps to different URLs. You can test a certain version and then swap content and configuration between slots. This is likely how you would eventually deploy your code to Production.
Up and Running
Below, the three Azure App Services are shown in the Azure Portal, successfully deployed and running. Note their Status, App Type, Location, and Subscription.
Before exploring the deployed UI, the two Azure API Apps should be tested using Postman. Once the API is confirmed to be working properly, the RestaurantMenu database is populated by making an HTTP POST request to the menuitems API, provided by the RestaurantMenuService Azure API App. When the HTTP POST request is made, the RestaurantMenuService stores a set of menu items in the RestaurantMenu Atlas MongoDB database, in the menus collection.
The Angular UI, the RestaurantWeb Azure Web App, is viewed by using the URL provided in the Web App’s Overview tab. The menu items displayed in the drop-down are supplied by an HTTP GET request to the menuitems API, provided by the RestaurantMenuService Azure API App.
Your order is placed through an HTTP POST request to the orders API, provided by the RestaurantOrderService Azure API App. The RestaurantOrderService stores the order in the RestaurantOrder Atlas MongoDB database, in the orders collection. The order details are returned in the response body and displayed in the UI.
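For readers curious what the server side of this flow might look like, here is a hedged, hypothetical sketch of a Web API 2 POST action that persists an order document to the orders collection. The Order model, its fields, and the placeholder connection string are assumptions for illustration, not the author’s actual code.

```csharp
// Hypothetical sketch of order placement: the HTTP POST body is deserialized
// into an Order and inserted into the "orders" collection in Atlas.
using System.Web.Http;
using MongoDB.Bson;
using MongoDB.Driver;

public class Order
{
    public ObjectId Id { get; set; }          // generated by the driver on insert
    public string MenuItemDescription { get; set; }
    public int Quantity { get; set; }
}

public class OrdersController : ApiController
{
    // Placeholder connection string; in practice resolved from Azure App Settings.
    private readonly IMongoCollection<Order> _orders =
        new MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net")
            .GetDatabase("RestaurantOrder")
            .GetCollection<Order>("orders");

    // POST api/orders - stores the order and echoes it back to the UI
    public IHttpActionResult Post(Order order)
    {
        _orders.InsertOne(order);
        return Ok(order);
    }
}
```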
Once you have the development version of the application successfully up and running on Atlas and Azure, you can start to configure, test, and deploy additional application versions, as App Services, into higher Azure environments, such as Test, Performance, and eventually, Production.
Monitoring
Azure provides in-depth monitoring and performance analytics capabilities for your deployed applications with services like Application Insights. With Azure’s monitoring resources, you can monitor the live performance of your application and set up alerts for application errors and performance issues. Real-time monitoring is useful when conducting performance tests. Developers can analyze the response time of each API method and optimize the application, the Azure configuration, and the MongoDB databases before deploying to Production.
Conclusion
This post demonstrated how the Cloud has shifted application development to a Cloud-first model. Future posts will demonstrate how an application, such as the app featured in this post, is secured, and how it is continuously built, tested, and deployed, using DevOps practices.
All opinions in this post are my own, and not necessarily the views of my current or past employers or their clients.