Voice and text-based conversational interfaces, such as chatbots, have recently seen tremendous growth in popularity. Much of this growth can be attributed to leading Cloud providers, such as Google, Amazon, and Microsoft, who now provide affordable, end-to-end development, machine learning-based training, and hosting platforms for conversational interfaces.
Cloud-based machine learning services improve a conversational interface’s ability to interpret user intent with greater accuracy. However, returning relevant responses to user inquiries also requires that interfaces have access to rich informational datastores, and the ability to quickly and efficiently query and analyze that data.
In this two-part post, we will enhance the capabilities of a voice and text-based conversational interface by integrating it with a search and analytics engine. By interfacing an Action for Google Assistant conversational interface with Elasticsearch, we will improve the Action’s ability to provide relevant results to the end-user. Instead of querying a traditional database for static responses to user intent, our Action will access a Near Realtime (NRT) Elasticsearch index of searchable documents. The Action will leverage Elasticsearch’s advanced search and analytics capabilities to optimize and shape user responses, based on their intent.
Action Preview
Here is a brief YouTube video preview of the final Action for Google Assistant, integrated with Elasticsearch, running on an Apple iPhone.
Google Technologies
The high-level architecture of our search engine-enhanced Action for Google Assistant will look as follows.
Here is a brief overview of the key technologies we will incorporate into our architecture.
Actions on Google
According to Google, Actions on Google is the platform for developers to extend the Google Assistant. Actions on Google is a web-based platform that provides a streamlined user-experience to create, manage, and deploy Actions. We will use the Actions on Google platform to develop our Action in this post.
Dialogflow
According to Google, Dialogflow is an enterprise-grade NLU platform that makes it easy for developers to design and integrate conversational user interfaces into mobile apps, web applications, devices, and bots. Dialogflow is powered by Google’s machine learning for Natural Language Processing (NLP).
Google Cloud Functions
Google Cloud Functions are part of Google’s event-driven, serverless compute platform, part of the Google Cloud Platform (GCP). Google Cloud Functions are analogous to Amazon’s AWS Lambda and Azure Functions. Features include automatic scaling, high availability, fault tolerance, no servers to provision, manage, patch or update, and a payment model based on the function’s execution time.
Google Kubernetes Engine
Kubernetes Engine is a managed, production-ready environment, available on GCP, for deploying containerized applications. According to Google, Kubernetes Engine is a reliable, efficient, and secure way to run Kubernetes clusters in the Cloud.
Elasticsearch
Elasticsearch is a leading, distributed, RESTful search and analytics engine. Elasticsearch is a product of Elastic, the company behind the Elastic Stack, which includes Elasticsearch, Kibana, Beats, Logstash, X-Pack, and Elastic Cloud. Elasticsearch provides a distributed, multitenant-capable, full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is similar to Apache Solr in terms of features and functionality. Both Solr and Elasticsearch are based on Apache Lucene.
Other Technologies
In addition to the major technologies highlighted above, the project also relies on the following:
- Google Container Registry – As an alternative to Docker Hub, we will store the Spring Boot API service’s Docker Image in Google Container Registry, making deployment to GKE a breeze.
- Google Cloud Deployment Manager – Google Cloud Deployment Manager allows users to specify all the resources needed for an application in a declarative format, using YAML. The Elastic Stack will be deployed with Deployment Manager.
- Google Compute Engine – Google Compute Engine delivers scalable, high-performance virtual machines (VMs) running in Google’s data centers, on their worldwide fiber network.
- Google Stackdriver – Stackdriver aggregates metrics, logs, and events from our Cloud-based project infrastructure, for troubleshooting. We are also integrating Stackdriver Logging for Winston into our Cloud Function for fast application feedback.
- Google Cloud DNS – Hosts the primary project domain and subdomains for the search engine and API. Google Cloud DNS is a scalable, reliable and managed authoritative Domain Name System (DNS) service running on the same infrastructure as Google.
- Google VPC Network Firewall – Firewall rules provide fine-grain, secure access controls to our API and search engine. We will require several firewall port openings to talk to the Elastic Stack.
- Spring Boot – Pivotal’s Spring Boot project makes it easy to create stand-alone, production-grade Spring-based Java applications, such as our Spring Boot service.
- Spring Data Elasticsearch – Pivotal Software’s Spring Data Elasticsearch project provides easy integration to Elasticsearch from our Java-based Spring Boot service.
Demonstration
To demonstrate an Action for Google Assistant with search engine integration, we need an index of content to search. In this post, we will build an informational Action, the Programmatic Ponderings Search Action, that responds to a user’s interest in certain technical topics by returning post suggestions from the Programmatic Ponderings blog. For this demonstration, I have indexed the last two years’ worth of blog posts into Elasticsearch, using the ElasticPress WordPress plugin.
Source Code
All open-sourced code for this post can be found on GitHub in two repositories, one for the Spring Boot Service and one for the Action for Google Assistant. Code samples in this post are displayed as GitHub Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.
Development Process
This post will focus on the development and integration of the Action for Google Assistant with Elasticsearch, via a Google Cloud Function, Kubernetes Engine, and the Spring Boot API service. The post is not intended to be a general how-to on developing for Actions for Google Assistant, Google Cloud Platform, Elasticsearch, or WordPress.
Building and integrating the Action will involve the following steps:
- Design the Action’s conversation model;
- Provision the Elastic Stack on Google Compute Engine using Deployment Manager;
- Create an Elasticsearch index of blog posts;
- Provision the Kubernetes cluster on GCP with GKE;
- Develop and deploy the Spring Boot API service to Kubernetes;
Covered in Part Two of the Post:
- Create a new Actions project using Actions on Google;
- Develop the Action’s Intents using Dialogflow;
- Develop the Cloud Function, then deploy and test it on GCP;
Let’s explore each step in more detail.
Conversational Model
The conversational model design of the Programmatic Ponderings Search Action for Google Assistant gives the user the option to invoke the Action in two ways, with or without an intent. Below on the left, we see an example of an invocation of the Action – ‘Talk to Programmatic Ponderings’. Google Assistant then prompts the user for more information (intent) – ‘What topic are you interested in reading about?’.
Below on the left, we see an invocation of the Action, which includes the intent – ‘Ask Programmatic Ponderings to find a post about Kubernetes’. Google Assistant will respond directly, both verbally and visually with the most relevant post.
When a user requests a single result, for example, ‘Find a post about Docker’, Google Assistant will include Simple Response, Basic Card, and Suggestion Chip response types for devices with a display. This is shown in the center, above. The user may continue to ask for additional facts or choose to cancel the Action at any time.
When a user requests multiple results, for example, ‘I’m interested in Docker’, Google Assistant will include Simple Response, List, and Suggestion Chip response types for devices with a display. An example of a List Response is shown in the center of the previous set of screengrabs, above. The user will receive up to six results in the list, with a relevance score of 1.0 or greater. The user may choose to click on any of the post results in the list, which will initiate a new search using the post’s unique ID, as shown on the right, in the first set of screengrabs, above.
The conversational model also understands a request for help and to cancel the interaction.
GCP Account and Project
The following steps assume you have an existing GCP account and you have created a project on GCP to house the Cloud Function, GKE Cluster, and Elastic Stack on Google Compute Engine. The post also assumes that you have the latest Google Cloud SDK installed on your development machine, and have authenticated your identity from the command line (gist).
# Authenticate with the Google Cloud SDK
export PROJECT_ID="<your_project_id>"
gcloud beta auth login
gcloud config set project ${PROJECT_ID}

# Update components or new runtime nodejs8 may be unknown
gcloud components update
Elasticsearch on GCP
There are a number of options available to host Elasticsearch. Elastic, the company behind Elasticsearch, offers the Elasticsearch Service, a fully managed, scalable, and reliable service on AWS and GCP. AWS also offers their own managed Elasticsearch Service. I found some limitations with AWS’ Elasticsearch Service, which made integration with Spring Data Elasticsearch difficult. According to AWS, the service supports HTTP but does not support TCP transport.
For this post, we will stand up the Elastic Stack on GCP using an offering from the Google Cloud Platform Marketplace. A well-known provider of packaged applications for multiple Cloud platforms, Bitnami, offers the ELK Stack (the previous name for the Elastic Stack), running on Google Compute Engine.
GCP Marketplace Solutions are deployed using the Google Cloud Deployment Manager. The Bitnami ELK solution is a complete stack with all the necessary software and software-defined Cloud infrastructure to securely run Elasticsearch. You select the instance’s zone(s), machine type, boot disk size, and security and networking configurations. Using that configuration, the Deployment Manager will deploy the solution and provide you with information and credentials for accessing the Elastic Stack. For this demo, we will configure a minimally-sized, single VM instance to run the Elastic Stack.
Below we see the Bitnami ELK stack’s components being created on GCP, by the Deployment Manager.
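Once the deployment completes, the Cloud SDK can confirm what the Deployment Manager created. Below is a small sketch; the deployment name elk-1 is a hypothetical value, suggested by the firewall rule names used later in this post, so substitute the name you chose when launching the solution.

```shell
# Hypothetical deployment name, set when launching the Bitnami ELK
# solution from the GCP Marketplace
DEPLOYMENT="elk-1"

# List all Deployment Manager deployments in the current project
gcloud deployment-manager deployments list

# Show the resources (VM instance, disks, firewall rules) the solution created
gcloud deployment-manager deployments describe ${DEPLOYMENT}
```

The describe output also surfaces the generated credentials for Kibana, which we will need shortly.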
Indexed Content
With the Elastic Stack fully provisioned, I then configured WordPress to index the last two years of Programmatic Ponderings blog posts to Elasticsearch on GCP. If you want to follow along with this post and need content to index, there is plenty of open source and public domain indexable content available on the Internet – books, movie lists, government and weather data, online catalogs of products, and so forth. Anything in a document database is directly indexable in Elasticsearch. Elastic even provides a set of index samples, available on their GitHub site.
Firewall Ports for Elasticsearch
The Deployment Manager opens up firewall ports 80 and 443. To index the WordPress posts, I also had to open port 9200. According to Elastic, Elasticsearch uses port 9200 for its RESTful API, which communicates using JSON over HTTP. For security, I locked down this firewall opening to my WordPress server’s address as the source (gist).
SOURCE_IP=<wordpress_ip_address>
PORT=9200

gcloud compute \
  --project=wp-search-bot \
  firewall-rules create elk-1-tcp-${PORT} \
  --description=elk-1-tcp-${PORT} \
  --direction=INGRESS \
  --priority=1000 \
  --network=default \
  --action=ALLOW \
  --rules=tcp:${PORT} \
  --source-ranges=${SOURCE_IP} \
  --target-tags=elk-1-tcp-${PORT}
The two existing firewall rules for ports 80 and 443 should also be locked down to your own IP address as the source. Common Elasticsearch ports are constantly scanned by hackers, who will quickly hijack your Elasticsearch contents and hold them for ransom, in addition to deleting your indexes. Similar tactics are used on well-known and unprotected ports for many platforms, including Redis, MySQL, PostgreSQL, MongoDB, and Microsoft SQL Server.
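As a sketch of that lockdown, assuming the Bitnami deployment created rules named elk-1-tcp-80 and elk-1-tcp-443 (hypothetical names – confirm yours first with the list command) and using an example source address:

```shell
# Example source address; replace with your own public IP address
MY_IP="203.0.113.25/32"

# Confirm the actual names of the rules the deployment created
gcloud compute firewall-rules list --filter="name~'elk'"

# Restrict the existing HTTP and HTTPS rules to your own IP address
gcloud compute firewall-rules update elk-1-tcp-80 --source-ranges=${MY_IP}
gcloud compute firewall-rules update elk-1-tcp-443 --source-ranges=${MY_IP}
```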
Kibana
Once the posts are indexed, the best way to view the resulting Elasticsearch documents is through Kibana, which is included as part of the Bitnami solution. Below we see approximately thirty posts, spread out across two years.
Each Elasticsearch document, representing an indexed WordPress blog post, contains over 125 fields of information. Fields include a unique post ID, post title, content, publish date, excerpt, author, URL, and so forth. All these fields are exposed through Elasticsearch’s API, and as we will see, will be available to our Spring Boot service to query.
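To see those fields for yourself, you can also query the index directly over Elasticsearch’s REST API. A minimal sketch follows; the host and the ElasticPress-generated index name are assumptions, to be replaced with your own values.

```shell
# Hypothetical values; substitute your VM's address and your index name
ES_HOST="http://elk.example.com:9200"
ES_INDEX="programmaticponderings-post-1"

# Retrieve one document, returning only a few of the 125+ fields
RESPONSE=$(curl -s --max-time 10 \
  "${ES_HOST}/${ES_INDEX}/_search?size=1&_source=ID,post_title,post_date,guid")
echo "${RESPONSE}"
```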
Spring Boot Service
To ensure decoupling between the Action for Google Assistant and Elasticsearch, we will expose a RESTful search API, written in Java using Spring Boot and Spring Data Elasticsearch. The API will expose a tailored set of flexible endpoints to the Action. Google’s machine learning services will ensure our conversational model is trained to understand user intent. The API’s query algorithm and Elasticsearch’s rich Lucene-based search features will ensure the most relevant results are returned. We will host the Spring Boot service on Google Kubernetes Engine (GKE).
We will use a Spring REST Controller to expose our RESTful web service’s resources to our Action’s Cloud Function. The current Spring Boot service contains five /elastic resource endpoints, exposed by the ElasticsearchPostController class. Of those five, two endpoints will be called by our Action in this demo: the /{id} endpoint and the /dismax-search endpoint. The endpoints can be seen using the Swagger UI; our Spring Boot service implements SpringFox, which has the option to expose the interactive Swagger API UI.
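Assuming SpringFox’s default, uncustomized paths, the two documentation endpoints would be found at the following locations (the base URL here is taken from the API examples later in this post):

```shell
# Base URL of the deployed service (from the examples in this post)
API_BASE="http://api.chatbotzlabs.com/blog"

# SpringFox defaults (assumptions; they may differ if customized):
#   /v2/api-docs       machine-readable Swagger 2.0 definition
#   /swagger-ui.html   interactive Swagger UI
DOCS_URL="${API_BASE}/v2/api-docs"
RESPONSE=$(curl -s --max-time 10 "${DOCS_URL}")
echo "${RESPONSE}"
```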
The /{id} endpoint accepts a unique post ID as a path variable in the API call and returns a single ElasticsearchPost object, wrapped in a Map object and serialized to a JSON payload (gist).
@RequestMapping(value = "/{id}")
@ApiOperation(value = "Returns a post by id")
public Map<String, Optional<ElasticsearchPost>> findById(@PathVariable("id") long id) {
    Optional<ElasticsearchPost> elasticsearchPost = elasticsearchPostRepository.findById(id);
    Map<String, Optional<ElasticsearchPost>> elasticsearchPostMap = new HashMap<>();
    elasticsearchPostMap.put("ElasticsearchPosts", elasticsearchPost);
    return elasticsearchPostMap;
}
Below we see an example response from the Spring Boot service to an API call to the /{id} endpoint, for post ID 22141. Since we are returning a single post, based on ID, the relevance score will always be 0.0 (gist).
# http http://api.chatbotzlabs.com/blog/api/v1/elastic/22141

HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Mon, 17 Sep 2018 23:15:01 GMT
Transfer-Encoding: chunked

{
    "ElasticsearchPosts": {
        "ID": 22141,
        "_score": 0.0,
        "guid": "https://programmaticponderings.com/?p=22141",
        "post_date": "2018-04-13 12:45:19",
        "post_excerpt": "Learn to manage distributed applications, spanning multiple Kubernetes environments, using Istio on GKE.",
        "post_title": "Managing Applications Across Multiple Kubernetes Environments with Istio: Part 1"
    }
}
This controller’s /{id} endpoint relies on a method exposed by the ElasticsearchPostRepository interface. The ElasticsearchPostRepository is a Spring Data Repository, which extends ElasticsearchRepository. The repository exposes the findById() method, which returns a single instance of the type ElasticsearchPost from Elasticsearch (gist).
package com.example.elasticsearch.repository;

import com.example.elasticsearch.model.ElasticsearchPost;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

public interface ElasticsearchPostRepository extends ElasticsearchRepository<ElasticsearchPost, Long> {
}
The ElasticsearchPost class is annotated as an Elasticsearch Document, similar to other Spring Data Document annotations, such as Spring Data MongoDB’s. The ElasticsearchPost class is instantiated to hold the deserialized JSON documents in which Elasticsearch stores its indexed data (gist).
package com.example.elasticsearch.model;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

import java.io.Serializable;

@JsonIgnoreProperties(ignoreUnknown = true)
@Document(indexName = "<elasticsearch_index_name>", type = "post")
public class ElasticsearchPost implements Serializable {

    @Id
    @JsonProperty("ID")
    private long id;

    @JsonProperty("_score")
    private float score;

    @JsonProperty("post_title")
    private String title;

    @JsonProperty("post_date")
    private String publishDate;

    @JsonProperty("post_excerpt")
    private String excerpt;

    @JsonProperty("guid")
    private String url;

    // Setters removed for brevity...
}
Dis Max Query
The second API endpoint called by our Action is the /dismax-search endpoint. We use this endpoint to search for a particular post topic, such as ‘Docker’. This type of search, as opposed to the Spring Data Repository method used by the /{id} endpoint, requires the use of an ElasticsearchTemplate. The ElasticsearchTemplate allows us to form more complex Elasticsearch queries than is possible using an ElasticsearchRepository. Below, the /dismax-search endpoint accepts four request parameters in the API call: the topic to search for, the starting point and size of the response to return, and the minimum relevance score (gist).
@RequestMapping(value = "/dismax-search")
@ApiOperation(value = "Performs dismax search and returns a list of posts containing the value input")
public Map<String, List<ElasticsearchPost>> dismaxSearch(@RequestParam("value") String value,
                                                         @RequestParam("start") int start,
                                                         @RequestParam("size") int size,
                                                         @RequestParam("minScore") float minScore) {
    List<ElasticsearchPost> elasticsearchPosts = elasticsearchService.dismaxSearch(value, start, size, minScore);
    Map<String, List<ElasticsearchPost>> elasticsearchPostMap = new HashMap<>();
    elasticsearchPostMap.put("ElasticsearchPosts", elasticsearchPosts);
    return elasticsearchPostMap;
}
The logic to create and execute the ElasticsearchTemplate is handled by the ElasticsearchService class. The ElasticsearchPostController calls the ElasticsearchService, which handles querying Elasticsearch and returning a list of ElasticsearchPost objects to the controller. The dismaxSearch method, called by the /dismax-search endpoint’s method, constructs the ElasticsearchTemplate instance used to build the request to Elasticsearch’s RESTful API (gist).
public List<ElasticsearchPost> dismaxSearch(String value, int start, int size, float minScore) {
    QueryBuilder queryBuilder = getQueryBuilder(value);
    Client client = elasticsearchTemplate.getClient();
    SearchResponse response = client.prepareSearch()
            .setQuery(queryBuilder)
            .setSize(size)
            .setFrom(start)
            .setMinScore(minScore)
            .addSort("_score", SortOrder.DESC)
            .setExplain(true)
            .execute()
            .actionGet();

    List<SearchHit> searchHits = Arrays.asList(response.getHits().getHits());

    ObjectMapper mapper = new ObjectMapper();
    List<ElasticsearchPost> elasticsearchPosts = new ArrayList<>();

    searchHits.forEach(hit -> {
        try {
            elasticsearchPosts.add(mapper.readValue(hit.getSourceAsString(), ElasticsearchPost.class));
            elasticsearchPosts.get(elasticsearchPosts.size() - 1).setScore(hit.getScore());
        } catch (IOException e) {
            e.printStackTrace();
        }
    });

    return elasticsearchPosts;
}
To obtain the most relevant search results, we will use Elasticsearch’s Dis Max Query combined with the Match Phrase Query. Elastic describes the Dis Max Query as:
‘a query that generates the union of documents produced by its subqueries, and that scores each document with the maximum score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries.’
In short, the Dis Max Query allows us to query and weight (boost importance) multiple indexed fields, across all documents. The Match Phrase Query analyzes the text (our topic) and creates a phrase query out of the analyzed text.
After some experimentation, I found valid search results were returned by applying greater weighting (boost) to the post’s title and excerpt, followed by the post’s tags and categories, and finally, the actual text of the post. I also limited results to a minimum score of 1.0. Just because a word or phrase is repeated in a post, doesn’t mean it is indicative of the post’s subject matter. Setting a minimum score attempts to help ensure the requested topic is featured more prominently in the resulting post or posts. Increasing the minimum score will decrease the number of search results, but theoretically, increase their relevance (gist).
private QueryBuilder getQueryBuilder(String value) {
    value = value.toLowerCase();
    return QueryBuilders.disMaxQuery()
            .add(matchPhraseQuery("post_title", value).boost(3))
            .add(matchPhraseQuery("post_excerpt", value).boost(3))
            .add(matchPhraseQuery("terms.post_tag.name", value).boost(2))
            .add(matchPhraseQuery("terms.category.name", value).boost(2))
            .add(matchPhraseQuery("post_content", value).boost(1));
}
Below we see the results of a /dismax-search API call to our service, querying for posts about the topic ‘Istio’, with a minimum score of 2.0. The search resulted in a serialized JSON payload containing three ElasticsearchPost objects (gist).
http http://api.chatbotzlabs.com/blog/api/v1/elastic/dismax-search?minScore=2&size=3&start=0&value=Istio

HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Tue, 18 Sep 2018 03:50:35 GMT
Transfer-Encoding: chunked

{
    "ElasticsearchPosts": [
        {
            "ID": 21867,
            "_score": 5.91989,
            "guid": "https://programmaticponderings.com/?p=21867",
            "post_date": "2017-12-22 16:04:17",
            "post_excerpt": "Learn to deploy and configure Istio on Google Kubernetes Engine (GKE).",
            "post_title": "Deploying and Configuring Istio on Google Kubernetes Engine (GKE)"
        },
        {
            "ID": 22313,
            "_score": 3.6616292,
            "guid": "https://programmaticponderings.com/?p=22313",
            "post_date": "2018-04-17 07:01:38",
            "post_excerpt": "Learn to manage distributed applications, spanning multiple Kubernetes environments, using Istio on GKE.",
            "post_title": "Managing Applications Across Multiple Kubernetes Environments with Istio: Part 2"
        },
        {
            "ID": 22141,
            "_score": 3.6616292,
            "guid": "https://programmaticponderings.com/?p=22141",
            "post_date": "2018-04-13 12:45:19",
            "post_excerpt": "Learn to manage distributed applications, spanning multiple Kubernetes environments, using Istio on GKE.",
            "post_title": "Managing Applications Across Multiple Kubernetes Environments with Istio: Part 1"
        }
    ]
}
Understanding Relevance Scoring
When returning search results, such as in the example above, the top result is the one with the highest score. The highest score should denote the most relevant result to the search query. According to Elastic, in their document titled, The Theory Behind Relevance Scoring, scoring is explained this way:
‘Lucene (and thus Elasticsearch) uses the Boolean model to find matching documents, and a formula called the practical scoring function to calculate relevance. This formula borrows concepts from term frequency/inverse document frequency and the vector space model but adds more-modern features like a coordination factor, field length normalization, and term or query clause boosting.’
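Although the quoted passage describes the classic TF/IDF-based practical scoring function, the explain output below shows that this cluster is actually using Lucene’s newer BM25 similarity, the default since Elasticsearch 5.x. Its two main components, written as they appear in the output (with N = docCount, n = docFreq, and f = termFreq), are:

```latex
% Components of Lucene's BM25 similarity, as shown in the explain output
\mathrm{idf} = \log\!\left(1 + \frac{N - n + 0.5}{n + 0.5}\right)
\qquad
\mathrm{tfNorm} = \frac{f \cdot (k_1 + 1)}
  {f + k_1 \cdot \left(1 - b + b \cdot \frac{\mathrm{fieldLength}}{\mathrm{avgFieldLength}}\right)}
\qquad
\mathrm{score} = \mathrm{boost} \cdot \mathrm{idf} \cdot \mathrm{tfNorm}
```

With the default parameters k1 = 1.2 and b = 0.75, which also appear verbatim in the explain output below.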
In order to better understand this technical explanation of relevance scoring, it is much easier to see it applied to our example. Note the first search result above, post ID 21867, has the highest score, 5.91989. Knowing that we are searching five fields (title, excerpt, tags, categories, and content), and boosting certain fields more than others, how was this score determined? Conveniently, the SearchRequestBuilder class exposes the setExplain method, called in the dismaxSearch method, shown above. By passing a boolean value of true to the setExplain method, we are able to see the detailed scoring algorithms used by Elasticsearch for the top result, shown above (gist).
5.9198895 = max of:
  5.8995476 = weight(post_title:istio in 3) [PerFieldSimilarity], result of:
    5.8995476 = score(doc=3,freq=1.0 = termFreq=1.0), product of:
      3.0 = boost
      1.6739764 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:
        1.0 = docFreq
        7.0 = docCount
      1.1747572 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from:
        1.0 = termFreq=1.0
        1.2 = parameter k1
        0.75 = parameter b
        11.0 = avgFieldLength
        7.0 = fieldLength
  5.9198895 = weight(post_excerpt:istio in 3) [PerFieldSimilarity], result of:
    5.9198895 = score(doc=3,freq=1.0 = termFreq=1.0), product of:
      3.0 = boost
      1.6739764 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:
        1.0 = docFreq
        7.0 = docCount
      1.1788079 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from:
        1.0 = termFreq=1.0
        1.2 = parameter k1
        0.75 = parameter b
        12.714286 = avgFieldLength
        8.0 = fieldLength
  3.3479528 = weight(terms.post_tag.name:istio in 3) [PerFieldSimilarity], result of:
    3.3479528 = score(doc=3,freq=1.0 = termFreq=1.0), product of:
      2.0 = boost
      1.6739764 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:
        1.0 = docFreq
        7.0 = docCount
      1.0 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from:
        1.0 = termFreq=1.0
        1.2 = parameter k1
        0.75 = parameter b
        16.0 = avgFieldLength
        16.0 = fieldLength
  2.52272 = weight(post_content:istio in 3) [PerFieldSimilarity], result of:
    2.52272 = score(doc=3,freq=100.0 = termFreq=100.0), product of:
      1.1631508 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:
        2.0 = docFreq
        7.0 = docCount
      2.1688676 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from:
        100.0 = termFreq=100.0
        1.2 = parameter k1
        0.75 = parameter b
        2251.1428 = avgFieldLength
        2840.0 = fieldLength
What this detail shows us is that of the five fields searched, the term ‘Istio’ was located in four of the five fields (all except ‘categories’). Using the scoring function described by Elasticsearch, and taking into account our boost values, we see that the match on the post’s ‘excerpt’ field achieved the highest score of 5.9198895 (boost of 3.0 * idf of 1.6739764 * tfNorm of 1.1788079).
Being able to view the scoring explanation helps us tune our search results. For example, according to the details, the term ‘Istio’ appeared 100 times (termFreq=100.0) in the main body of the post (the ‘content’ field). We might ask ourselves if we are giving enough relevance to the content as opposed to other fields. We might choose to increase the boost on the ‘content’ field, or decrease the boosts on other fields, to produce higher-quality search results.
Google Kubernetes Engine
With the Elastic Stack running on Google Compute Engine, and the Spring Boot API service built, we can now provision a Kubernetes cluster to run our Spring Boot service. The service will sit between our Action’s Cloud Function and Elasticsearch. We will use Google Kubernetes Engine (GKE) to manage our Kubernetes cluster on GCP. A GKE cluster is a managed group of uniform VM instances for running Kubernetes. The VMs are managed by Google Compute Engine, which delivers virtual machines running in Google’s data centers, on their worldwide fiber network.
A GKE cluster can be provisioned using GCP’s Cloud Console or using the Cloud SDK, Google’s command-line interface for Google Cloud Platform products and services. I prefer using the CLI, which helps enable DevOps automation through tools like Jenkins and Travis CI.
Below is the command I used to provision a minimally sized three-node GKE cluster, replete with the latest available version of Kubernetes. Although a one-node cluster is sufficient for early-stage development, testing should be done on a multi-node cluster to ensure the service will operate properly with multiple instances running behind a load-balancer (gist).
GCP_PROJECT="wp-search-bot"
GKE_CLUSTER="wp-search-cluster"
GCP_ZONE="us-east1-b"
NODE_COUNT="1"
INSTANCE_TYPE="n1-standard-1"
GKE_VERSION="1.10.7-gke.1"

gcloud beta container \
  --project ${GCP_PROJECT} clusters create ${GKE_CLUSTER} \
  --zone ${GCP_ZONE} \
  --username "admin" \
  --cluster-version ${GKE_VERSION} \
  --machine-type ${INSTANCE_TYPE} --image-type "COS" \
  --disk-type "pd-standard" --disk-size "100" \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --num-nodes ${NODE_COUNT} \
  --enable-cloud-logging --enable-cloud-monitoring \
  --network "projects/wp-search-bot/global/networks/default" \
  --subnetwork "projects/wp-search-bot/regions/us-east1/subnetworks/default" \
  --additional-zones "us-east1-b","us-east1-c","us-east1-d" \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing \
  --no-enable-autoupgrade --enable-autorepair
Below, we see the three n1-standard-1 instance type worker nodes, one in each of three different geographic locations, referred to as zones. All three zones are within the us-east1 region. Multiple instances spread across multiple zones provide single-region high availability for our Spring Boot service. With GKE, the Master Node is fully managed by Google.
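Once provisioning completes, the cluster and its worker nodes can be verified from the command line. The commands below are a quick sanity check, assuming the cluster name and zone used above:

```shell
# Fetch credentials so kubectl can talk to the new cluster
gcloud container clusters get-credentials wp-search-cluster --zone us-east1-b

# List the worker nodes; with one node per zone across three zones,
# three nodes should report a Ready status
kubectl get nodes -o wide
```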
Building Service Image
In order to deploy our Spring Boot service, we must first build a Docker Image and make that image available to our Kubernetes cluster. For lowest latency, I’ve chosen to build and publish the image to Google Container Registry, in addition to Docker Hub. The Spring Boot service’s Docker image is built on the latest Debian-based OpenJDK 10 Slim base image, available on Docker Hub. The Spring Boot JAR file is copied into the image (gist).
FROM openjdk:10.0.2-13-jdk-slim
LABEL maintainer="Gary A. Stafford <garystafford@rochester.rr.com>"
ENV REFRESHED_AT 2018-09-08
EXPOSE 8080
WORKDIR /tmp
COPY /build/libs/*.jar app.jar
CMD ["java", "-jar", "-Djava.security.egd=file:/dev/./urandom", "-Dspring.profiles.active=gcp", "app.jar"]
To automate the build and publish processes with tools such as Jenkins or Travis CI, we will use a simple shell script. The script builds the Spring Boot service using Gradle, then builds the Docker Image containing the Spring Boot JAR file, tags and publishes the Docker image to the image repository, and finally, redeploys the Spring Boot service container to GKE using kubectl (gist).
#!/usr/bin/env sh
# author: Gary A. Stafford
# site: https://programmaticponderings.com
# license: MIT License
IMAGE_REPOSITORY=<your_image_repo>
IMAGE_NAME=<your_image_name>
GCP_PROJECT=<your_project>
TAG=<your_image_tag>
# Build Spring Boot app
./gradlew clean build
# Build Docker image
docker build -f Docker/Dockerfile --no-cache -t ${IMAGE_REPOSITORY}/${IMAGE_NAME}:${TAG} .
# Push image to Docker Hub
docker push ${IMAGE_REPOSITORY}/${IMAGE_NAME}:${TAG}
# Push image to GCP Container Registry (GCR)
docker tag ${IMAGE_REPOSITORY}/${IMAGE_NAME}:${TAG} gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:${TAG}
docker push gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:${TAG}
# Re-deploy Workload (containerized app) to GKE
kubectl replace --force -f gke/${IMAGE_NAME}.yaml
Below, we see the latest version of our Spring Boot Docker image published to Google Container Registry (GCR).
Deploying the Service
To deploy the Spring Boot service's container to GKE, we will use a Kubernetes Deployment Controller. The Deployment Controller manages the Pods and ReplicaSets. As a deployment alternative, you could choose to use CoreOS' Operator Framework to create an Operator or use Helm to create a Helm Chart. Along with the Deployment Controller, there is a ConfigMap and a Horizontal Pod Autoscaler. The ConfigMap contains environment variables that will be available to the Spring Boot service instances running in the Kubernetes Pods. Variables include the host and port of the Elasticsearch cluster on GCP and the name of the Elasticsearch index created by WordPress. These values will override any configuration values set in the service's application.yml properties file.
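For reference, the corresponding defaults in application.yml might look similar to the following sketch. The values shown are placeholders, not the project's actual configuration; Spring Boot's relaxed binding maps the SPRING_DATA_ELASTICSEARCH_* environment variables onto these properties at runtime.

```yaml
# Sketch of the Elasticsearch-related properties in application.yml
# (placeholder values; overridden by the ConfigMap-backed environment variables)
spring:
  data:
    elasticsearch:
      cluster-name: elasticsearch
      cluster-nodes: localhost:9300
```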
The Deployment Controller creates a ReplicaSet with three Pods, running the Spring Boot service, one on each worker node (gist).
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "wp-es-demo-config"
  namespace: "dev"
  labels:
    app: "wp-es-demo"
data:
  cluster_nodes: "<your_elasticsearch_instance_tcp_host_and_port>"
  cluster_name: "elasticsearch"
---
apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: "wp-es-demo"
  namespace: "dev"
  labels:
    app: "wp-es-demo"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "wp-es-demo"
  template:
    metadata:
      labels:
        app: "wp-es-demo"
    spec:
      containers:
      - name: "wp-es-demo"
        image: "gcr.io/wp-search-bot/wp-es-demo"
        imagePullPolicy: Always
        env:
        - name: "SPRING_DATA_ELASTICSEARCH_CLUSTER-NODES"
          valueFrom:
            configMapKeyRef:
              key: "cluster_nodes"
              name: "wp-es-demo-config"
        - name: "SPRING_DATA_ELASTICSEARCH_CLUSTER-NAME"
          valueFrom:
            configMapKeyRef:
              key: "cluster_name"
              name: "wp-es-demo-config"
---
apiVersion: "autoscaling/v1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "wp-es-demo-hpa"
  namespace: "dev"
  labels:
    app: "wp-es-demo"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "wp-es-demo"
    apiVersion: "apps/v1beta1"
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
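Assuming the manifest above is saved locally, for example as gke/wp-es-demo.yaml (the file path is an assumption), it can be applied and the rollout observed with kubectl:

```shell
# Create the dev namespace, if it does not already exist
kubectl create namespace dev

# Apply the ConfigMap, Deployment, and Horizontal Pod Autoscaler
kubectl apply -f gke/wp-es-demo.yaml

# Watch the rollout until all three replicas are available
kubectl rollout status deployment/wp-es-demo -n dev
```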
To properly load-balance the three Spring Boot service Pods, we will also deploy a Kubernetes Service of the ServiceType LoadBalancer. According to the Kubernetes documentation, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (gist).
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "wp-es-demo-service"
  namespace: "dev"
  labels:
    app: "wp-es-demo"
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 8080
  selector:
    app: "wp-es-demo"
  type: "LoadBalancer"
  loadBalancerIP: ""
Below, we see three instances of the Spring Boot service deployed to the GKE cluster on GCP. Each Pod, containing an instance of the Spring Boot service, is in a load-balanced pool, behind our service load balancer, and exposed on port 80.
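The same details can be confirmed from the command line; for example:

```shell
# List the three load-balanced service Pods in the dev namespace
kubectl get pods -n dev -l app=wp-es-demo -o wide

# Display the Service, including its external load balancer IP and port 80 mapping
kubectl get service wp-es-demo-service -n dev
```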
Testing the API
We can test our API, and ensure it is talking to Elasticsearch and returning expected results, using the Swagger UI, shown previously, or tools like Postman, shown below.
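A quick smoke test can also be run with curl. The example below assumes Spring Boot Actuator is on the service's classpath, which is an assumption on my part; substitute your service's actual API path if it is not:

```shell
# Capture the external IP assigned to the LoadBalancer Service
EXTERNAL_IP=$(kubectl get service wp-es-demo-service -n dev \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# If Actuator is present, the health endpoint confirms the service is up and,
# depending on configuration, that its Elasticsearch health indicator reports UP
curl -s "http://${EXTERNAL_IP}/actuator/health"
```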
Communication Between GKE and Elasticsearch
Similar to port 9200, which needed to be opened for indexing content over HTTP, we also need to open firewall port 9300 between the Spring Boot service on GKE and Elasticsearch. According to Elastic, Elasticsearch Java clients talk to the Elasticsearch cluster over port 9300, using the native Elasticsearch transport protocol (TCP).
Again, locking this port down to the GKE cluster as the source is critical for security (gist).
SOURCE_IP=<gke_cluster_public_ip_address>
PORT=9300
gcloud compute \
  --project=wp-search-bot \
  firewall-rules create elk-1-tcp-${PORT} \
  --description=elk-1-tcp-${PORT} \
  --direction=INGRESS \
  --priority=1000 \
  --network=default \
  --action=ALLOW \
  --rules=tcp:${PORT} \
  --source-ranges=${SOURCE_IP} \
  --target-tags=elk-1-tcp-${PORT}
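To confirm the rule took effect and that the cluster can actually reach Elasticsearch's transport port, one option is to inspect the rule and test the TCP connection from inside one of the running Pods. The Pod name and Elasticsearch host below are placeholders:

```shell
# Describe the new firewall rule to verify the source range and port
gcloud compute firewall-rules describe elk-1-tcp-9300 --project=wp-search-bot

# From inside a service Pod, attempt a raw TCP connection to port 9300
# (Pod name and Elasticsearch host are placeholders)
kubectl exec -n dev <wp-es-demo-pod-name> -- \
  bash -c 'timeout 3 bash -c "</dev/tcp/<elasticsearch_host>/9300" && echo open'
```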
Part Two
In part one, we examined the creation of the Elastic Stack, the provisioning of the GKE cluster, and the development and deployment of the Spring Boot service to Kubernetes. In part two of this post, we will tie everything together by creating and integrating our Action for Google Assistant:
- Create the new Actions project using the Actions on Google console;
- Develop the Action's Intents using the Dialogflow console;
- Develop, deploy, and test the Cloud Function to GCP.
Related Posts
If you're interested in comparing the development of an Action for Google Assistant with that of Amazon's Alexa and Microsoft's LUIS-enabled chatbots, in addition to this post, I would recommend the previous three posts in this conversational interface series:
- Building Serverless Actions for Google Assistant with Google Cloud Functions, Cloud Datastore, and Cloud Storage;
- Building and Integrating LUIS-enabled Chatbots with Slack, using Azure Bot Service, Bot Builder SDK, and Cosmos DB;
- Building Asynchronous, Serverless Alexa Skills with AWS Lambda, DynamoDB, S3, and Node.js.
All three articles' demonstrations leverage their respective Cloud platform's machine learning-based natural language understanding (NLU) services. All three take advantage of their respective Cloud platform's NoSQL database and object storage services. Lastly, all three demonstrations are written in a common language, Node.js.
All opinions expressed in this post are my own and not necessarily the views of my current or past employers, their clients, or Google.