Posts Tagged Testing
Unlocking the Potential of Generative AI for Synthetic Data Generation
Posted by Gary A. Stafford in AI/ML, Cloud, Enterprise Software Development, Machine Learning, Python, Software Development, Technology Consulting on April 18, 2023
Explore the capabilities and applications of generative AI to create realistic synthetic data for software development, analytics, and machine learning

Introduction
Generative AI refers to a class of artificial intelligence algorithms capable of generating new data similar to a given dataset. These algorithms learn the underlying patterns and relationships in the data and use this knowledge to create new data consistent with the original dataset. Generative AI is a rapidly evolving field that has the potential to revolutionize the way we generate and use data.
Generative AI can generate synthetic data based on patterns and relationships learned from actual data. This ability to generate synthetic data has numerous applications, from creating realistic virtual environments for training and simulation to generating new data for machine learning models. In this article, we will explore the capabilities of generative AI and its potential to generate synthetic data, both directly and indirectly, for software development, data analytics, and machine learning.
Common Forms of Synthetic Data
According to AltexSoft, in their article Synthetic Data for Machine Learning: its Nature, Types, and Ways of Generation, common forms of synthetic data include:
- Tabular data: This type of synthetic data is often used to generate datasets that resemble real-world data in terms of structure and statistical properties.
- Time series data: This type of synthetic data generates datasets that resemble real-world time series data. It is commonly used when real-world time series data is unavailable or too expensive.
- Image and video data: This synthetic data is used to generate realistic images and videos for training machine learning models or simulations.
- Text data: This synthetic data generates realistic text for natural language processing tasks or for generating training data for machine learning models.
- Sound data: This synthetic data generates realistic sound for training machine learning models or simulations.
Synthetic Tabular Data Types
Synthetic tabular data refers to artificially generated datasets that resemble real-world tabular data in terms of structure and statistical properties. Tabular data is organized into rows and columns, like tables or spreadsheets. Some specific types of synthetic tabular data include:
- Financial data: Synthetic datasets that resemble real-world financial data such as bank transactions, stock prices, or credit card information.
- Customer data: Synthetic datasets that resemble real-world customer data, such as purchase history, demographic information, or customer behavior.
- Medical data: Synthetic datasets that resemble real-world medical data, such as patient records, medical test results, or treatment history.
- Sensor data: Synthetic datasets that resemble real-world sensor data such as temperature readings, humidity levels, or air quality measurements.
- Sales data: Synthetic datasets that resemble real-world sales data, such as sales transactions, product information, or customer behavior.
- Inventory data: Synthetic datasets that resemble real-world inventory data, such as stock levels, product information, or supplier information.
- Marketing data: Synthetic datasets that resemble real-world marketing data, such as campaign performance, customer behavior, or market trends.
- Human resources data: Synthetic datasets that resemble real-world human resources data, such as employee records, performance evaluations, or salary information.
Challenges with Creating Synthetic Data
According to sources including Towards Data Science, enov8, and J.P. Morgan, there are several challenges in creating synthetic data, including:
- Technical difficulty: Properly modeling complex real-world behaviors in synthetic data is challenging, given available technologies.
- Biased behavior: The flexible nature of synthetic data makes it prone to potentially biased results.
- Privacy concerns: Care must be taken to ensure synthetic data does not reveal sensitive information.
- Quality of the data model: If the quality of the data model is not high, wrong conclusions can be reached.
- Time and effort: Synthetic data generation requires time and effort.
Difficult Patterns and Behaviors to Model
Many patterns and behaviors can be challenging to model in synthetic data, for example:
- Rare events: If certain events rarely occur in the real world, generating synthetic data that accurately reflects their distribution can be difficult.
- Complex relationships: Synthetic data generators may struggle to capture complex relationships between variables, such as non-linear interactions, feedback loops, or conditional dependencies.
- Contextual variability: Contextual factors, such as time, location, or individual differences, can have a significant impact on the distribution of data. Modeling this variability accurately can be challenging.
- Outliers and anomalies: Synthetic data generators may be unable to generate outliers or anomalies that are realistic and representative of the real world.
- Dynamic data: If the data is dynamic and changes over time, it can be challenging to generate synthetic data that captures these changes accurately.
- Unobserved variables: Sometimes, variables may be necessary for understanding the data distribution but are not directly observable. These variables can be challenging to model in synthetic data.
- Data bias: If the real-world data is biased in some way, such as over-representation of certain groups or under-representation of others, it can be challenging to generate synthetic data that is unbiased and representative of the population.
- Time-dependent patterns: If the data exhibits time-dependent patterns, such as seasonality or trends, it can be challenging to generate synthetic data that accurately reflects them.
- Spatial patterns: If the data has a spatial component, such as location data or images, it can be challenging to generate synthetic data that captures the spatial patterns realistically.
- Data sparsity: If the data is sparse or incomplete, it can be difficult to generate synthetic data that accurately reflects the distribution of the entire dataset.
- Human behavior: If the data involves human behavior, such as in social science or behavioral economics, it can be challenging to model the complex and nuanced behaviors of individuals and groups.
- Sensitive or confidential information: In some cases, the data may contain sensitive or confidential information that cannot be shared, making it challenging to generate synthetic data that preserves privacy while accurately reflecting the underlying distribution.
Overall, many patterns can be challenging to model accurately in synthetic data. Doing so often requires careful consideration of the specific data characteristics and the limitations of synthetic data generation techniques.
Easily Modeled Patterns and Behaviors
There are many simple and well-understood patterns that can be easily modeled in synthetic data, for example:
- Randomness: If the data is purely random, it can be easily generated using a random number generator or other simple techniques.
- Gaussian distribution: If the data follows a Gaussian or normal distribution, it can be generated using a Gaussian random number generator.
- Uniform distribution: If the data follows a uniform distribution, it can be easily generated using a uniform random number generator.
- Linear relationships: If the data follows a linear relationship between variables, it can be modeled using simple linear regression techniques.
- Categorical variables: If the data consists of categorical variables, such as gender or occupation, it can be generated using a categorical distribution.
- Text data: If the data consists of text, it can be generated using natural language processing techniques, such as language models or text generation algorithms.
- Time series: If the data consists of time series data, such as stock prices or weather data, it can be generated using time series models, such as Autoregressive integrated moving average (ARIMA) or Long short-term memory (LSTM).
- Seasonality: If the data exhibits seasonal patterns, such as higher sales data during holiday periods, it can be generated using seasonal time series models.
- Proportions and percentages: If the data consists of proportions or percentages, such as product sales distribution across different regions, it can be generated using beta or Dirichlet distributions.
- Multivariate normal distribution: If the data follows a multivariate normal distribution, it can be generated using a multivariate Gaussian random number generator.
- Networks: If the data consists of network or graph data, such as social networks or transportation networks, it can be generated using network models, such as Erdos-Renyi or Barabasi-Albert models.
- Binary data: If the data consists of binary data, such as whether a customer churned, it can be generated using a Bernoulli distribution.
- Geospatial data: If the data involves geospatial data, such as the location of points of interest, it can be easily generated using geospatial models, such as point processes or spatial point patterns.
- Customer behaviors: If the data involves customer behaviors, such as browsing or purchase histories, it can be generated using customer journey models, such as Markov models.
In general, simple and well-understood patterns can be easily modeled using synthetic data techniques. In contrast, more complex and nuanced patterns may require more sophisticated modeling techniques and a deeper understanding of the underlying data characteristics.
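To make this concrete, a few lines of Python are enough to sketch several of these simple patterns directly. The column names and parameter values below are illustrative assumptions, not drawn from any real dataset.

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1_000

df = pd.DataFrame({
    # Gaussian (normal) distribution: e.g., order totals clustered around a mean
    "order_total": rng.normal(loc=8.50, scale=2.25, size=n).round(2),
    # Uniform distribution: e.g., a random discount percentage
    "discount_pct": rng.uniform(low=0, high=15, size=n).round(1),
    # Categorical variable with fixed proportions: e.g., payment type
    "payment_type": rng.choice(
        ["cash", "credit", "debit", "gift card"], size=n, p=[0.2, 0.5, 0.2, 0.1]
    ),
    # Binary (Bernoulli) outcome: e.g., whether a loyalty card was used
    "loyalty_card": rng.binomial(n=1, p=0.35, size=n),
})

print(df.head())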
Creating Synthetic Data with Generative AI
We can use many popular generative AI-powered tools to create synthetic data for testing applications, constructing analytics pipelines, and building machine learning models. Tools include OpenAI ChatGPT, Microsoft’s all-new Bing Chat, ChatSonic, Tabnine, GitHub Copilot, and Amazon CodeWhisperer. For more information on these tools, check out my recent blog post:
Accelerating Development with Generative AI-Powered Coding Tools
Explore six popular generative AI-powered tools, including ChatGPT, Copilot, CodeWhisperer, Tabnine, Bing, and… (garystafford.medium.com)
Let’s start with a simple example of generating synthetic sales data. Suppose we have created a new sales forecasting application for coffee shops that we need to test using synthetic data. We might start by prompting a generative AI tool like OpenAI’s ChatGPT for some data:
Create a CSV file with 25 random sales records for a coffee shop.
Each record should include the following fields:
- id (incrementing integer starting at 1)
- date (random date between 1/1/2022 and 12/31/2022)
- time (random time between 6:00am and 9:00pm in 1-minute increments)
- product_id (incrementing integer starting at 1)
- product
- calories
- price in USD
- type (drink or food)
- quantity (random integer between 1 and 3)
- amount (price * quantity)
- payment type (cash, credit, debit, or gift card)
The content and structure of a prompt can vary, which can strongly influence ChatGPT’s response. Based on the above prompt, the results were accurate but not very useful for testing our application in this format. ChatGPT cannot create a physical CSV file. Furthermore, ChatGPT’s response length is limited; only about twenty records were returned. According to ChatGPT, it can generate responses of up to 2,048 tokens, the maximum output length allowed by the GPT-3 model.

Instead of outputting the actual synthetic data, we could ask ChatGPT to write a program that can, in turn, generate the synthetic data. This option is certainly more scalable. Let’s prompt ChatGPT to write a Python program to generate synthetic sales data with the same characteristics as before:
Create a Python3 program to generate 100 sales of common items
sold in a coffee shop. The data should be written to a CSV file
and include a header row. Each record should include the following fields:
- id (incrementing integer starting at 1)
- date (random date between 1/1/2022 and 12/31/2022)
- time (random time between 6:00am and 9:00pm in 1-minute increments)
- product_id (incrementing integer starting at 1)
- product
- calories
- price in USD
- type (drink or food)
- quantity (random integer between 1 and 3)
- amount (price * quantity)
- payment type (cash, credit, debit, or gift card)
Using a single concise prompt, ChatGPT generated a complete Python program, including code comments, to generate synthetic sales data. Unfortunately, given ChatGPT’s response size limitation, the coffee shop menu was limited to just six items. Reprompting for more items would result in the truncation of the output and, thus, the program, making it unrunnable. Instead, we could use an additional prompt to generate a longer Python list of menu items and combine the two pieces of code in our IDE. Regardless, we will still need to copy and paste the code into our IDE to review, debug, test, and run.

ChatGPT’s Python program, copied and pasted into VS Code, ran without modifications, and wrote 100 synthetic sales records to a CSV file!

Using IDE-based Generative AI Tools
Although generating synthetic data, or snippets of code, directly in chat-based generative AI tools is helpful for limited use cases, writing code in an IDE gives us several advantages:
- Code does not need to be copied and pasted from external sources into an IDE
- Consecutive lines of code, method, and block code completion overcome the single response size limits of chat-based tools like OpenAI ChatGPT
- Code can be reused and adapted to evolving use cases over time
- Python interpreter and debugger or equivalent for other languages
- Automatic code formatting, linting, and code style enforcement
- Unit, integration, and functional testing
- Static code analysis (SCA)
- Vulnerability scanning
- IntelliSense for code completion
- Source code management (SCM) / version control
Let’s use the same techniques we used with ChatGPT, but from within an IDE to generate three types of synthetic data. We will choose Microsoft’s VS Code with GitHub Copilot and Python as our programming language.

Source Code
All the code examples shown in this post can be found on GitHub.
Example #1: Coffee Shop Sales Data
First, we will start by outlining the program’s objective using code comments at the top of our Python file. This detailed context helps us clearly express our goal and enables Copilot to generate an accurate response.
# Write a program that creates synthetic sales data for a coffee shop.
# The program should accept a command line argument that specifies the number of records to generate.
# The program should write the sales data to a file called 'coffee_shop_sales_data.csv'.
# The program should contain the following functions:
# - main() function that calls the other functions
# - function that returns one random product from a list of dictionaries
# - function that returns a dictionary containing one sales record
# - function that writes the sales records to a file
Following the import statements, also generated with the assistance of Copilot, we will write the first function to return a random product from a list of 25 products. Again, we will use code comments as a prompt to generate the code. Copilot was able to generate 100% of the function’s code from the comments.
# Write a function to create list of dictionaries.
# The list of dictionaries should contain 15 drink items and 10 food items sold in a coffee shop.
# Include the product id, product name, calories, price, and type (Food or Drink).
# Capitalize the first letter of each product name.
# Return a random item from the list of dictionaries.
Below is an example of Copilot’s ability to generate complete lines of code. Ultimately, it generated 100% of the function, including choosing the items sold in a coffee shop, with reasonable prices and caloric counts. Copilot is not limited to just understanding code.

Next, we will write a function to return a random sales record.
# Write a function to return a random sales record.
# The record should be a dictionary with the following fields:
# - id (an incrementing integer starting at 1)
# - date (a random date between 1/1/2022 and 12/31/2022)
# - time (a random time between 6:00am and 9:00pm in 1 minute increments)
# - product_id, product, calories, price, and type (from the get_product function)
# - quantity (a random integer between 1 and 3)
# - amount (price * quantity)
# - payment type (Cash, Credit, Debit, Gift Card, Apple Pay, Google Pay, or Venmo)
Again, Copilot generated 100% of the function’s code as a single block based on the code comments.
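The generated function appears only as a screenshot in the original post. Below is a minimal sketch of what a function satisfying that prompt might look like; it assumes the get_product() helper described above and uses illustrative dictionary keys, so it is not Copilot’s exact output.

import itertools
import random
from datetime import date, time, timedelta

_id_counter = itertools.count(start=1)  # incrementing id starting at 1

def get_sales_record():
    """Return a dictionary containing one synthetic sales record (illustrative sketch)."""
    product = get_product()  # assumed helper returning a random product dictionary

    sale_date = date(2022, 1, 1) + timedelta(days=random.randint(0, 364))
    minutes = random.randint(6 * 60, 21 * 60)  # 6:00am-9:00pm, in 1-minute increments
    quantity = random.randint(1, 3)

    return {
        "id": next(_id_counter),
        "date": sale_date.strftime("%m/%d/%Y"),
        "time": time(minutes // 60, minutes % 60).strftime("%H:%M"),
        "product_id": product["product_id"],
        "product": product["product"],
        "calories": product["calories"],
        "price": product["price"],
        "type": product["type"],
        "quantity": quantity,
        "amount": round(product["price"] * quantity, 2),
        "payment_type": random.choice(
            ["Cash", "Credit", "Debit", "Gift Card", "Apple Pay", "Google Pay", "Venmo"]
        ),
    }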

Lastly, we will create a function to write the sales data into a CSV file using Copilot’s help.
# Write a function to write the sales records to a CSV file called 'coffee_shop_sales.csv'.
# Use an input parameter to specify the number of records to write.
# The CSV file must have a header row and be comma delimited.
# All string values must be enclosed in double quotes.
Again below, we see an example of Copilot’s ability to generate an entire Python function. I needed to correct a few problems with the generated code. First, string values were not quoted, which I fixed by adding quotechar='"' and quoting=csv.QUOTE_NONNUMERIC to the function. Also, the function was missing a key line of code, sale = get_sales_record(), which would have caused the code to fail. Remember, just because the code was generated does not mean it is correct.
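For reference, a corrected version of that function might look like the sketch below. The field names mirror the record dictionary and are assumptions; the quoting arguments and the restored get_sales_record() call are the two fixes described above.

import csv

def write_sales_records(record_count):
    """Write synthetic sales records to 'coffee_shop_sales.csv' (sketch of the corrected function)."""
    fieldnames = [
        "id", "date", "time", "product_id", "product", "calories",
        "price", "type", "quantity", "amount", "payment_type",
    ]

    with open("coffee_shop_sales.csv", "w", newline="") as csv_file:
        writer = csv.DictWriter(
            csv_file,
            fieldnames=fieldnames,
            quotechar='"',
            quoting=csv.QUOTE_NONNUMERIC,  # fix #1: enclose string values in double quotes
        )
        writer.writeheader()

        for _ in range(record_count):
            sale = get_sales_record()  # fix #2: the line the generated code omitted
            writer.writerow(sale)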

Here is the complete program that creates synthetic sales data for a coffee shop with Copilot assistance:
Copilot generated an astounding 80–85% of the program’s final code. The initial program took 10–15 minutes to write using code comments. I then added a few new features, including the ability to pass in the record count on the command line and the hash-based transaction id, which took another 5 minutes. Finally, I used GitHub Code Brushes to optimize the code and generate the Python docstrings, and Black Formatter and Flake8 extensions to format and lint, all of which took less than 5 minutes. With testing and debugging, the total time was about 25–30 minutes.
The most significant difference with Copilot was that I never had to leave the IDE to look up code references or find existing sales datasets or even a coffee shop menu to duplicate. The code, as well as the list of products, price, calories, and product type, were all generated by Copilot.
To make this example more realistic, you could use Copilot’s assistance to write algorithms capable of reflecting daily, weekly, and seasonal variations in product choice and sales volumes. This might include simulating increased sales during the busy morning rush hour or a preference for iced drinks in the summer months versus hot drinks during the winter months.
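As a rough illustration of that idea, daily and seasonal effects could be approximated with simple weighting functions like the sketch below. The multipliers are invented for demonstration purposes and would need to be tuned against real observations.

import random
from datetime import date, time

def seasonal_multiplier(sale_date: date) -> float:
    """Approximate seasonal demand: busier in summer and during December holidays (illustrative values)."""
    if sale_date.month == 12:
        return 1.4  # holiday bump
    if sale_date.month in (6, 7, 8):
        return 1.2  # summer bump
    return 1.0

def hourly_multiplier(sale_time: time) -> float:
    """Approximate the morning rush hour (illustrative values)."""
    return 1.8 if 7 <= sale_time.hour <= 9 else 1.0

def keep_sale(sale_date: date, sale_time: time, base_rate: float = 0.5) -> bool:
    """Accept or reject a candidate sale so that volumes follow the weights above."""
    rate = base_rate * seasonal_multiplier(sale_date) * hourly_multiplier(sale_time)
    return random.random() < min(rate, 1.0)

The same approach could weight the product list toward iced drinks in summer and hot drinks in winter.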
Here is an example of the synthetic sales data output by the example application:
Example #2: Residential Address Data
We could use these same techniques to generate a list of residential addresses. To start, we can prompt Copilot for the values in a list of common street names and street types in the United States:
# Write a function that creates a list of common street names
# in the United States, in alphabetical order.
# List should be in alphabetical order. Each name should be unique.
# Return a random street name.
def get_street_name():
    street_names = [
        "Ash", "Bend", "Bluff", "Branch", "Bridge", "Broadway", "Brook", "Burg",
        "Bury", "Canyon", "Cape", "Cedar", "Cove", "Creek", "Crest", "Crossing",
        "Dale", "Dam", "Divide", "Downs", "Elm", "Estates", "Falls", "Fifth",
        "First", "Fork", "Fourth", "Glen", "Green", "Grove", "Harbor", "Heights",
        "Hickory", "Hill", "Hollow", "Island", "Isle", "Knoll", "Lake", "Landing",
        ...
    ]
    return random.choice(street_names)
# Write a function that creates a list of common street types
# in the United States, in alphabetical order.
# List should be in alphabetical order. Each name should be unique.
# Return a random street type.
def get_street_type():
    street_types = [
        "Alley", "Avenue", "Bend", "Bluff", "Boulevard", "Branch", "Bridge", "Brook",
        "Burg", "Circle", "Commons", "Court", "Drive", "Highway", "Lane", "Parkway",
        "Place", "Road", "Square", "Street", "Terrace", "Trail", "Way"
    ]
    return random.choice(street_types)
Next, we can create a function that returns a property type based on a categorical distribution of common residential property types with the prompt:
# Write a function to return a random property type.
# Accept a random value between 0 and 1 as an input parameter.
# The function must return one of the following values based on the %:
# 63% Single-family, 26% Multi-family, 4% Condo,
# 3% Townhouse, 2% Mobile home, 1% Farm, 1% Other.
Again, Copilot generated 100% of the function’s code as a single block based on the code comments.
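The generated function appears only as a screenshot in the original post; a function satisfying that prompt would look something like this sketch, which walks cumulative thresholds over the stated percentages.

def get_property_type(rnd_value):
    """Return a property type based on the categorical distribution in the prompt (illustrative sketch)."""
    if rnd_value < 0.63:
        return "Single-family"
    elif rnd_value < 0.89:
        return "Multi-family"
    elif rnd_value < 0.93:
        return "Condo"
    elif rnd_value < 0.96:
        return "Townhouse"
    elif rnd_value < 0.98:
        return "Mobile home"
    elif rnd_value < 0.99:
        return "Farm"
    else:
        return "Other"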

Additionally, we could have Copilot help us generate a list of the 50 largest cities in the United States with state, zip code, and population, with the prompt:
# Write a function to returns the 50 largest cities in the United States.
# List should be sorted in descending order by population.
# Include the city, state abbreviation, zip code, and population.
# Return a list of dictionaries.
Once again, Copilot generated 100% of the function’s code using a combination of single lines and code blocks based on the code comments.

When randomly choosing a city, we can use a categorical distribution of populations of all the cities to control the distribution of cities in the final synthetic dataset. For example, there will be more addresses in larger cities like New York City or Los Angeles than in smaller cities like Buffalo or Virginia Beach, with the prompt:
# Write a function that calculates the total population of the list of cities.
# Add a 'pcnt_of_total_population' and 'pcnt_running_total' columns to list.
# Returns a sorted list of cities by population.
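A function satisfying this prompt might resemble the sketch below, which adds the two percentage columns and then uses the running total to pick cities in proportion to their populations. The helper names are illustrative, not Copilot’s exact output.

import random

def add_population_percentages(cities):
    """Add 'pcnt_of_total_population' and 'pcnt_running_total' to each city dictionary (illustrative sketch)."""
    total_population = sum(city["population"] for city in cities)
    cities = sorted(cities, key=lambda c: c["population"], reverse=True)

    running_total = 0.0
    for city in cities:
        city["pcnt_of_total_population"] = city["population"] / total_population
        running_total += city["pcnt_of_total_population"]
        city["pcnt_running_total"] = running_total
    return cities

def get_city(cities):
    """Pick a city with probability proportional to its population."""
    rnd_value = random.random()
    for city in cities:
        if rnd_value <= city["pcnt_running_total"]:
            return city
    return cities[-1]  # guard against floating-point rounding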
Here is the complete program that creates synthetic US-based address data with Copilot assistance:
To make this example more realistic, you could use Copilot’s assistance to write algorithms capable of accurately reflecting assessed property values based on the type of residence and the zip code.
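As a rough sketch of that idea, assessed values could be derived from a base value per property type and a zip-code multiplier. Every number and zip code below is an invented placeholder, not real market data.

import random

# Invented base values (USD) per property type -- placeholders only
BASE_VALUES = {
    "Single-family": 350_000, "Multi-family": 500_000, "Condo": 275_000,
    "Townhouse": 300_000, "Mobile home": 120_000, "Farm": 450_000, "Other": 200_000,
}

# Invented zip-code multipliers -- placeholders only
ZIP_MULTIPLIERS = {"10001": 2.5, "90210": 3.0, "68114": 0.9}

def get_assessed_value(property_type, zip_code):
    """Return a synthetic assessed value based on property type and zip code (illustrative sketch)."""
    base = BASE_VALUES.get(property_type, 200_000)
    multiplier = ZIP_MULTIPLIERS.get(zip_code, 1.0)
    noise = random.uniform(0.85, 1.15)  # add variation around the estimate
    return round(base * multiplier * noise, 2)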
Here is an example of the synthetic US-based residential address data output by the example application:
Example #3: Demographic Data
We could use the same techniques again to generate synthetic demographic data. With the assistance of Copilot, we can write functions that randomly return typical feminine or masculine first names (forenames) and common last names (surnames) found in the United States, for this use case.
# Write a function that generates a list of common feminine first names in the United States.
# List should be in alphabetical order.
# Each name should be unique.
# Return random first name.
def get_first_name_feminine():
    first_name_feminine = [
        "Alice", "Amanda", "Amy", "Angela", "Ann", "Anna", "Barbara", "Betty",
        "Brenda", "Carol", "Carolyn", "Catherine", "Christine", "Cynthia", "Deborah", "Debra",
        "Diane", "Donna", "Doris", "Dorothy", "Elizabeth", "Frances", "Gloria", "Heather",
        "Helen", "Janet", "Jennifer", "Jessica", "Joyce", "Julie", "Karen", "Kathleen", "Kimberly",
    ]
    return random.choice(first_name_feminine)
With the assistance of Copilot, we can also write functions that return demographic information, such as age, gender, race, marital status, religion, and political affiliation. Similar to the previous sales data example, we can influence the final synthetic dataset based on categorical distributions of different demographic categories, for instance, with the prompt:
# Write a function that returns a person's marital status.
# Accept a random value between 0 and 1 as an input parameter.
# The function must return one of the following values based on the %:
# 50% Married, 33% Single, 17% Unknown.
def get_marital_status(rnd_value):
    if rnd_value < 0.50:
        return "Married"
    elif rnd_value < 0.83:
        return "Single"
    else:
        return "Unknown"
By altering the categorical distributions, we can quickly alter the resulting synthetic dataset to reflect differing demographic characteristics: an older or younger population, the predominance of a single race, religious affiliation, or marital status, or the ratio of males to females.
Next, we can use a Gaussian distribution (aka normal distribution) to return the year of birth in a bell-shaped curve, given a mean year and a standard deviation, using Python’s random.normalvariate function.
# Write a function that generates a normal distribution of date of births.
# with a mean year of 1975 and a standard deviation of 10.
# Return random date of birth as a string in the format YYYY-MM-DD
def get_dob():
    day_of_year = random.randint(1, 365)
    year_of_birth = int(random.normalvariate(1975, 10))
    dob = date(int(year_of_birth), 1, 1) + timedelta(day_of_year - 1)
    dob = dob.strftime("%Y-%m-%d")
    return dob
Here is the complete program that creates synthetic demographic data with Copilot’s assistance:
To make this example more realistic, you could use Copilot’s assistance to write algorithms capable of more accurately representing the nuanced associations and correlations between age, gender, race, marital status, religion, and political affiliation.
Here is an example of the synthetic demographic data output by the example application:
Generative AI Tools for Unit Testing
In addition to writing code and documentation, a common use of generative AI code assistants like Copilot is generating unit tests. For example, we can create unit tests for each function in our coffee shop sales data code generator, using the same method of prompting with code comments.
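Below is a minimal example of the kind of pytest tests Copilot can generate from a comment prompt. It assumes the functions described above are importable from a module named coffee_shop_sales, which is a hypothetical module name.

import csv

from coffee_shop_sales import get_product, get_sales_record, write_sales_records  # hypothetical module

def test_get_product_returns_expected_fields():
    product = get_product()
    assert {"product_id", "product", "calories", "price", "type"} <= set(product.keys())
    assert product["type"] in ("Food", "Drink")

def test_get_sales_record_amount_is_price_times_quantity():
    record = get_sales_record()
    assert record["amount"] == round(record["price"] * record["quantity"], 2)
    assert 1 <= record["quantity"] <= 3

def test_write_sales_records_creates_csv_with_header(tmp_path, monkeypatch):
    monkeypatch.chdir(tmp_path)  # write the CSV into a temporary directory
    write_sales_records(5)
    with open("coffee_shop_sales.csv") as csv_file:
        rows = list(csv.reader(csv_file))
    assert len(rows) == 6  # header row plus five records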

Conclusion
In this post, we learned how Generative AI could assist us in creating synthetic data for software development, analytics, and machine learning. The examples herein generated data using simple techniques. Using advanced modeling techniques, we could generate increasingly complex, realistic synthetic data.
To learn about other ways Generative AI can be used to assist in writing code, please read my previous article, Ten Ways to Leverage Generative AI for Development on AWS.
🔔 To keep up with future content, follow Gary Stafford on LinkedIn.
This blog represents my viewpoints and not those of my employer, Amazon Web Services (AWS). All product names, logos, and brands are the property of their respective owners.
Managing AWS Infrastructure as Code using Ansible, CloudFormation, and CodeBuild
Posted by Gary A. Stafford in AWS, Build Automation, Cloud, Continuous Delivery, DevOps, Python, Technology Consulting on July 30, 2019
Introduction
When it comes to provisioning and configuring resources on the AWS cloud platform, there is a wide variety of services, tools, and workflows you could choose from. You could decide to exclusively use the cloud-based services provided by AWS, such as CodeBuild, CodePipeline, CodeStar, and OpsWorks. Alternatively, you could choose open-source software (OSS) for provisioning and configuring AWS resources, such as community editions of Jenkins, HashiCorp Terraform, Pulumi, Chef, and Puppet. You might also choose to use licensed products, such as Octopus Deploy, TeamCity, CloudBees Core, Travis CI Enterprise, and XebiaLabs XL Release. You might even decide to write your own custom tools or scripts in Python, Go, JavaScript, Bash, or other common languages.
The reality in most enterprises I have worked with is that teams integrate a combination of AWS services, open-source software, custom scripts, and occasionally licensed products to construct complete, end-to-end, infrastructure-as-code-based workflows for provisioning and configuring AWS resources. Choices are most often based on team experience, vendor relationships, and an enterprise’s specific business use cases.
In the following post, we will explore one such set of easily-integrated tools for provisioning and configuring AWS resources. The tool-stack is comprised of Red Hat Ansible, AWS CloudFormation, and AWS CodeBuild, along with several complementary AWS technologies. Using these tools, we will provision a relatively simple AWS environment, then deploy, configure, and test a highly-available set of Apache HTTP Servers. The demonstration is similar to the one featured in a previous post, Getting Started with Red Hat Ansible for Google Cloud Platform.
Why Ansible?
With its simplicity, ease of use, and broad compatibility with most major cloud, database, network, storage, and identity providers, among other categories, Ansible has been a popular choice of engineering teams for configuration management since 2012. Given the wide variety of polyglot technologies used within modern enterprises and the growing predominance of multi-cloud and hybrid cloud architectures, Ansible provides a common platform for enabling mature DevOps and infrastructure as code practices. Ansible is easily integrated with higher-level orchestration systems, such as AWS CodeBuild, Jenkins, or Red Hat AWX and Tower.
Technologies
The primary technologies used in this post include the following.
Red Hat Ansible
Ansible, purchased by Red Hat in October 2015, seamlessly provides workflow orchestration with configuration management, provisioning, and application deployment in a single platform. Unlike similar tools, Ansible’s workflow automation is agentless, relying on Secure Shell (SSH) and Windows Remote Management (WinRM). If you are interested in learning more about the advantages of Ansible, they’ve published a whitepaper on The Benefits of Agentless Architecture.
According to G2 Crowd, Ansible is a clear leader in the Configuration Management Software category, ranked right behind GitLab. Competitors in the category include GitLab, AWS Config, Puppet, Chef, Codenvy, HashiCorp Terraform, Octopus Deploy, and JetBrains TeamCity.
AWS CloudFormation
According to AWS, CloudFormation provides a common language to describe and provision all the infrastructure resources within AWS-based cloud environments. CloudFormation allows you to use a JSON- or YAML-based template to model and provision, in an automated and secure manner, all the resources needed for your applications across all AWS regions and accounts.
Codifying your infrastructure, often referred to as ‘Infrastructure as Code,’ allows you to treat your infrastructure as just code. You can author it with any IDE, check it into a version control system, and review the files with team members before deploying it.
AWS CodeBuild
According to AWS, CodeBuild is a fully managed continuous integration service that compiles your source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.
CodeBuild integrates seamlessly with other AWS Developer tools, including CodeStar, CodeCommit, CodeDeploy, and CodePipeline.
According to G2 Crowd, the main competitors to AWS CodeBuild, in the Build Automation Software category, include Jenkins, CircleCI, CloudBees Core and CodeShip, Travis CI, JetBrains TeamCity, and Atlassian Bamboo.
Other Technologies
In addition to the major technologies noted above, we will also be leveraging the following services and tools to a lesser extent, in the demonstration:
- AWS CodeCommit
- AWS CodePipeline
- AWS Systems Manager Parameter Store
- Amazon Simple Storage Service (S3)
- AWS Identity and Access Management (IAM)
- AWS Command Line Interface (CLI)
- CloudFormation Linter
- Apache HTTP Server
Demonstration
Source Code
All source code for this post is contained in two GitHub repositories. The CloudFormation templates and associated files are in the ansible-aws-cfn GitHub repository. The Ansible Roles and related files are in the ansible-aws-roles GitHub repository. Both repositories may be cloned using the following commands.
git clone --branch master --single-branch --depth 1 --no-tags \
    https://github.com/garystafford/ansible-aws-cfn.git

git clone --branch master --single-branch --depth 1 --no-tags \
    https://github.com/garystafford/ansible-aws-roles.git
Development Process
The general process we will follow for provisioning and configuring resources in this demonstration is as follows:
- Create an S3 bucket to store the validated CloudFormation templates
- Create an Amazon EC2 Key Pair for Ansible
- Create two AWS CodeCommit Repositories to store the project’s source code
- Put parameters in Parameter Store
- Write and test the CloudFormation templates
- Configure Ansible and AWS Dynamic Inventory script
- Write and test the Ansible Roles and Playbooks
- Write the CodeBuild build specification files
- Create an IAM Role for CodeBuild and CodePipeline
- Create and test CodeBuild Projects and CodePipeline Pipelines
- Provision, deploy, and configure the complete web platform to AWS
- Test the final web platform
Prerequisites
For this demonstration, I will assume you already have an AWS account, the AWS CLI, Python, and Ansible installed locally, an S3 bucket to store the final CloudFormation templates and an Amazon EC2 Key Pair for Ansible to use for SSH.
Continuous Integration and Delivery Overview
In this demonstration, we will be building multiple CI/CD pipelines for provisioning and configuring our resources to AWS, using several AWS services. These services include CodeCommit, CodeBuild, CodePipeline, Systems Manager Parameter Store, and Amazon Simple Storage Service (S3). The diagram below shows the complete CI/CD workflow we will build using these AWS services, along with Ansible.
AWS CodeCommit
According to Amazon, AWS CodeCommit is a fully-managed source control service that makes it easy to host secure and highly scalable private Git repositories. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure.
Start by creating two AWS CodeCommit repositories to hold the two GitHub projects you cloned earlier. Commit both projects to your own AWS CodeCommit repositories.
Configuration Management
We have several options for storing the configuration values necessary to provision and configure the resources on AWS. We could set configuration values as environment variables directly in CodeBuild. We could set configuration values from within our Ansible Roles. We could use AWS Systems Manager Parameter Store to store configuration values. For this demonstration, we will use a combination of all three options.
AWS Systems Manager Parameter Store
According to Amazon, AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values, as either plain text or encrypted.
The demonstration uses two CloudFormation templates. The two templates have several parameters. A majority of those parameter values will be stored in Parameter Store, retrieved by CodeBuild, and injected into the CloudFormation template during provisioning.
The Ansible GitHub project includes a shell script, parameter_store_values.sh, to put the necessary parameters into Parameter Store. The script requires the AWS Command Line Interface (CLI) to be installed locally. You will need to change the KEY_PATH value in the script (snippet shown below) to match the location of your private key, part of the Amazon EC2 Key Pair you created earlier for use by Ansible.
KEY_PATH="/path/to/private/key"

# put encrypted parameter to Parameter Store
aws ssm put-parameter \
    --name $PARAMETER_PATH/ansible_private_key \
    --type SecureString \
    --value "file://${KEY_PATH}" \
    --description "Ansible private key for EC2 instances" \
    --overwrite
SecureString
Whereas all other parameters are stored in Parameter Store as String datatypes, the private key is stored as a SecureString datatype. Parameter Store uses an AWS Key Management Service (KMS) customer master key (CMK) to encrypt the SecureString parameter value. The IAM Role used by CodeBuild (discussed later) will have the correct permissions to use the KMS key to retrieve and decrypt the private key SecureString parameter value.
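Outside of CodeBuild, the same SecureString value can be retrieved and decrypted programmatically, for example with Python and boto3. The parameter path below mirrors the demonstration’s naming convention and is an assumption; the caller needs ssm:GetParameter plus kms:Decrypt on the CMK.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Retrieve and decrypt the SecureString parameter
response = ssm.get_parameter(
    Name="/ansible_demo/ansible_private_key",  # assumed parameter path
    WithDecryption=True,
)

private_key = response["Parameter"]["Value"]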
CloudFormation
The demonstration uses two CloudFormation templates. The first template, network-stack.template, contains the AWS network stack resources. The template includes one VPC, one Internet Gateway, two NAT Gateways, four Subnets, two Elastic IP Addresses, and associated Route Tables and Security Groups. The second template, compute-stack.template, contains the webserver compute stack resources. The template includes an Auto Scaling Group, Launch Configuration, Application Load Balancer (ALB), ALB Listener, ALB Target Group, and an Instance Security Group. Both templates originated from the AWS CloudFormation template sample library, and were modified for this demonstration.
The two templates are located in the cfn_templates directory of the CloudFormation project, as shown below in the tree view.
.
├── LICENSE.md
├── README.md
├── buildspec_files
│   ├── build.sh
│   └── buildspec.yml
├── cfn_templates
│   ├── compute-stack.template
│   └── network-stack.template
├── codebuild_projects
│   ├── build.sh
│   └── cfn-validate-s3.json
├── codepipeline_pipelines
│   ├── build.sh
│   └── cfn-validate-s3.json
└── requirements.txt
The templates require no modifications for the demonstration. All parameters are in Parameter Store or set by the Ansible Roles, and consumed by the Ansible Playbooks via CodeBuild.
Ansible
We will use Red Hat Ansible to provision the network and compute resources by interacting directly with CloudFormation, deploy and configure Apache HTTP Server, and finally, perform final integration tests of the system. In my opinion, the closest equivalent to Ansible on the AWS platform is AWS OpsWorks. OpsWorks lets you use Chef and Puppet (direct competitors to Ansible) to automate how servers are configured, deployed, and managed across Amazon EC2 instances or on-premises compute environments.
Ansible Config
To use Ansible with AWS and CloudFormation, you will first want to customize your project’s ansible.cfg file to enable the aws_ec2 inventory plugin. Below is part of my configuration file as a reference.
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
fact_caching_timeout = 300
host_key_checking = False
roles_path = roles
inventory = inventories/hosts
remote_user = ec2-user
private_key_file = ~/.ssh/ansible

[inventory]
enable_plugins = host_list, script, yaml, ini, auto, aws_ec2
Ansible Roles
According to Ansible, Roles are ways of automatically loading certain variable files, tasks, and handlers based on a known file structure. Grouping content by roles also allows easy sharing of roles with other users. For the demonstration, I have written four roles, located in the roles directory, as shown below in the project tree view. The default common role is not used in this demonstration.
.
├── LICENSE.md
├── README.md
├── ansible.cfg
├── buildspec_files
│   ├── buildspec_compute.yml
│   ├── buildspec_integration_tests.yml
│   ├── buildspec_network.yml
│   └── buildspec_web_config.yml
├── codebuild_projects
│   ├── ansible-test.json
│   ├── ansible-web-config.json
│   ├── build.sh
│   ├── cfn-compute.json
│   ├── cfn-network.json
│   └── notes.md
├── filter_plugins
├── group_vars
├── host_vars
├── inventories
│   ├── aws_ec2.yml
│   ├── ec2.ini
│   ├── ec2.py
│   └── hosts
├── library
├── module_utils
├── notes.md
├── parameter_store_values.sh
├── playbooks
│   ├── 10_cfn_network.yml
│   ├── 20_cfn_compute.yml
│   ├── 30_web_config.yml
│   └── 40_integration_tests.yml
├── production
├── requirements.txt
├── roles
│   ├── cfn_compute
│   ├── cfn_network
│   ├── common
│   ├── httpd
│   └── integration_tests
├── site.yml
└── staging
The four roles include a role for provisioning the network, cfn_network; a role for configuring the compute resources, cfn_compute; a role for deploying and configuring the Apache servers, httpd; and finally, a role to perform final integration tests of the platform, integration_tests. The individual roles help separate the project’s major parts (network, compute, and middleware) into logical code files. Each role was initially built using Ansible Galaxy (ansible-galaxy init). They follow Galaxy’s standard file structure, as shown in the tree view below of the cfn_network role.
.
├── README.md
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   ├── create.yml
│   ├── delete.yml
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml
Testing Ansible Roles
In addition to checking each role during development and on each code commit with Ansible Lint, each role contains a set of unit tests, in the tests directory, to confirm the success or failure of the role’s tasks. Below we see a basic set of tests for the cfn_compute role. First, we gather Facts about the deployed EC2 instances. Facts are information Ansible can automatically derive from your remote systems. We check the facts for expected properties of the running EC2 instances, including timezone, operating system, major OS version, and the UserID. Note the use of the failed_when conditional. This Ansible playbook error-handling conditional is used to confirm the success or failure of tasks.
---
- name: Test cfn_compute Ansible role
  gather_facts: True
  hosts: tag_Group_webservers
  pre_tasks:
    - name: List all ansible facts
      debug:
        msg: "{{ ansible_facts }}"
  tasks:
    - name: Check if EC2 instance's timezone is set to 'UTC'
      debug:
        msg: Timezone is UTC
      failed_when: ansible_facts['date_time']['tz'] != 'UTC'
    - name: Check if EC2 instance's OS is 'Amazon'
      debug:
        msg: OS is Amazon
      failed_when: ansible_facts['distribution_file_variety'] != 'Amazon'
    - name: Check if EC2 instance's OS major version is '2018'
      debug:
        msg: OS major version is 2018
      failed_when: ansible_facts['distribution_major_version'] != '2018'
    - name: Check if EC2 instance's UserID is 'ec2-user'
      debug:
        msg: UserID is ec2-user
      failed_when: ansible_facts['user_id'] != 'ec2-user'
If we were to run the tests on their own, against the two correctly provisioned and configured EC2 web servers, we would see results similar to the following.
In the cfn_network role unit tests, below, note the use of the Ansible cloudformation_facts module. This module allows us to obtain facts about the successfully completed AWS CloudFormation stack. We can then use these facts to drive additional provisioning and configuration, or testing. In the task below, we get the network CloudFormation stack’s Outputs. These are the exact same values we would see in the stack’s Output tab, from the AWS CloudFormation management console.
---
- name: Test cfn_network Ansible role
  gather_facts: False
  hosts: localhost
  pre_tasks:
    - name: Get facts about the newly created cfn network stack
      cloudformation_facts:
        stack_name: "ansible-cfn-demo-network"
      register: cfn_network_stack_facts
    - name: List 'stack_outputs' from cached facts
      debug:
        msg: "{{ cloudformation['ansible-cfn-demo-network'].stack_outputs }}"
  tasks:
    - name: Check if the AWS Region of the VPC is {{ lookup('env','AWS_REGION') }}
      debug:
        msg: "AWS Region of the VPC is {{ lookup('env','AWS_REGION') }}"
      failed_when: cloudformation['ansible-cfn-demo-network'].stack_outputs['VpcRegion'] != lookup('env','AWS_REGION')
Similar to the CloudFormation templates, the Ansible roles require no modifications. Most of the project’s parameters are decoupled from the code and stored in Parameter Store or CodeBuild buildspec files (discussed next). The few parameters found in the roles, in the defaults/main.yml files, are neither account- nor environment-specific.
Ansible Playbooks
The roles will be called by our Ansible Playbooks. There is a create and a delete set of tasks for the cfn_network and cfn_compute roles. Either the create or delete tasks are accessible through the role, using the main.yml file and referencing the create or delete Ansible Tags.
---
- import_tasks: create.yml
  tags:
    - create
- import_tasks: delete.yml
  tags:
    - delete
Below, we see the create tasks for the cfn_network role, create.yml, referenced above by main.yml. The use of the cloudformation module in the first task allows us to create or delete AWS CloudFormation stacks and demonstrates the real power of Ansible: the ability to execute complex AWS resource provisioning by extending its core functionality via modules. By switching the Cloud module, we could just as easily provision resources on Google Cloud, Azure, AliCloud, OpenStack, or VMware, to name but a few.
---
- name: create a stack, pass in the template via an S3 URL
  cloudformation:
    stack_name: "{{ stack_name }}"
    state: present
    region: "{{ lookup('env','AWS_REGION') }}"
    disable_rollback: false
    template_url: "{{ lookup('env','TEMPLATE_URL') }}"
    template_parameters:
      VpcCIDR: "{{ lookup('env','VPC_CIDR') }}"
      PublicSubnet1CIDR: "{{ lookup('env','PUBLIC_SUBNET_1_CIDR') }}"
      PublicSubnet2CIDR: "{{ lookup('env','PUBLIC_SUBNET_2_CIDR') }}"
      PrivateSubnet1CIDR: "{{ lookup('env','PRIVATE_SUBNET_1_CIDR') }}"
      PrivateSubnet2CIDR: "{{ lookup('env','PRIVATE_SUBNET_2_CIDR') }}"
      TagEnv: "{{ lookup('env','TAG_ENVIRONMENT') }}"
    tags:
      Stack: "{{ stack_name }}"
The CloudFormation parameters in the above task are mainly derived from environment variables, whose values were retrieved from the Parameter Store by CodeBuild and set in the environment. We obtain these external values using Ansible’s Lookup Plugins. The stack_name variable’s value is derived from the role’s defaults/main.yml file. The task variables use the Python Jinja2 templating system style of encoding.
The associated Ansible Playbooks, which call the tasks, are located in the playbooks directory, as shown previously in the tree view. The playbooks define a few required parameters, such as where the list of hosts will be derived from, and call the appropriate roles. For our simple demonstration, only a single role is called per playbook. Typically, in a larger project, you would call multiple roles from a single playbook. Below, we see the Network playbook, playbooks/10_cfn_network.yml, which calls the cfn_network role.
---
- name: Provision VPC and Subnets
  hosts: localhost
  connection: local
  gather_facts: False
  roles:
    - role: cfn_network
Dynamic Inventory
Another principal feature of Ansible is demonstrated in the Web Server Configuration playbook, playbooks/30_web_config.yml, shown below. Note that the hosts to which we want to deploy and configure Apache HTTP Server are selected based on an AWS tag value, indicated by the reference to tag_Group_webservers. This indirectly refers to an AWS tag, named Group, with the value of webservers, which was applied to our EC2 hosts by CloudFormation. The ability to generate a Dynamic Inventory, using a dynamic external inventory system, is a key feature of Ansible.
---
- name: Configure Apache Web Servers
  hosts: tag_Group_webservers
  gather_facts: False
  become: yes
  become_method: sudo
  roles:
    - role: httpd
To generate a dynamic inventory of EC2 hosts, we are using the Ansible AWS EC2 Dynamic Inventory script, the inventories/ec2.py and inventories/ec2.ini files. The script dynamically queries AWS for all the EC2 hosts containing specific AWS tags, belonging to a particular Security Group, Region, Availability Zone, and so forth.
I have customized the AWS EC2 Dynamic Inventory script’s configuration in the inventories/aws_ec2.yml file. Amongst other configuration items, the file defines keyed_groups. This instructs the script to inventory EC2 hosts according to their unique AWS tags and tag values.
plugin: aws_ec2
remote_user: ec2-user
private_key_file: ~/.ssh/ansible
regions:
  - us-east-1
keyed_groups:
  - key: tags.Name
    prefix: tag_Name_
    separator: ''
  - key: tags.Group
    prefix: tag_Group_
    separator: ''
hostnames:
  - dns-name
  - ip-address
  - private-dns-name
  - private-ip-address
compose:
  ansible_host: ip_address
Once you have built the CloudFormation compute stack later in the demonstration, you can build the dynamic EC2 inventory of hosts using the following command.
ansible-inventory -i inventories/aws_ec2.yml --graph
You would then see an inventory of all your EC2 hosts, resembling the following.
@all:
  |--@aws_ec2:
  |  |--ec2-18-234-137-73.compute-1.amazonaws.com
  |  |--ec2-3-95-215-112.compute-1.amazonaws.com
  |--@tag_Group_webservers:
  |  |--ec2-18-234-137-73.compute-1.amazonaws.com
  |  |--ec2-3-95-215-112.compute-1.amazonaws.com
  |--@tag_Name_Apache_Web_Server:
  |  |--ec2-18-234-137-73.compute-1.amazonaws.com
  |  |--ec2-3-95-215-112.compute-1.amazonaws.com
  |--@ungrouped:
Note the two EC2 web server instances listed under tag_Group_webservers. They represent the target inventory onto which we will install Apache HTTP Server. We could also use the Name tag group, tag_Name_Apache_Web_Server.
AWS CodeBuild
Recalling our diagram, you will note the use of CodeBuild is a vital part of each of our five DevOps workflows. CodeBuild is used to 1) validate the CloudFormation templates, 2) provision the network resources, 3) provision the compute resources, 4) install and configure the web servers, and 5) run integration tests.
Splitting these processes into separate workflows, we can redeploy the web servers without impacting the compute resources, or redeploy the compute resources without affecting the network resources. Often, different teams within a large enterprise are responsible for each of these resource categories: architecture, security (IAM), network, compute, web servers, and code deployments. Separating concerns makes a shared ownership model easier to manage.
Build Specifications
CodeBuild projects rely on a build specification, or buildspec, file for their configuration, as shown below. CodeBuild’s buildspec file is analogous to Jenkins’ Jenkinsfile. Each of our five workflows uses CodeBuild. Each CodeBuild project references a separate buildspec file, included in the two GitHub projects, which by now you have pushed to your two CodeCommit repositories.
Below we see an example of the buildspec file for the CodeBuild project that deploys our AWS network resources, buildspec_files/buildspec_network.yml.
version: 0.2

env:
  variables:
    TEMPLATE_URL: "https://s3.amazonaws.com/garystafford_cloud_formation/cf_demo/network-stack.template"
    AWS_REGION: "us-east-1"
    TAG_ENVIRONMENT: "ansible-cfn-demo"
  parameter-store:
    VPC_CIDR: "/ansible_demo/vpc_cidr"
    PUBLIC_SUBNET_1_CIDR: "/ansible_demo/public_subnet_1_cidr"
    PUBLIC_SUBNET_2_CIDR: "/ansible_demo/public_subnet_2_cidr"
    PRIVATE_SUBNET_1_CIDR: "/ansible_demo/private_subnet_1_cidr"
    PRIVATE_SUBNET_2_CIDR: "/ansible_demo/private_subnet_2_cidr"

phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install -r requirements.txt -q
  build:
    commands:
      - ansible-playbook -i inventories/aws_ec2.yml playbooks/10_cfn_network.yml --tags create -v
  post_build:
    commands:
      - ansible-playbook -i inventories/aws_ec2.yml roles/cfn_network/tests/test.yml
There are several distinct sections to the buildspec file. First, in the variables section, we define our variables. They are a combination of three static variable values and five variable values retrieved from the Parameter Store. Any of these may be overwritten at build time, using the AWS CLI, SDK, or the CodeBuild management console. You will need to update some of the variables to match your particular environment, such as the TEMPLATE_URL to match your S3 bucket path.
Next are the phases of our build. Again, if you are familiar with Jenkins, think of these as Stages with multiple Steps. The first phase, install, builds a Docker container in which the build process is executed. Here we are using Python 3.7. We also run a pip command to install the required Python packages from our requirements.txt file. Next, we perform our build phase by executing an Ansible command.
ansible-playbook \
    -i inventories/aws_ec2.yml \
    playbooks/10_cfn_network.yml --tags create -v
The command calls our playbook, playbooks/10_cfn_network.yml. The command references the create tag. This causes the playbook to run the cfn_network role’s create tasks (roles/cfn_network/tasks/create.yml), as defined in the main.yml file (roles/cfn_network/tasks/main.yml). Lastly, in our post_build phase, we execute our role’s unit tests (roles/cfn_network/tests/test.yml), using a second Ansible command.
CodeBuild Projects
Next, we need to create the CodeBuild projects. You can do this using the AWS CLI or from the CodeBuild management console (shown below). I have included individual templates and a creation script for each project, in the codebuild_projects directory, which you could use to build the projects using the AWS CLI. You would have to modify the JSON templates, replacing all references to my specific, unique AWS resources with your own. For the demonstration, I suggest creating the five projects manually in the CodeBuild management console, using the supplied CodeBuild project templates as a guide.
CodeBuild IAM Role
To execute our CodeBuild projects, we need an IAM Role or Roles for CodeBuild, with permission to access such resources as CodeCommit, S3, and CloudWatch. For this demonstration, I chose to create a single IAM Role for all workflows. I then allowed CodeBuild to assign the required policies to the Role as needed, which is a feature of CodeBuild.
CodePipeline Pipeline
In addition to CodeBuild, we are using CodePipeline for our first of five workflows. CodePipeline validates the CloudFormation templates and pushes them to our S3 bucket. The pipeline calls the corresponding CodeBuild project to validate each template, then deploys the valid CloudFormation templates to S3.
In true CI/CD fashion, the pipeline is automatically executed every time source code from the CloudFormation project is committed to the CodeCommit repository.
CodePipeline calls CodeBuild, which performs a build based on its buildspec file. This particular CodeBuild buildspec file also demonstrates another ability of CodeBuild: executing an external script. When we have a complex build phase, we may choose to call an external script, such as a Bash or Python script, versus embedding the commands in the buildspec.
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - pip install -r requirements.txt -q
      - cfn-lint -v
  build:
    commands:
      - sh buildspec_files/build.sh

artifacts:
  files:
    - '**/*'
  base-directory: 'cfn_templates'
  discard-paths: yes
Below, we see the script that is called. Here we are using both the CloudFormation Linter, cfn-lint, and the cloudformation validate-template command to validate our templates for comparison. The two tools give slightly different, yet relevant, linting results.
#!/usr/bin/env bash

set -e

for filename in cfn_templates/*.*; do
    cfn-lint -t ${filename}

    aws cloudformation validate-template \
        --template-body file://${filename}
done
Similar to the CodeBuild project templates, I have included a CodePipeline template, in the codepipeline_pipelines directory, which you could modify and create using the AWS CLI. Alternatively, I suggest using the CodePipeline management console to create the pipeline for the demo, using the supplied CodePipeline template as a guide.
Below, the stage view of the final CodePipeline pipeline.
Build the Platform
With all the resources, code, and DevOps workflows in place, we should be ready to build our platform on AWS. The CodePipeline project comes first, to validate the CloudFormation templates and place them into your S3 bucket. Since you are probably not committing new code to the CloudFormation file CodeCommit repository, which would trigger the pipeline, you can start the pipeline using the AWS CLI (shown below) or via the management console.
# list names of pipelines
aws codepipeline list-pipelines

# execute the validation pipeline
aws codepipeline start-pipeline-execution --name cfn-validate-s3
The pipeline should complete within a few seconds.
Next, execute each of the four CodeBuild projects in the following order.
# list the names of the projects
aws codebuild list-projects

# execute the builds in order
aws codebuild start-build --project-name cfn-network
aws codebuild start-build --project-name cfn-compute

# ensure EC2 instance checks are complete before starting
# the ansible-web-config build!
aws codebuild start-build --project-name ansible-web-config
aws codebuild start-build --project-name ansible-test
As the code comment above states, be careful not to start the ansible-web-config build until you have confirmed that the EC2 instance Status Checks have completed and passed, as shown below. The previous cfn-compute
build will complete when CloudFormation finishes building the new compute stack. However, the fact that CloudFormation has finished does not mean the EC2 instances are fully up and running. Failure to wait will result in a failed build of the ansible-web-config
CodeBuild project, which installs and configures the Apache HTTP Servers.
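If you prefer not to poll the console, the AWS CLI can block until the instance status checks pass. The sketch below is illustrative; in practice, you would narrow the instance list to the instances created by the compute stack, for example by filtering on the stack's tags.

# collect the IDs of the running instances (add tag filters for your compute stack)
INSTANCE_IDS=$(aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text)

# blocks until both system and instance status checks report 'ok'
aws ec2 wait instance-status-ok --instance-ids ${INSTANCE_IDS}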
Below, we see the cfn_network
CodeBuild project first building a Python-based Docker container, within which to perform the build. Each build is executed in a fresh, separate Docker container, something that can trip you up if you are expecting a previous cache of Ansible Facts or previously defined environment variables to persist across multiple builds.
Below, we see the two completed CloudFormation Stacks, a result of our CodeBuild projects and Ansible.
The fifth and final CodeBuild build tests our platform by attempting to hit the Apache HTTP Server’s default home page, using the Application Load Balancer’s public DNS name.
Below, we see an example of what happens when a build fails. In this case, one of the final integration tests failed to return the expected results from the ALB endpoint.
Below, with the bug fixed, we rerun the build, which re-executes the tests, this time successfully.
We can manually confirm the platform is working by hitting the same public DNS name of the ALB, used by our tests, in our browser. The ALB should load-balance our request to the default home page of one of the two running web servers. Normally, at this point, you would deploy your application to Apache, using a continuous deployment tool, such as Jenkins, CodeDeploy, Travis CI, TeamCity, or Bamboo.
Cleaning Up
To clean up the running AWS resources from the demonstration, first delete the CloudFormation compute stack, then delete the network stack. To do so, execute the following commands, one at a time. The commands call the same playbooks we called to create the stacks, except this time, we use the delete
tag, as opposed to the create
tag.
# first delete cfn compute stack
ansible-playbook \
  -i inventories/aws_ec2.yml \
  playbooks/20_cfn_compute.yml -t delete -v

# then delete cfn network stack
ansible-playbook \
  -i inventories/aws_ec2.yml \
  playbooks/10_cfn_network.yml -t delete -v
You should observe the following output, indicating both CloudFormation stacks have been deleted.
Confirm the stacks were deleted from the CloudFormation management console or from the AWS CLI.
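One way to confirm from the AWS CLI, sketched below, is to list the remaining active stacks and the stacks that have reached the DELETE_COMPLETE status.

# active stacks: neither the network nor compute stack should be listed
aws cloudformation describe-stacks \
  --query "Stacks[].[StackName,StackStatus]" --output table

# recently deleted stacks
aws cloudformation list-stacks \
  --stack-status-filter DELETE_COMPLETE \
  --query "StackSummaries[].[StackName,StackStatus]" --output table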
All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.
Managing Applications Across Multiple Kubernetes Environments with Istio: Part 2
Posted by Gary A. Stafford in Build Automation, DevOps, Enterprise Software Development, GCP, Software Development on April 17, 2018
In this two-part post, we are exploring the creation of a GKE cluster, replete with the latest version of Istio, often referred to as IoK (Istio on Kubernetes). We will then deploy, perform integration testing, and promote an application across multiple environments within the cluster.
Part Two
In Part One of this post, we created a Kubernetes cluster on the Google Cloud Platform, installed Istio, provisioned a PostgreSQL database, and configured DNS for routing. Under the assumption that v1 of the Election microservice had already been released to Production, we deployed v1 to each of the three namespaces.
In Part Two of this post, we will learn how to utilize the advanced API testing capabilities of Postman and Newman to ensure v2 is ready for UAT and release to Production. We will deploy and perform integration testing of a new v2 of the Election microservice, locally on Kubernetes Minikube. Once confident v2 is functioning as intended, we will promote and test v2 across the dev
, test
, and uat
namespaces.
Source Code
As a reminder, all source code for this post can be found on GitHub. The project’s README file contains a list of the Election microservice’s endpoints. To get started quickly, use one of the two following options (gist).
# clone the official v3.0.0 release for this post
git clone --depth 1 --branch v3.0.0 \
  https://github.com/garystafford/spring-postgresql-demo.git \
  && cd spring-postgresql-demo \
  && git checkout -b v3.0.0

# clone the latest version of code (newer than article)
git clone --depth 1 --branch master \
  https://github.com/garystafford/spring-postgresql-demo.git \
  && cd spring-postgresql-demo
Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.
This project includes a kubernetes sub-directory, containing all the Kubernetes resource files and scripts necessary to recreate the example shown in the post.
Testing Locally with Minikube
Deploying to GKE, no matter how automated, takes time and resources, whether those resources are team members or just compute and system resources. Before deploying v2 of the Election service to the non-prod GKE cluster, we should ensure that it has been thoroughly tested locally. Local testing should include the following test criteria (a minimal command-line sketch of the first three follows the list):
- Source code builds successfully
- All unit-tests pass
- A new Docker Image can be created from the build artifact
- The Service can be deployed to Kubernetes (Minikube)
- The deployed instance can connect to the database and execute the Liquibase changesets
- The deployed instance passes a minimal set of integration tests
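As referenced above, a rough command-line sketch of the first three criteria is shown below. The build tool and image tag are assumptions for illustration only; substitute the project's actual build commands.

# build the application and run the unit tests (assumes a Gradle wrapper)
./gradlew clean build

# package the build artifact into a local Docker image (tag is illustrative)
docker build -t spring-postgresql-demo:v2-local .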
Minikube gives us the ability to quickly iterate and test an application, as well as the Kubernetes and Istio resources required for its operation, before promoting to GKE. These resources include Kubernetes Namespaces, Secrets, Deployments, Services, Route Rules, and Istio Ingresses. Since Minikube is just that, a miniature version of our GKE cluster, we should be able to have a nearly one-to-one parity between the Kubernetes resources we apply locally and those applied to GKE. This post assumes you have the latest version of Minikube installed, and are familiar with its operation.
This project includes a minikube sub-directory, containing all the Kubernetes resource files and scripts necessary to recreate the Minikube deployment example shown in this post. The three included scripts are designed to be easily adapted to a CI/CD DevOps workflow. You may need to modify the scripts to match your environment’s configuration. Note this Minikube-deployed version of the Election service relies on the external Amazon RDS database instance.
Local Database Version
To eliminate the AWS costs, I have included a second, alternate version of the Minikube Kubernetes resource files, minikube_db_local. This version deploys a single containerized PostgreSQL database instance to Minikube, as opposed to relying on the external Amazon RDS instance. Be aware, the database does not have persistent storage or an Istio sidecar proxy.
Minikube Cluster
If you do not have a running Minikube cluster, create one with the minikube start
command.
Minikube allows you to use normal kubectl
CLI commands to interact with the Minikube cluster. Using the kubectl get nodes
command, we should see a single Minikube node running the latest Kubernetes v1.10.0.
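For reference, those commands might look like the following sketch; pinning the Kubernetes version is optional and shown only to match the v1.10.0 node referenced above.

# start a local single-node cluster
minikube start --kubernetes-version v1.10.0

# confirm the single Minikube node is in a Ready state
kubectl get nodes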
Istio on Minikube
Next, install Istio following Istio’s online installation instructions. A basic Istio installation on Minikube, without the additional add-ons, should only require a single Istio install script.
If successful, you should observe a new istio-system
namespace, containing the four main Istio components: istio-ca
, istio-ingress
, istio-mixer
, and istio-pilot
.
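A quick way to verify the installation, sketched below, is to list the resources in the istio-system namespace and confirm the four components are running.

# expect istio-ca, istio-ingress, istio-mixer, and istio-pilot Pods
kubectl get pods -n istio-system

# the istio-ingress Service's NodePort is used later to reach the cluster
kubectl get svc -n istio-system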
Deploy v2 to Minikube
Next, create a Minikube Development environment, consisting of a dev
Namespace, Istio Ingress, and Secret, using the part1-create-environment.sh
script. Next, deploy v2 of the Election service to the dev
Namespace, along with an associated Route Rule, using the part2-deploy-v2.sh
script. One v2 instance should be sufficient to satisfy the testing requirements.
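Assuming you run the scripts from the project's minikube sub-directory, the two steps look roughly like the sketch below.

cd minikube

# create the dev Namespace, Istio Ingress, and Secret
sh ./part1-create-environment.sh

# deploy v2 of the Election service and its Route Rule
sh ./part2-deploy-v2.sh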
Access to v2 of the Election service on Minikube is a bit different than with GKE. When routing external HTTP requests, there is no load balancer, no external public IP address, and no public DNS or subdomains. To access the single instance of v2 running on Minikube, we use the local IP address of the Minikube cluster, obtained with the minikube ip
command. The access port required is the Node Port (nodePort
) of the istio-ingress
Service. The command is shown below (gist) and included in the part3-smoke-test.sh
script.
export GATEWAY_URL="$(minikube ip):"\
"$(kubectl get svc istio-ingress -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')"
echo $GATEWAY_URL
curl $GATEWAY_URL/v2/actuator/health && echo
The second part of our HTTP request routing is the same as with GKE, relying on Istio Route Rules. The /v2/
sub-collection resource in the HTTP request URL is rewritten and routed to the v2 election Pod by the Route Rule. To confirm v2 of the Election service is running and addressable, curl the /v2/actuator/health
endpoint. Spring Actuator’s /health
endpoint is frequently used at the end of a CI/CD server’s deployment pipeline to confirm success. The Spring Boot application can take a few minutes to fully start up and be responsive to requests, depending on the speed of your local machine.
Using the Kubernetes Dashboard, we should see our deployment of the single Election service Pod is running successfully in Minikube’s dev
namespace.
Once deployed, we run a battery of integration tests to confirm that the new v2 functionality is working as intended before deploying to GKE. In the next section of this post, we will explore the process of creating and managing Postman Collections and Postman Environments, and how to automate those Collections of tests with Newman and Jenkins.
Integration Testing
The typical reason an application is deployed to lower environments, prior to Production, is to perform application testing. Although definitions vary across organizations, testing commonly includes some or all of the following types: Integration Testing, Functional Testing, System Testing, Stress or Load Testing, Performance Testing, Security Testing, Usability Testing, Acceptance Testing, Regression Testing, Alpha and Beta Testing, and End-to-End Testing. Test teams may also refer to other testing forms, such as Whitebox (Glassbox), Blackbox Testing, Smoke, Validation, or Sanity Testing, and Happy Path Testing.
The site, softwaretestinghelp.com, defines integration testing as, ‘testing of all integrated modules to verify the combined functionality after integration is termed so. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.’
In this post, we are concerned that our integrated modules are functioning cohesively, primarily the Election service, Amazon RDS database, DNS, Istio Ingress, Route Rules, and the Istio sidecar Proxy. Unlike Unit Testing and Static Code Analysis (SCA), which are done pre-deployment, integration testing requires an application to be deployed and running in an environment.
Postman
I have chosen Postman, along with Newman, to execute a Collection of integration tests before promoting to the next environment. The integration tests confirm the deployed application’s name and version. The integration tests then perform a series of HTTP GET, POST, PUT, PATCH, and DELETE actions against the service’s resources. The integration tests verify a successful HTTP response code is returned, based on the type of request made.
Postman tests are written in JavaScript, similar to other popular, modern testing frameworks. Postman offers advanced features such as test-chaining. Tests can be chained together through the use of environment variables to store response values and pass them on to other tests. Values shared between tests are also stored in the Postman Environments. Below, we store the ID of the new candidate, the result of an HTTP POST to the /candidates
endpoint. We then use the stored candidate ID in subsequent HTTP GET, PUT, and PATCH test requests to the same /candidates
endpoint.
Environment-specific variables, such as the resource host, port, and environment sub-collection resource, are abstracted and stored as key/value pairs within Postman Environments, and called through variables in the request URL and within the tests. Thus, the same Postman Collection of tests may be run against multiple environments using different Postman Environments.
Postman Runner allows us to run multiple iterations of our Collection. We also have the option to build in delays between tests. Lastly, Postman Runner can load external JSON and CSV formatted test data, which is beyond the scope of this post.
Postman contains a simple Run Summary UI for viewing test results.
Test Automation
To support running tests from the command line, Postman provides Newman. According to Postman, Newman is a command-line collection runner for Postman. Newman offers the same functionality as Postman’s Collection Runner, all part of the newman
CLI. Newman is a Node.js module, installed globally as an npm package: npm install newman --global.
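A hedged sketch of a typical Newman invocation is shown below; the Collection and Environment file names are illustrative, and the JUnit reporter options match the report format consumed by Jenkins later in this post.

# install Newman globally
npm install newman --global

# run the Collection against the dev Environment and emit a JUnit-style report
newman run spring-postgresql-demo.postman_collection.json \
  --environment dev.postman_environment.json \
  --reporters cli,junit \
  --reporter-junit-export newman/newman-report.xml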
Typically, Development and Testing teams compose Postman Collections and define Postman Environments, locally. Teams run their tests locally in Postman, during their development cycle. Then, those same Postman Collections are executed from the command line, or more commonly as part of a CI/CD pipeline, such as with Jenkins.
Below, the same Collection of integration tests previously run in the Postman Runner UI is run from the command line, using Newman.
Jenkins
Without a doubt, Jenkins is the leading open-source CI/CD automation server. The building, testing, publishing, and deployment of microservices to Kubernetes is relatively easy with Jenkins. Generally, you would build, unit-test, push a new Docker image, and then deploy your application to Kubernetes using a series of CI/CD pipelines. Below, we see examples of these pipelines using Jenkins Blue Ocean, starting with a continuous integration pipeline, which includes unit-testing and Static Code Analysis (SCA) with SonarQube.
This is followed by a pipeline that builds the Docker Image, using the build artifact from the above pipeline, and pushes the Image to Docker Hub.
The third pipeline demonstrates building the three Kubernetes environments and deploying v1 of the Election service to the dev namespace. This pipeline is just for demonstration purposes; typically, you would separate these functions.
Spinnaker
An alternative to Jenkins for the deployment of microservices is Spinnaker, created by Netflix. According to Netflix, ‘Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.’ Spinnaker is designed to integrate easily with Jenkins, dividing responsibilities for continuous integration and delivery from deployment. Below, we see two sample Spinnaker deployment pipelines, similar to Jenkins, for deploying v1 and v2 of the Election service to the non-prod GKE cluster.
Below, Spinnaker has deployed v2 of the Election service to dev using a Highlander deployment strategy. Subsequently, Spinnaker has deployed v2 to test using a Red/Black deployment strategy, leaving the previously released v1 Server Group in place, in case a rollback is required.
Once Spinnaker has completed the deployment tasks, the Postman Collections of smoke and integration tests are executed by Newman, as part of another Jenkins CI/CD pipeline.
In this pipeline, a set of basic smoke tests is run first to ensure the new deployment is running properly, and then the integration tests are executed.
In this simple example, we have a three-stage pipeline created from a Jenkinsfile (gist).
#!groovy

def ACCOUNT = "garystafford"
def PROJECT_NAME = "spring-postgresql-demo"
def ENVIRONMENT = "dev" // assumes 'api.dev.voter-demo.com' reachable

pipeline {
  agent any
  stages {
    stage('Checkout SCM') {
      steps {
        git changelog: true, poll: false,
          branch: 'master',
          url: "https://github.com/${ACCOUNT}/${PROJECT_NAME}"
      }
    }
    stage('Smoke Test') {
      steps {
        dir('postman') {
          nodejs('nodejs') {
            sh "sh ./newman-smoke-tests-${ENVIRONMENT}.sh"
          }
          junit '**/newman/*.xml'
        }
      }
    }
    stage('Integration Tests') {
      steps {
        dir('postman') {
          nodejs('nodejs') {
            sh "sh ./newman-integration-tests-${ENVIRONMENT}.sh"
          }
          junit '**/newman/*.xml'
        }
      }
    }
  }
}
Test Results
Newman offers several options for displaying test results. For easy integration with Jenkins, Newman results can be delivered in a format that can be displayed as JUnit test reports. The JUnit test report format, XML, is a popular method of standardizing test results from different testing tools. Below is a truncated example of a test report file (gist).
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="spring-postgresql-demo-v2" time="13.339000000000002">
  <testsuite name="/candidates/{{candidateId}}" id="31cee570-95a1-4768-9ac3-3d714fc7e139" tests="1" time="0.669">
    <testcase name="Status code is 200" time="0.669"/>
  </testsuite>
  <testsuite name="/candidates/{{candidateId}}" id="a5a62fe9-6271-4c89-a076-c95bba458ef8" tests="1" time="0.575">
    <testcase name="Status code is 200" time="0.575"/>
  </testsuite>
  <testsuite name="/candidates/{{candidateId}}" id="2fc4c902-b931-4b35-b28a-7e264f40ee9c" tests="1" time="0.568">
    <testcase name="Status code is 204" time="0.568"/>
  </testsuite>
  <testsuite name="/candidates/summary" id="94fe972e-32f4-4f58-a5d5-999cacdf7460" tests="1" time="0.337">
    <testcase name="Status code is 200" time="0.337"/>
  </testsuite>
  <testsuite name="/candidates/summary/{election}" id="f8f817c8-4785-49f1-8d09-8055b84c4fc0" tests="1" time="0.351">
    <testcase name="Status code is 200" time="0.351"/>
  </testsuite>
  <testsuite name="/candidates/search/findByLastName?lastName=Paul" id="504f8741-e9d2-4f05-b1ad-c14136030f34" tests="1" time="0.256">
    <testcase name="Status code is 200" time="0.256"/>
  </testsuite>
</testsuites>
Translating Newman test results to JUnit reports allows the percentage of test cases successfully executed to be tracked over multiple deployments, a universal testing metric. Below we see the JUnit Test Reports Test Result Trend graph for a series of test runs.
Deploying to Development
Development environments typically have a rapid turnover of application versions. Many teams use their Development environment as a continuous integration environment, where every commit that successfully builds and passes all unit tests is deployed. The purpose of the CI deployments is to ensure build artifacts will successfully deploy through the CI/CD pipeline, start properly, and pass a basic set of smoke tests.
Other teams use the Development environments as an extension of their local Minikube environment. The Development environment will possess some or all of the required external integration points, which the Developer’s local Minikube environment may not. The goal of the Development environment is to help Developers ensure their application is functioning correctly and is ready for the Test teams to evaluate, prior to promotion to the Test environment.
Some external integration points, such as external payment gateways, customer relationship management (CRM) systems, content management systems (CMS), or data analytics engines, are often stubbed-out in lower environments. Generally, third-party providers only offer a limited number of parallel non-Production integration environments. While an application may pass through several non-prod environments, testing against all external integration points will only occur in one or two of those environments.
With v2 of the Election service ready for testing on GKE, we deploy it to the GKE cluster’s dev namespace using the part4a-deploy-v2-dev.sh
script. We will also delete the previous v1 version of the Election service. Similar to the v1 deployment script, the v2 scripts perform a kube-inject
command, which manually injects the Istio sidecar proxy alongside the Election service, into each election v2 Pod. The deployment script also deploys an alternate Istio Route Rule, which routes requests for the api.dev.voter-demo.com/v2/*
resources to v2 of the Election service.
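Manual sidecar injection is typically performed with istioctl kube-inject, piping the injected manifest to kubectl, as sketched below. The resource file name is illustrative, not the exact file used by the deployment script.

# inject the Istio sidecar proxy into the v2 Deployment and apply it to the dev namespace
istioctl kube-inject -f election-deployment-v2.yaml \
  | kubectl apply -n dev -f -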
Once deployed, we run our Postman Collection of integration tests with Newman or as part of a CI/CD pipeline. In the Development environment, we may choose to run a limited set of tests for the sake of expediency, or because not all external integration points are accessible.
Promotion to Test
With local Minikube and Development environment testing complete, we promote and deploy v2 of the Election service to the Test environment, using the part4b-deploy-v2-test.sh
script. In Test, we will not delete v1 of the Election service.
Often, an organization will maintain a running copy of all versions of an application currently deployed to Production, in a lower environment. Let’s look at two scenarios where this is common. First, v1 of the Election service has an issue in Production, which needs to be confirmed and may require a hot-fix by the Development team. Validation of the v1 Production bug is often done in a lower environment. The second scenario for having both versions running in an environment is when v1 and v2 both need to co-exist in Production. Organizations frequently support multiple API versions. Cutting over an entire API user-base to a new API version is often completed over a series of releases, and requires careful coordination with API consumers.
Testing All Versions
An essential role of integration testing should be to confirm that both versions of the Election service are functioning correctly, while simultaneously running in the same namespace. For example, we want to verify traffic is routed correctly, based on the HTTP request URL, to the correct version. Another common test scenario is database schema changes. Suppose we make what we believe are backward-compatible database changes to v2 of the Election service. We should be able to prove, through testing, that both the old and new versions function correctly against the latest version of the database schema.
There are different automation strategies that could be employed to test multiple versions of an application without creating separate Collections and Environments. A simple solution would be to templatize the Environments file, and then programmatically change the Postman Environment’s version
variable injected from a pipeline parameter (abridged environment file shown below).
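As a sketch of that approach, recent versions of Newman allow an Environment variable to be overridden at run time, so a pipeline parameter could drive the version under test; the file names below are illustrative.

# run the same Collection against v2 by overriding the Environment's version variable
newman run spring-postgresql-demo.postman_collection.json \
  --environment test.postman_environment.json \
  --env-var "version=v2"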
Once initial automated integration testing is complete, Test teams will typically execute additional forms of application testing if necessary, before signing off for UAT and Performance Testing to begin.
User-Acceptance Testing
With testing in the Test environments completed, we continue on to UAT. The term UAT suggests that a set of actual end-users (API consumers) of the Election service will perform their own testing. Frequently, UAT is only done for a short, fixed period of time, often with a specialized team of Testers. Issues experienced during UAT can be expensive and can impact the ability to release an application to Production on time if sign-off is delayed.
After deploying v2 of the Election service to UAT, and before opening it up to the UAT team, we would naturally want to repeat the same integration testing process we conducted in the previous Test environment. We must ensure that v2 is functioning as expected before our end-users begin their testing. This is where leveraging a tool like Jenkins makes automated integration testing more manageable and repeatable. One strategy would be to duplicate our existing Development and Test pipelines, and re-target the new pipeline to call v2 of the Election service in UAT.
Again, in a JUnit report format, we can examine individual results through the Jenkins Console.
We can also examine individual results from each test run using a specific build’s Console Output.
Testing and Instrumentation
To fully evaluate the integration test results, you must look beyond just the percentage of test cases executed successfully. It makes little sense to release a new version of an application if it passes all functional tests, but significantly increases client response times, unnecessarily increases memory consumption or wastes other compute resources, or is grossly inefficient in the number of calls it makes to the database or third-party dependencies. Often, integration testing uncovers potential performance bottlenecks that are incorporated into performance test plans.
Critical intelligence about the performance of the application can only be obtained through the use of logging and metrics collection and instrumentation. Istio provides this telemetry out-of-the-box with Zipkin, Jaeger, Service Graph, Fluentd, Prometheus, and Grafana. In the included Grafana Istio Dashboard below, we see the performance of v1 of the Election service, under test, in the Test environment. We can compare request and response payload size and timing, as well as request and response times to external integration points, such as our Amazon RDS database. We are able to observe the impact of individual test requests on the application and all its integration points.
As part of integration testing, we should monitor the Amazon RDS CloudWatch metrics. CloudWatch allows us to evaluate critical database performance metrics, such as the number of concurrent database connections, CPU utilization, read and write IOPS, Memory consumption, and disk storage requirements.
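These metrics can also be pulled programmatically. The sketch below retrieves one such metric with the AWS CLI; the DB instance identifier and time window are illustrative.

# average concurrent database connections, in one-minute periods
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name DatabaseConnections \
  --dimensions Name=DBInstanceIdentifier,Value=election-postgresql \
  --statistics Average \
  --period 60 \
  --start-time 2018-04-17T00:00:00Z \
  --end-time 2018-04-17T01:00:00Z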
A discussion of metrics starts moving us toward load and performance testing against Production service-level agreements (SLAs). Using a similar approach to integration testing, with load and performance testing, we should be able to accurately estimate the sizing requirements of our new application for Production. Load and Performance Testing helps answer questions such as what type and size of compute resources are required for our GKE Production cluster and for our Amazon RDS database, or how many compute nodes and instances (Pods) are necessary to support the expected user load.
All opinions expressed in this post are my own, and not necessarily the views of my current or past employers, or their clients.
Using soapUI to Test RESTful Web Services
Posted by Gary A. Stafford in Java Development on April 16, 2013
Introduction
There are many excellent tools available to test RESTful web services. Applications like Fiddler, cURL, Firefox with Firebug, and Google Chrome’s Advanced REST Client and REST Console are commonly used by developers and test engineers. Another powerful tool, used by many enterprise software development organizations, is soapUI by SmartBear Software.
SmartBear offers several versions, from the free open-source edition (shown here) to the full-featured soapUI Pro. Although soapUI Pro has many useful advanced features, the free edition is fine to start with. Here is a product comparison of the various editions available from SmartBear.
SmartBear’s soapUI is available for Windows, Mac OS, and Linux (shown here). The application supports a wide range of technologies, including SOAP, WSDL, REST, HTTP, HTTPS, AMF, JDBC, JMS, WS-I Integration, WS-Security, WS-Addressing, and WS-Reliable Messaging, according to the SmartBear web site. It is easy to download and install.
Testing RESTful Web Services with soapUI
In my last post, we used JDBC to map JPA entity classes to tables and views within a MySQL database. We then built RESTful web services, EJB classes, which communicated with MySQL through the entities. The RESTful web services, part of a Java Web Application, were deployed to GlassFish.
Using that post’s RESTful web services, here is a quick example of how easy it is to use the free, open-source edition of soapUI to test those services. Start by locating the address of the RESTful service’s WADL. The WADL address is displayed in the upper left corner of the NetBeans’ browser-based test page, shown in the earlier post.
Next, create a new soapUI project. Give the project the WADL address and a project name.
Using the WADL, soapUI will create a sample HTTP Request for each of the service resource’s methods (left-side of screen). After populating the sample request with any required input parameters, you make an HTTP Request to the service’s method.
In this first example, I call the Actor resource’s ‘findAll’ method using an HTTP GET method. The call to ‘http://localhost:8080/MySQLDemoService/webresources/com.mysql.entities.actor’ results in an HTTP Response with the list of Actor objects, mapped (serialized) to JSON (right-side of screen).
In this second example, I call the Film resource’s ‘Id’ method, to locate a single Film object, using the HTTP GET method. The call to ‘http://localhost:8080/MySQLDemoService/webresources/com.mysql.entities.film/719’ results in an HTTP Response with a single Film object, identified by Id 719. This time, the object is marshalled to XML, instead of JSON.
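For comparison, the same two requests can be made from the command line with curl, using the URLs from the examples above.

# call the Actor resource's findAll method
curl -i http://localhost:8080/MySQLDemoService/webresources/com.mysql.entities.actor

# retrieve the single Film object with Id 719
curl -i http://localhost:8080/MySQLDemoService/webresources/com.mysql.entities.film/719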