Originally published on the AWS APN Blog.
You’re a software vendor selling a SaaS product hosted on the cloud. As part of your solution, the SaaS product needs to interact with your customers’ cloud-based resources—their data, their applications, or perhaps their compute resources. As a vendor, you must architect your SaaS product, and its capability to interact with your customers’ cloud-based resources, to be scalable, reliable, performant, cost optimized, and most importantly, secure. In this post, we will explore several common AWS services and architectural patterns used by SaaS vendors to interact with their customers’ AWS accounts. Although multi-cloud and hybrid cloud SaaS interactions are also prevalent in many SaaS solutions, for brevity we will limit ourselves to interactions across accounts within AWS.
Reasons for Interaction
There are many reasons SaaS products need to interact with a customer’s AWS accounts. SaaS products that require some level of account interaction often fall into the categories of logging and monitoring, security, compliance, data analytics, DevOps, workflow management, and resource optimization. Products in these categories regularly interact with resources in the subscribing customer’s AWS account. Examples of interactions include the following.
- Data Analytics: A product that connects to private data sources within their customer’s cloud accounts;
- Logging and Monitoring: A product that continuously aggregates, analyzes and visualizes logs from their customer’s cloud accounts;
- Security: A product that uses ML and AI to continuously scan their customer’s cloud accounts for suspicious activities;
- Workflow Management: A product that remotely manages resources in their customer’s cloud account to achieve cost optimization;
- Security: A product that intercepts and inspects traffic, in real-time, to and from their customer’s cloud-based applications, detecting fraud and other malicious activities;
- DevOps: A product that creates resources and deploys assets to their customer’s cloud accounts;
- Compliance: A product that continuously audits their customer’s cloud account activities for regulatory compliance and corporate governance violations.
As individuals and businesses, we often interact and share resources between our own AWS Accounts. We can delegate access across our accounts using AWS Single Sign-On (SSO) or IAM Roles, referred to as cross-account access. A typical example is allowing a Developer, an IAM User in the Development account, limited access to the Production account, to troubleshoot a customer issue. We can implement cross-account VPC Sharing within our AWS Organization or VPC Peering to route traffic, privately, between two or more accounts. These methods are used when applications, deployed to different AWS Accounts, such as accounting and shipping, need to communicate with each other.
Below, we see a typical example of an Enterprise AWS account structure, modeled around the Software Development Life Cycle (SDLC) and using good DevOps practices.
Cloud services and architectural patterns that work for an individual customer of the cloud may not scale for a SaaS vendor. Implementing a scalable and reliable architecture to interact with tens, hundreds, or thousands of customers’ AWS accounts can be vastly more complex and challenging than working across a single organization’s accounts, even a large organization’s. Similarly, losing or exposing data can be devastating to an organization. Losing or exposing the data of hundreds or thousands of SaaS customers, and potentially their own customers’ data, is likely fatal to a SaaS vendor’s reputation and their business.
Below, we see a typical example of a SaaS vendor’s AWS account structure, modeled around a multi-tier pricing and support strategy. Free Trial and Business customers are pooled together using a multi-tenant architecture, while enterprise customers are isolated using a single-tenant architecture.
The tenant isolation strategy of a SaaS vendor can have a direct impact on the methods the vendor implements to integrate with their customer’s accounts. For those unfamiliar with the terms Tenancy Model or Tenant Isolation, think housing. A tenant (a SaaS vendor’s customer) might reside in a single-family house, referred to as a single-tenant architecture. The tenant is isolated or siloed from their neighbors (other SaaS customers). Alternatively, a tenant might reside in an apartment building, living adjacent to other tenants, referred to as multi-tenant architecture. Tenants (customers) share the same structure with limited isolation compared to single-tenant architecture. Multiple customer resources are pooled. To learn more about tenancy, I recommend the presentation, AWS re:Invent 2019: SaaS tenant isolation patterns, available on YouTube, by Tod Golding of AWS.
An application’s tenancy model is seldom binary. The degree to which SaaS customer’s resources are isolated or conversely, pooled, varies based on a vendor’s architecture and is influenced by the underlying technologies used by the vendor. Different tiers of the application—front-end, back-end, data—may differ in their degree and means of tenant isolation. For example, on AWS, isolation may be achieved at different levels of the network. A SaaS vendor may assign each customer to their own AWS Account, providing a very high level of isolation. Alternatively, a vendor might choose to use a single account and separate customers within their own VPC. Customers may be housed within the same VPC, but separated by compute instance or other software constructs.
Additionally, many SaaS vendors have more than one tenancy model. Frequently, a SaaS vendor will offer a multi-tier pricing structure. For example, a Business Tier, which is based on a shared, multi-tenant architecture, and an Enterprise Tier, which is based on an isolated, single-tenant architecture. Vendors may also offer a free tier or trial offer, which is almost always multi-tenant.
The cloud services and architectural patterns used by vendors to integrate with customer accounts, each have their own advantages and disadvantages. Some AWS services and architectural patterns may be inherently better at scaling to meet a SaaS vendor’s large growing customer base. They are well-suited for multi-tenant architecture. Other services and architectural patterns may provide a high level of interaction but lack scalability. They may be better suited for a single-tenant architecture.
There are several AWS services capable of enabling integrations between a vendor’s SaaS product and their customer’s resources, within their AWS accounts. Some AWS services were even designed with SaaS in mind, including AWS PrivateLink and Amazon EventBridge. Services are often combined in common architectural patterns. Below are eight examples of these services and patterns.
Cross-Account IAM Roles
According to AWS, customers can use an IAM Role to delegate access to resources that are in different AWS accounts, often referred to as a cross-account role. With IAM roles, you can establish trust relationships between your trusting account and other AWS trusted accounts. By setting up cross-account access in this way, you do not need to create individual IAM users in each account.
Using external IDs is a good way to improve the security of cross-account role handling in a SaaS solution, and they should be used by vendors who are implementing a SaaS product that uses cross-account roles. Requiring an external ID restricts assumption of the role to programmatic access through the AWS CLI, Tools for Windows PowerShell, or the AWS API.
Using cross-account roles with external IDs, a SaaS customer allows their SaaS vendor programmatic access to their AWS account. The cross-account role’s policies tightly control the permissions of the vendor (trusted account) and limit access within the customer’s account (trusting account). Vendors use cross-account access for many activities, including resource optimization, DevOps functions, patching and upgrades, and managing their agents.
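As a minimal sketch of this pattern, the trust policy attached to the customer’s cross-account role might look like the following, built here with Python’s standard library. The account ID and external ID are placeholder values, not real identifiers.

```python
import json

def build_trust_policy(vendor_account_id: str, external_id: str) -> dict:
    """Trust policy for the customer's (trusting) account: it allows the
    vendor's (trusted) account to assume the role, but only when the
    vendor supplies the agreed-upon external ID."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{vendor_account_id}:root"},
                "Action": "sts:AssumeRole",
                "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
            }
        ],
    }

# Placeholder values -- a real integration would use the vendor's actual
# account ID and a unique, vendor-generated external ID per customer.
policy = build_trust_policy("111122223333", "example-external-id-per-customer")
print(json.dumps(policy, indent=2))
```

On the vendor side, the SaaS application would then call `sts:AssumeRole` with the role’s ARN and the same external ID, receiving temporary credentials scoped by the role’s permission policies.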
AWS Transfer for SFTP
According to AWS, AWS Transfer for SFTP is a fully managed service that enables the transfer of files directly into and out of Amazon S3 using the Secure File Transfer Protocol (SFTP)—also known as Secure Shell (SSH) File Transfer Protocol. Announced in November 2018, AWS SFTP is fully compatible with the SFTP standard, connects to your Active Directory, LDAP, and other identity systems, and works with Route 53 DNS routing.
Using S3 to transfer objects (files) between a SaaS vendor’s account and their customer’s account is very common. There are multiple AWS services, including AWS Transfer for SFTP, which can be used to interact with a customer’s account using S3. A vendor may choose to pull from or push to the customer’s S3 bucket using one of these services. Alternately, a customer might push to or pull from a vendor’s S3 bucket, as shown below.
To use AWS Transfer for SFTP, you set up users and assign them each an IAM role. This role determines the level of access that they have to your S3 bucket. When you create an SFTP user, you make several decisions about user access. These include which Amazon S3 buckets the user can access, what portions of each S3 bucket are accessible, and what privileges the user has.
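A sketch of one such scoped-down policy follows, limiting an SFTP user to a single prefix of a bucket. The bucket name and per-user home prefix are hypothetical.

```python
import json

def build_sftp_user_policy(bucket: str, home_prefix: str) -> dict:
    """Limit an SFTP user to listing and transferring objects under
    their own prefix of the bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfUserFolder",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"{home_prefix}/*"]}},
            },
            {
                "Sid": "AllowUserObjectAccess",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{home_prefix}/*",
            },
        ],
    }

# "example-sftp-bucket" and "customer-a" are illustrative placeholders.
print(json.dumps(build_sftp_user_policy("example-sftp-bucket", "customer-a"), indent=2))
```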
Below, we see an example of the SaaS vendor exposing a secure FTP site, backed by S3. The customers use SFTP to exchange data between their account and their SaaS application. File transfer may be managed by the customer or by a vendor agent, deployed into the customer’s account.
In late April 2020, AWS announced the AWS Transfer Family, the aggregated name for AWS Transfer for SFTP, FTPS, and FTP. In addition to Secure File Transfer Protocol (SFTP), AWS announced the expansion of the service to add support for File Transfer Protocol over SSL (FTPS) and File Transfer Protocol (FTP).
Amazon S3 Access Points
According to AWS, Amazon S3 Access Points, a feature of S3, simplifies managing data access at scale for applications using shared data sets on S3. Announced in December 2019, Access points are unique hostnames that customers create to enforce distinct permissions and network controls for any request made through the access point. Amazon S3 Access Points can easily scale access for hundreds of applications by creating individualized access points with names and permissions customized for each application.
Access Points support AWS Identity and Access Management (IAM) resource policies that allow you to control the use of the access point by resource, user, or other conditions. For an application or user to be able to access objects through an access point, both the access point and the underlying bucket must permit the request. Using an access point policy, you can enable another account to access objects within an S3 bucket. The access point policy gives you fine-grained control over which users are allowed which permissions, such as to PUT objects.
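For illustration, an access point policy granting another account access through the access point might be built like this. The access point ARN, account ID, and object prefix are hypothetical.

```python
import json

def build_access_point_policy(access_point_arn: str, customer_account_id: str) -> dict:
    """Allow a specific external (customer) account to GET and PUT objects,
    but only through this access point and only under a shared prefix."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{customer_account_id}:root"},
                "Action": ["s3:GetObject", "s3:PutObject"],
                # Object resources under an access point use the
                # ".../object/<prefix>" ARN format.
                "Resource": f"{access_point_arn}/object/shared/*",
            }
        ],
    }

# Both identifiers below are placeholders.
ap_arn = "arn:aws:s3:us-east-1:111122223333:accesspoint/vendor-exchange"
print(json.dumps(build_access_point_policy(ap_arn, "444455556666"), indent=2))
```

Remember that the underlying bucket policy must also delegate access to the access point for requests to succeed.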
Below, we see an example of the SaaS vendor exposing an S3 Access Point. The customers use the supplied access points to exchange data between their account and their SaaS application. The interchange of data may be managed by the customer or by a vendor agent, deployed into the customer’s account.
Amazon API Gateway
According to AWS, Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the front door for applications to access data, business logic, or functionality from backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.
Announced in July 2015, Amazon API Gateway and the accompanying APIs serve as scalable, reliable, performant, and secure connections to backend resources in either the customer’s or the SaaS vendor’s account. API Gateway backend resources include Lambda functions, HTTP endpoints, VPC Links, and other AWS services, such as Amazon SQS and Kinesis. These services are often used to exchange data between a SaaS vendor’s account and their customer’s accounts. Eric Johnson of AWS gave a great presentation on API Gateway, available on YouTube, AWS re:Invent 2019: I didn’t know Amazon API Gateway did that.
Several security features can be used to secure communications between the vendor and the customer accounts when using the API Gateway. Security features include encrypting transmitted data using SSL/TLS Certificates (HTTPS), AWS WAF (Web Application Firewall) integration, Client Certificates to ensure HTTP requests to your back-end services are originating from the API Gateway, Authorization using Amazon Cognito User Pools or a Lambda function, API Keys, and VPC Links.
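As one concrete sketch of these controls, an API Gateway resource policy can restrict invocation of a private API to a specific interface VPC endpoint using the `aws:SourceVpce` condition key. The API ARN and endpoint ID below are placeholders.

```python
import json

def build_api_resource_policy(api_arn: str, vpce_id: str) -> dict:
    """Resource policy for a private API: allow invocation broadly, then
    deny any request that did not arrive through the expected interface
    VPC endpoint."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": f"{api_arn}/*",
            },
            {
                "Effect": "Deny",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": f"{api_arn}/*",
                "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
            },
        ],
    }

# Both identifiers are illustrative placeholders.
api_arn = "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5"
print(json.dumps(build_api_resource_policy(api_arn, "vpce-0exampleid"), indent=2))
```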
Below, we see an example of the SaaS vendor exposing an API, backed by an SQS queue. The customers use the API to programmatically exchange data between their account and their SaaS application. The interchange of data may be managed by the customer or by a vendor agent, deployed into the customer’s account.
VPC Peering
According to AWS, Amazon Virtual Private Cloud (Amazon VPC) enables customers to launch AWS resources into a virtual network that they have defined. A VPC peering connection is a networking connection between two VPCs that allows you to route traffic between them using private IPv4 addresses or IPv6 addresses. You can create a VPC peering connection between your VPCs, or with a VPC in another AWS account. As announced in November 2018, VPCs can be in different regions (known as inter-region VPC peering).
Below, we see an example of the SaaS vendor and the customer using VPC peering to enable direct interaction between a customer’s application, within their account, and their SaaS application, in the vendor’s account.
One of the limitations of VPC peering, which might impact its ability to scale for SaaS vendors using multi-tenant architectures, is overlapping CIDR blocks. According to AWS, you cannot create a VPC peering connection between VPCs with matching or overlapping IPv4 CIDR blocks. Given tens or hundreds of customers, VPC peering presents challenges in managing individual CIDR blocks when using a multi-tenant SaaS architectural approach.
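The overlap constraint is easy to screen for up front. A small sketch using Python’s standard ipaddress module, which a vendor could run against every already-peered CIDR block before onboarding a new customer VPC:

```python
import ipaddress

def cidrs_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two CIDR blocks overlap -- a condition under which
    a VPC peering connection cannot be created."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(cidrs_overlap("10.0.0.0/16", "10.0.128.0/17"))  # True: nested ranges
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))    # False: disjoint ranges
```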
VPC Endpoints
According to AWS, a VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an Internet Gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
There are two types of VPC endpoints. An interface endpoint, powered by AWS PrivateLink, is an elastic network interface (ENI) with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. Interface endpoints support a large and growing list of AWS services. The second type of VPC endpoint, a gateway endpoint, is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. Both Amazon S3 and Amazon DynamoDB are currently supported by gateway endpoints.
According to AWS, AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the AWS network. AWS PrivateLink makes it easy to connect services across different accounts and VPCs to simplify the network architecture significantly. James Devine of AWS delivered a great presentation on PrivateLink, available on YouTube, AWS re:Invent 2018: Best Practices for AWS PrivateLink.
AWS Service Endpoints
AWS Service Endpoints will direct the traffic to AWS services over AWS PrivateLink while keeping the network traffic within the AWS network. AWS PrivateLink enables SaaS providers to offer services that will look and feel like they are hosted directly on a private network. These services are securely accessible both from the cloud and from on-premises via AWS Direct Connect and AWS VPN, in a highly available and scalable manner.
Below, we see an example of routing API calls from the customer’s account to the SaaS vendor’s account, using an interface VPC endpoint (interface endpoint) powered by AWS PrivateLink. The customer’s requests are sent through ENIs in each private subnet to a Network Load Balancer (NLB) within the vendor’s account, using PrivateLink. There is no need for an Internet Gateway to route traffic over the public Internet.
Below, we see an example of near real-time routing messages from an application within the customer’s account to an Amazon Kinesis Data Firehose delivery stream in the SaaS vendor’s account, using an interface endpoint powered by AWS PrivateLink. Again, there is no need for an Internet Gateway to route traffic over the public Internet. The interchange of data may be managed by the customer or by a vendor agent, deployed into the customer’s account.
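For illustration, the parameters a customer might pass to EC2’s CreateVpcEndpoint API (for example, via boto3’s `create_vpc_endpoint`) to reach a vendor’s PrivateLink-backed endpoint service could be assembled as follows. All IDs and the service name are placeholders.

```python
def build_interface_endpoint_params(vpc_id: str, subnet_ids: list,
                                    security_group_id: str,
                                    service_name: str) -> dict:
    """Request parameters for an interface VPC endpoint that connects a
    customer VPC to a vendor's endpoint service over PrivateLink."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,           # an ENI is created in each subnet
        "SecurityGroupIds": [security_group_id],
        "ServiceName": service_name,       # the vendor's endpoint service name
    }

# Placeholder identifiers for illustration only.
params = build_interface_endpoint_params(
    "vpc-0abc123", ["subnet-0aaa111", "subnet-0bbb222"], "sg-0ccc333",
    "com.amazonaws.vpce.us-east-1.vpce-svc-0example")
print(params)
```

The vendor, in turn, would publish the endpoint service backed by a Network Load Balancer in their own account and accept or reject each customer’s connection request.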
Amazon EventBridge
According to AWS, Amazon EventBridge is a serverless event bus that makes it easy to connect applications using data from your applications, integrated SaaS applications, and AWS services. EventBridge delivers a stream of real-time data from event sources and routes that data to targets like AWS Lambda. You can set up routing rules to determine where to send your data to build application architectures that react in real-time to all of your data sources. EventBridge makes it easy to build event-driven applications because it takes care of event ingestion and delivery, security, authorization, and error handling for you. Mike Deck of AWS gave an excellent presentation on EventBridge, available on YouTube, AWS re:Invent 2019: Building event-driven architectures w/ Amazon EventBridge.
Amazon EventBridge SaaS Integrations
Amazon EventBridge ingests data from supported SaaS partners and routes it to AWS service targets. With EventBridge, you can use data from your SaaS applications to trigger workflows for customer support, business operations, and more. Currently supported SaaS partners include Auth0, Datadog, Epsagon, MongoDB, New Relic, and PagerDuty, amongst others.
Below, we see an example of routing custom events from the SaaS vendor’s account to multiple targets within the customer’s account using Amazon EventBridge. We can also use interface endpoints powered by AWS PrivateLink within both accounts to eliminate the need to send data over the public Internet. Traffic to targets of EventBridge also remains on the AWS global infrastructure.
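As a sketch, a single custom event for EventBridge’s PutEvents API (for example, boto3’s `put_events(Entries=[entry])`) might be constructed like this. The bus name, source, and detail fields are illustrative placeholders.

```python
import json

def build_custom_event(bus_name: str) -> dict:
    """One entry for EventBridge's PutEvents API. EventBridge rules on the
    target bus match on Source and DetailType to route the event."""
    return {
        "EventBusName": bus_name,
        "Source": "com.example.saas-vendor",          # hypothetical source
        "DetailType": "usage.report.ready",           # hypothetical type
        "Detail": json.dumps({
            "customerId": "cust-123",
            "objectKey": "reports/2020/05.json",
        }),
    }

entry = build_custom_event("vendor-events")
print(json.dumps(entry, indent=2))
```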
JDBC and ODBC Database Connections
Lastly, for data sources on AWS, interactions between vendor and customer AWS accounts can also be made using JDBC (Java Database Connectivity) and ODBC (Open Database Connectivity) database connections. Services such as Amazon Redshift, Amazon Athena, Amazon RDS, and Amazon Aurora can all be accessed using one or both of these two methods. There are multiple ways to enhance the security of these connections, including SSL/TLS connections, data encryption at rest and in flight, AWS IAM service-linked roles, VPCs, and VPC peering.
Below, we see an example of the application in the SaaS vendor’s account communicating over JDBC, with a private Amazon RDS instance in the customer’s account. We can communicate between private subnets, within two different accounts using VPC peering. Like PrivateLink, traffic stays on the global AWS backbone, and never traverses the public Internet.
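A minimal sketch of the connection string the vendor’s application might use for a PostgreSQL-compatible RDS instance, with TLS enforced. The private hostname and database name are hypothetical.

```python
def build_jdbc_url(host: str, port: int, database: str) -> str:
    """JDBC URL for a PostgreSQL-compatible RDS instance reached over the
    peered VPC connection; the ssl parameters require an encrypted
    connection and full certificate verification."""
    return (f"jdbc:postgresql://{host}:{port}/{database}"
            f"?ssl=true&sslmode=verify-full")

# Placeholder private DNS name resolvable over the peering connection.
url = build_jdbc_url("customer-db.internal.example", 5432, "appdb")
print(url)
```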
In this post, we examined several AWS services that help a SaaS vendor interact with their customer’s accounts. In future posts, we will take a more in-depth look at the advantages and disadvantages of each service with respect to their scalability and a vendor’s tenant isolation strategy.
This blog represents my own viewpoints and not those of my employer, Amazon Web Services.