Posts Tagged Enterprise
Docker Enterprise Edition: Multi-Environment, Single Control Plane Architecture for AWS
Posted by Gary A. Stafford in AWS, Cloud, Continuous Delivery, DevOps, Enterprise Software Development, Software Development on September 6, 2017
Designing a successful, cloud-based containerized application platform requires a balance of performance and security with cost, reliability, and manageability. Ensuring that a platform meets all functional and non-functional requirements, while remaining within budget and easy to maintain, can be challenging.
As Cloud Architect and DevOps Team Lead, I recently participated in the development of two architecturally similar, lightweight, cloud-based containerized application platforms. From the start, both platforms were architected to maximize security and performance, while minimizing cost and operational complexity. The latter platform was built on AWS with Docker Enterprise Edition.
Docker Enterprise Edition
Released in March of this year, Docker Enterprise Edition (Docker EE) is a secure, full-featured container-based management platform. There are currently eight versions of Docker EE, available for Windows Server, Azure, AWS, and multiple Linux distros, including RHEL, CentOS, Ubuntu, SUSE, and Oracle Linux.
Docker EE is one of several production-grade container orchestration Platforms as a Service (PaaS). Some of the other container platforms in this category include:
- Google Container Engine (GKE, based on Google’s Kubernetes)
- AWS EC2 Container Service (ECS)
- Microsoft Azure Container Service (ACS)
- Mesosphere Enterprise DC/OS (based on Apache Mesos)
- Red Hat OpenShift (based on Kubernetes)
- Rancher Labs (based on Kubernetes, Cattle, Mesos, or Swarm)
Docker Community Edition (CE), Kubernetes, and Apache Mesos are free and open source. Some providers, such as Rancher Labs, offer enterprise support for an additional fee. Cloud-based services, such as Red Hat OpenShift Online, AWS, GKE, and ACS, charge typical monthly usage fees. Docker EE, similar to Mesosphere Enterprise DC/OS and Red Hat OpenShift, is priced on a per-node, annual subscription model.
Docker EE is currently offered in three subscription tiers: Basic, Standard, and Advanced. Additionally, Docker offers Business Day and Business Critical support. Docker EE’s Advanced tier adds several significant features, including secure multi-tenancy with node-based isolation, as well as image security scanning and continuous vulnerability monitoring as part of Docker Trusted Registry.
Architecting for Affordability and Maintainability
An enterprise-scale application platform, built on public cloud infrastructure, such as AWS, with a licensed Containers-as-a-Service (CaaS) platform, such as Docker EE, can quickly become complex and costly to build and maintain. Based on current list pricing, the cost of a single Linux node ranges from USD 75 per month for Basic with Business Day support, up to USD 300 per month for Docker Enterprise Edition Advanced with Business Critical support. Although cost is relative to the value generated by the application platform, architects should nonetheless always strive to avoid unnecessary complexity and cost.
Recurring operational costs, such as licensed software subscriptions, support contracts, and monthly cloud-infrastructure charges, are often overlooked by project teams during the build phase. Accurately forecasting the recurring costs of a fully functional Production platform, under expected normal load, is essential. Teams often overlook how quickly Docker image registries, databases, data lakes, and data warehouses swell, inflating the monthly cloud-infrastructure charges required to maintain the platform. The need to control cloud costs has led to the growth of third-party cloud management solutions, such as the CloudCheckr Cloud Management Platform (CMP).
Shared Docker Environment Model
Most software development projects require multiple environments in which to continuously develop, test, demonstrate, stage, and release code. Creating separate environments, replete with their own Docker EE Universal Control Plane (aka Control Plane or UCP), Docker Trusted Registry (DTR), AWS infrastructure, and third-party components, would guarantee a high level of isolation and performance. However, replicating all of these elements in each environment would add considerable build and run costs, as well as unnecessary complexity.
On both recent projects, we chose to create a single AWS Virtual Private Cloud (VPC), which contained all of the non-production environments required by our project teams. In parallel, we built an entirely separate Production VPC for the Production environment. I’ve seen this same pattern repeated with Red Hat OpenStack and Microsoft Azure.
Production
Isolating Production from the lower environments is essential to ensure security, and to prevent non-production traffic from impacting the performance of Production. Corporate compliance and regulatory policies often dictate complete Production isolation. Separate infrastructure, security appliances, role-based access controls (RBAC), configuration and secret management, and encryption keys and SSL certificates are all required.
For complete separation of Production, different AWS accounts are frequently used. Separate AWS accounts provide separate billing, usage reporting, and AWS Identity and Access Management (IAM), amongst other advantages.
Performance and Staging
Unlike Production, there are few reasons to completely isolate the lower environments from one another. The exceptions I’ve encountered are Performance and Staging. These two environments are frequently separated from other environments to ensure the accuracy of performance testing and release staging activities. Performance testing, in particular, can generate enormous load on systems, which, if not isolated, will impair adjacent environments, applications, and monitoring systems.
On a few recent projects, to reduce cost and complexity, we repurposed the UAT environment for performance testing, once user-acceptance testing was complete. Performance testing was conducted during off-peak development and testing periods, with access to adjacent environments blocked.
The multi-purpose UAT environment further served as a Staging environment. Applications were deployed and released to the UAT and Performance environments, following a nearly identical process to the one used for Production. Hotfixes to Production were also tested in this environment.
Example of Shared Environments
To demonstrate how to architect a shared non-production Docker EE environment that minimizes cost and complexity, let’s examine the example shown below. In the example, built on AWS with Docker EE, there are four typical non-production environments (CI/CD, Development, Test, and UAT) and one Production environment.
In the example, there are two separate VPCs, the Production VPC, and the Non-Production VPC. There is no reason to configure VPC Peering between the two VPCs, as there is no need for direct communication between the two. Within the Non-Production VPC, to the left in the diagram, there is a cluster of three Docker EE UCP Manager EC2 nodes, a cluster of three DTR Worker EC2 nodes, and the four environments, consisting of varying numbers of EC2 Worker nodes. Production, to the right of the diagram, has its own cluster of three UCP Manager EC2 nodes and a cluster of six EC2 Worker nodes.
Single Non-Production UCP
As a primary means of reducing cost and complexity, in the example, a single, minimally-sized Docker EE UCP cluster of three Manager nodes orchestrates activities across all four non-production environments. The alternative would be a separate UCP cluster for each environment; that would mean nine additional Manager nodes to configure and maintain.
The UCP users, teams, organizations, access controls, Docker Secrets, overlay networks, and other UCP features, for all non-production environments, are managed through the single Control Plane. All deployments to all the non-production environments, from the CI/CD server, are performed through the single Control Plane. Each UCP Manager node is deployed to a different AWS Availability Zone (AZ) to ensure high-availability.
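For context, operators and the CI/CD server typically reach the single Control Plane by sourcing a UCP client bundle, which points the local Docker CLI at UCP over TLS. A minimal sketch, assuming a client bundle has already been downloaded and unzipped (the ucp-bundle-admin directory name is just a placeholder):

cd ucp-bundle-admin
eval "$(<env.sh)"   # exports DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH
docker node ls      # the CLI now operates against the UCP cluster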
Shared DTR
As another means of reducing cost and complexity, in the example, a single Docker EE DTR cluster of three Worker nodes contains all Docker image repositories. Both the non-production and the Production environments use this DTR as a secure source of all Docker images. Not having to replicate image repositories, access controls, and infrastructure, or figure out how to migrate images between two separate DTR clusters, saves significant time, cost, and complexity. Each DTR Worker node is also deployed to a different AZ to ensure high availability.
Using a shared DTR between non-production and Production is an important decision your project team needs to weigh carefully. A single, shared DTR comes with inherent availability and security risks, which should be understood in advance.
Separate Non-Production Worker Nodes
In the shared non-production environments example, each environment has dedicated AWS EC2 instances configured as Docker EE Worker nodes. The number of Worker nodes is determined by the requirements for each environment, as dictated by the project’s Development, Testing, Security, and DevOps teams. Like the UCP and DTR clusters, each Worker node, within an individual environment, is deployed to a different AZ to ensure high-availability and mimic the Production architecture.
Minimizing the number of Worker nodes in each environment, as well as the type and size of each EC2 node, offers significant potential cost and administrative savings.
Separate Environment Ingress
In the example, the UCP, DTR, and each of the four environments are accessed through separate URLs, using AWS Hosted Zone CNAME records (subdomains). Encrypted HTTPS traffic is routed through a series of security appliances, depending on traffic type, to individual private AWS Elastic Load Balancers (ELBs): one for each UCP, one for the DTR, and one for each of the environments. Each ELB load-balances traffic to the Docker EE nodes associated with the specific traffic type. All firewalls, ELBs, and the UCP and DTR are secured with a high-grade wildcard SSL certificate.
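For illustration only, the per-environment CNAME records could be managed with the AWS CLI, along these lines; the hosted zone ID, subdomain, and ELB DNS name below are hypothetical placeholders:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "dev.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "internal-dev-elb-1234567890.us-east-1.elb.amazonaws.com"}]
      }
    }]
  }'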
Separate Data Sources
In the shared non-production environments example, there is one Amazon Relational Database Service (RDS) instance in non-Production and one in Production. Both RDS instances are replicated across multiple Availability Zones. Within the single shared non-production RDS instance, there are four separate databases, one per non-production environment. This architecture sacrifices the potential database performance of separate RDS instances in exchange for reduced cost and complexity.
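As a minimal sketch of the shared-instance approach, assuming a PostgreSQL-engine RDS instance and the psql client (the endpoint and user below are hypothetical), one database can be created per non-production environment:

# create one database per non-production environment on the shared RDS instance
for env in ci dev test uat; do
  psql -h nonprod.abc123.us-east-1.rds.amazonaws.com -U dbadmin \
    -c "CREATE DATABASE ${env};"
done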
Maintaining Environment Separation
Node Labels
To obtain sufficient environment separation while using a single UCP, each Docker EE Worker node is tagged with an environment node label. The node label indicates which environment the Worker node is associated with. For example, in the screenshot below, a Worker node is assigned to the Development environment by tagging it with the key of environment and the value of dev.
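For reference, the same label can also be applied from the command line, using Docker’s node update command; a minimal sketch, where the node name is a placeholder:

docker node update --label-add environment=dev worker-node-01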
* The Docker EE screens shown here are from UCP 2.1.5, not the recently released 2.2.x, which has an updated UI appearance.

Each service’s Docker Compose file uses deployment placement constraints, which indicate where Docker should or should not deploy services. In the hello-world Docker Compose file example below, the node.labels.environment constraint is set to the ENVIRONMENT variable, which is set during container deployment by the CI/CD server. This constraint directs Docker to deploy the hello-world service only to nodes whose node.labels.environment label value matches the ENVIRONMENT variable value.
# Hello World Service Stack
# DTR_URL: Docker Trusted Registry URL
# IMAGE: Docker Image to deploy
# ENVIRONMENT: Environment to deploy into
version: '3.2'

services:
  hello-world:
    image: ${DTR_URL}/${IMAGE}
    deploy:
      placement:
        constraints:
          - node.role == worker
          - node.labels.environment == ${ENVIRONMENT}
      replicas: 4
      update_config:
        parallelism: 4
        delay: 10s
      restart_policy:
        condition: any
        max_attempts: 3
        delay: 10s
    logging:
      driver: fluentd
      options:
        tag: docker.{{.Name}}
        env: SERVICE_NAME,ENVIRONMENT
    environment:
      SERVICE_NAME: hello-world
      ENVIRONMENT: ${ENVIRONMENT}
    command: "java \
      -Dspring.profiles.active=${ENVIRONMENT} \
      -Djava.security.egd=file:/dev/./urandom \
      -jar hello-world.jar"
Deploying from CI/CD Server
The ENVIRONMENT value is set as an environment variable, which is then used by the CI/CD server, running a docker stack deploy or a docker service update command, within a deployment pipeline. Below is an example of how to use the environment variable as part of a Jenkins Pipeline-as-Code Jenkinsfile.
#!/usr/bin/env groovy

// Deploy Hello World Service Stack

node('java') {
  properties([parameters([
    choice(choices: ["ci", "dev", "test", "uat"].join("\n"),
      description: 'Environment', name: 'ENVIRONMENT')
  ])])

  stage('Git Checkout') {
    dir('service') {
      git branch: 'master',
        credentialsId: 'jenkins_github_credentials',
        url: 'ssh://git@garystafford/hello-world.git'
    }
    dir('credentials') {
      git branch: 'master',
        credentialsId: 'jenkins_github_credentials',
        url: 'ssh://git@garystafford/ucp-bundle-jenkins.git'
    }
  }

  dir('service') {
    stage('Build and Unit Test') {
      sh './gradlew clean cleanTest build'
    }

    withEnv(["IMAGE=hello-world:${BUILD_NUMBER}"]) {
      stage('Docker Build Image') {
        // note: the original listed 'docker_username' for both bindings;
        // the password binding presumably references a 'docker_password' credential
        withCredentials([[$class: 'StringBinding',
                          credentialsId: 'docker_password',
                          variable: 'DOCKER_PASSWORD'],
                         [$class: 'StringBinding',
                          credentialsId: 'docker_username',
                          variable: 'DOCKER_USERNAME']]) {
          sh "docker login -u ${DOCKER_USERNAME} -p ${DOCKER_PASSWORD} ${DTR_URL}"
        }
        sh "docker build --no-cache -t ${DTR_URL}/${IMAGE} ."
      }

      stage('Docker Push Image') {
        sh "docker push ${DTR_URL}/${IMAGE}"
      }

      withEnv(['DOCKER_TLS_VERIFY=1',
               "DOCKER_CERT_PATH=${WORKSPACE}/credentials/",
               "DOCKER_HOST=${DOCKER_HOST}"]) {
        stage('Docker Stack Deploy') {
          try {
            sh "docker service rm ${params.ENVIRONMENT}_hello-world"
            sh 'sleep 30s' // wait for the service to be completely removed, if it exists
          } catch (err) {
            echo "Error: ${err}" // catch and move on if it doesn't already exist
          }
          sh "docker stack deploy \
            --compose-file=docker-compose.yml ${params.ENVIRONMENT}"
        }
      }
    }
  }
}
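As noted above, a docker service update may be used in place of removing and redeploying the stack. A sketch of that alternative, assuming the service name follows the default <stack>_<service> naming convention used elsewhere in this post:

docker service update \
  --image ${DTR_URL}/hello-world:${BUILD_NUMBER} \
  ${ENVIRONMENT}_hello-world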
Centralized Logging and Metrics Collection
Centralized logging and metrics collection systems are used for application and infrastructure dashboards, monitoring, and alerting. In the shared non-production environment examples, the centralized logging and metrics collection systems are internal to each VPC, but reside on separate EC2 instances and are not registered with the Control Plane. In this way, the logging and metrics collection systems should not impact the reliability, performance, and security of the applications running within Docker EE. In the example, Worker nodes run a containerized copy of fluentd, which collects and pushes logs to ELK’s Elasticsearch.
Logging and metrics collection systems could also be supplied by external cloud-based SaaS providers, such as Loggly, Sysdig and Datadog, or by the platform’s cloud-provider, such as Amazon CloudWatch.
With four environments running multiple containerized copies of each service, figuring out which log entry came from which service instance requires multiple data points. As shown in the example Kibana UI below, the environment value, along with the service name and container ID, as well as the git commit hash and branch, are added to each log entry for easier troubleshooting. To include the environment, the value of the ENVIRONMENT variable is passed to Docker’s fluentd log driver as an env option. This same labeling method is used to tag metrics.
Separate Docker Service Stacks
For further environment separation within the single Control Plane, services are deployed as part of environment-specific Docker service stacks. Each service stack contains all the services that comprise an application running within a single environment. Multiple stacks may be required to support multiple, distinct applications within the same environment.
For example, in the screenshot below, a hello-world service container, built from a Docker image tagged with build 59 of the Jenkins continuous integration pipeline, is deployed as part of both the Development (dev) and Test service stacks. The CI/CD and UAT service stacks each contain different versions of the hello-world service.
Separate Docker Overlay Networks
For additional environment separation within the single non-production UCP, all Docker service stacks associated with an environment reside on the same Docker overlay network. Overlay networks manage communications among the Docker Worker nodes, enabling service-to-service communication for all services on the same overlay network while isolating services running on one network from services running on another network.
In the example screenshot below, the hello-world service, a member of the test service stack, is running on the test_default overlay network.
Cleaning Up
Having distinct environment-centric Docker service stacks and overlay networks makes it easy to clean up an environment, without impacting adjacent environments. Both service stacks and overlay networks can be removed to clear an environment’s contents.
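A sketch of clearing the Test environment, for example:

# remove the Test environment's service stack and its services
docker stack rm test
# stack removal also deletes any networks the stack created;
# a manually created overlay network would be removed separately
docker network rm test_default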
Separate Performance Environment
In the alternative example below, a Performance environment has been added to the Non-Production VPC. To ensure a higher level of isolation, the Performance environment has its own UCP, RDS, and ELBs. The Performance environment shares the DTR, as well as the security, logging, and monitoring components, with the rest of the non-production environments.
Below, the Performance environment has half the number of Worker nodes as Production. Performance test results can then be scaled to estimate expected Production performance on the larger node count. Alternately, the number of nodes can be scaled up temporarily to match Production, then scaled back down to a minimum once testing is complete.
Shared DevOps Tooling
All environments leverage shared Development and DevOps resources, deployed to a separate VPC. Resources include Agile Application Lifecycle Management (ALM), such as JIRA or CA Agile Central; source control repository management (SCM), such as GitLab or Bitbucket; binary repository management, such as Artifactory or Nexus; and a CI/CD solution, such as Jenkins, TeamCity, or Bamboo.
From the DevOps VPC, Docker images are pushed to and pulled from the DTR in the Non-Production VPC. Deployments of container-based applications are executed from the DevOps VPC CI/CD server to the non-production, Performance, and Production UCPs. Separate DevOps CI/CD pipelines and access controls are essential in maintaining the separation of the non-production and Production environments.
Complete Platform
Several common components found in a Docker EE cloud-based AWS platform were discussed in this post. However, a complete AWS application platform has many more moving parts. Below is a comprehensive list of components, including DevOps tooling, organized into two categories: 1) common components that can potentially be shared across the non-production environments to save cost and complexity, and 2) components that should be replicated in each non-production environment for security and performance.
Shared Non-Production Components:
- AWS
  - Virtual Private Cloud (VPC), Region, Availability Zones
  - Route Tables, Network ACLs, Internet Gateways
  - Subnets
  - Some Security Groups
  - IAM Groups, Users, Roles, Policies (RBAC)
  - Relational Database Service (RDS)
  - ElastiCache
  - API Gateway, Lambdas
  - S3 Buckets
  - Bastion Servers, NAT Gateways
  - Route 53 Hosted Zone (Registered Domain)
  - EC2 Key Pairs
  - Hardened Linux AMI
- Docker EE
  - UCP and EC2 Manager Nodes
  - DTR and EC2 Worker Nodes
  - UCP and DTR Users, Teams, Organizations
  - DTR Image Repositories
  - Secret Management
- Third-Party Components/Products
  - SSL Certificates
  - Security Components: Firewalls, Virus Scanning, VPN Servers
  - Container Security
  - End-User IAM
  - Directory Service
  - Log Aggregation
  - Metric Collection
  - Monitoring, Alerting
  - Configuration and Secret Management
- DevOps
  - CI/CD Pipelines as Code
  - Infrastructure as Code
  - Source Code Repositories
  - Binary Artifact Repositories
Isolated Non-Production Components:
- AWS
  - Route 53 Hosted Zones and Associated Records
  - Elastic Load Balancers (ELB)
  - Elastic Compute Cloud (EC2) Worker Nodes
  - Elastic IPs
  - ELB and EC2 Security Groups
  - RDS Databases (Single RDS Instance with Separate Databases)
All opinions in this post are my own and not necessarily the views of my current employer or their clients.
Software Delivery: Evaluating Risk within the Enterprise
Posted by Gary A. Stafford in Build Automation, DevOps, Enterprise Software Development, Software Development on November 9, 2014
Introduction
Many vendor whitepapers, industry publications, blog posts, podcasts, and e-books extol the best practices in software development and delivery. Best practices include industry-standard concepts, such as Agile, DevOps, TDD, continuous integration, and continuous delivery. Generally, these best practices all strive to improve the process of delivering software enhancements and bug fixes to customers.
Rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead. – Wikipedia
Most learning resources present one of two idealized environments: ‘applications as islands’ and the ‘utopian enterprise’. I am often guilty of tailoring my own materials to one of these two idealized environments, as well. Neither ‘applications as islands’ nor the ‘utopian enterprise’ best models the typical enterprise software environments in which many of us work.
Applications as Islands
The ‘applications as islands’ environment is one of completely isolated application stacks. These types of environments have multiple application stacks, consisting of web, mobile, and desktop components, services, data sources, utilities and scripts, messaging and reporting components, and so forth. Unrealistically, each application stack is completely isolated from the other application stacks within the same environment.
The Utopian Enterprise
The ‘utopian enterprise’ environments have multiple application stacks with multiple shared components. However, they are built, unrealistically, using consistent and modern architectural patterns and compatible technology stacks. They are designed from the ground up to be compartmentalized, scalable, and highly risk-tolerant to changes. They often avoid the challenges of monolithic legacy applications. The closest things in the real world are probably industry trendsetters, such as Facebook, Etsy, Amazon, and Twitter. We all probably wish we could evolve our own software environments into one of these Utopias.
Complexity and Risk
As organizations continue to evolve their software, they naturally increase its overall complexity, and thereby the challenge of effectively delivering reliable and performant software. In this post, I will explore the challenges of software delivery as a software environment grows in complexity. Specifically, I will focus on how to evaluate the level of risk based on software changes made to various components within the software environment.
Sensitivity and Impact
As we examine the level of risk introduced by software changes within the environment, two aspects of risk are inescapable: sensitivity and impact. Sensitivity will be defined as the potential degree to which one component, such as an application, service, or data source, is affected by changes to other components within the same software environment. How sensitive is ‘Application A’ to changes made to other components within the same software environment, on which ‘Application A’ is directly or indirectly dependent?
Impact will be defined as the potential effect a component’s changes have on other components within the software environment. Teams tend to only evaluate the impact of changes to the immediate component or application stack. They do not sufficiently consider how those changes impact the components that are directly and indirectly dependent on them. What level of impact do changes to ‘Service B’ have on all other components within the software environment that are directly and indirectly dependent on ‘Service B’?
Notice I use the word potential. Any change has the potential to introduce risk. The level of risk varies, based on the type and volume of changes. A few simple changes should have a low potential for impact, as opposed to a high number of changes, or more complex changes. For example, changing an internal error message logged by a particular service operation should present a very low risk, as opposed to rewriting that operation’s complex algorithm for calculating a customer’s creditworthiness. The potential impact of those two types of changes on dependent components varies significantly.
Measuring Risk
For both sensitivity to change and impact of change, I will use a color-coded scale to subjectively assign a level of potential risk to each component within a given software environment. The scale ranges from ‘Low’, to ‘Moderate’, to ‘High’, to ‘Very High’. Using the scale, it is possible to ‘heat map’ a software environment, based on the level of risk from changes.
Independent Aspects of Risk
Sensitivity and impact are two independent aspects of risk. Changes to one component may have a ‘Low’ potential impact on all other components within the environment. While at the same time, that same component may have a ‘High’ sensitivity to changes made to other components within the environment. Alternatively, a component may have a ‘Very High’ risk for potential impact on multiple components within the environment. At the same time, that same component may have a ‘Low’ potential sensitivity to changes made to other components. Sensitivity and impact do not parallel each other.
Growing Complexity
Let’s look at how sensitivity and impact change as we increase the software environment’s complexity. In the first example, we will look at one of the two environments I described earlier, isolated applications. Applications may have their own web and mobile components, SOAP or RESTful services, data sources, utilities, scheduled tasks, and so forth. However, the applications do not depend on each other or components outside their own immediate application stack; the applications are self-contained.
When making changes in this type of environment, the real potential impact is to the overall stability, security, and performance of the individual applications themselves. As long as they remain in isolation, the applications will have no impact on each other. Therefore, each application’s potential sensitivity to changes, and its impact on other applications, is ‘Low’.
Shared Components
A slightly more complex example is a software environment in which one or more applications have a dependency on a component outside their immediate application stack. For example, a healthcare provider develops a Windows-based application to track their employees’ work schedules (Application A). In addition, they develop a web application to track patient appointments (Application B). Lastly, they offer a client-facing mobile application for patients to track personal fitness and nutrition goals (Application C). Applications B and C share a common set of services and a database for managing patient data.
Software changes made to Applications A, B, and C, should have no effect on other components within the software environment. However, Applications B and C are potentially impacted by changes made to either the Services Layer or Data Layer. The Services Layer has ‘High’ potential impact to the software environment. Lastly, the Data Layer should not be directly impacted by changes made to the Services Layer or Applications. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B and C. Therefore, the Data Layer’s potential impact on other dependent components within the environment is ‘Very High’.
Multiple Shared Components
An even more complex example is a software environment in which multiple applications have one or more dependencies on multiple components outside their immediate application stack (many-to-many).
Take, for example, a small financial institution. They have a ‘legacy’ COBOL-based application for managing their commercial mortgage business (Application A). They also have an older J2EE-based application, acquired through a business merger, for managing their commercial banking relationships (Application B). Next, they have a relatively new Java EE-based investment banking application to manage their retail customers (Application C). Lastly, they have a web-based, client-facing application for secure, online retail banking (Application D).
Since both Application A and B serve commercial clients, it is necessary to send financial data between the two application stacks. Since both applications are built on different, older technologies, the development team built a Custom Messaging Middleware component to connect the two applications. The Custom Messaging Middleware component receives, transforms, and delivers messages between the two applications.
Changes made to Applications C and D should have no impact on other components within the software environment. However, changes made to either Application A or B have the potential to indirectly affect the ability to successfully communicate with the other application, via the Custom Messaging Middleware. Changes to the Custom Messaging Middleware have the potential to affect both Applications A and B. The Custom Messaging Middleware has a ‘Moderate’ potential sensitivity to risk, versus ‘Low’, because one could argue that changes to either Application A or Application B’s messaging format could impact the Custom Messaging Middleware’s ability to properly process that application’s messages and successfully deliver them to the opposite application.
Applications B, C, and D have a direct dependency on the Services Layer, and indirectly on the Data Layer. Therefore, the potential impact of changes to the Services Layer on other components is arguably higher than in the last example. The Services Layer’s potential impact on other components is ‘Very High’.
Since Application B has a direct dependency on both the Messaging Middleware and the Services Layer, it has a higher sensitivity to changes than the other three applications. Application B’s potential sensitivity to changes by other components is ‘Very High’.
Changes made to the Services Layer or the Applications will not affect the Data Layer. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B, C, and D. Therefore, the Data Layer’s potential impact on the software environment is ‘Very High’.
Small Enterprise
The last example of increasing complexity is an environment in which even more applications are dependent on even more components. Additionally, there may be different types of components, such as a common UI and third-party APIs, which only increase the complexity of the dependencies. Although this example is nowhere near as complex as many enterprise software environments, it does begin to reflect their intricate, interdependent structure.
Let’s use an example of a large web-based retailer. The retailer has a standalone ERM application for managing their wholesale purchasing and product distribution (Application A). Next, they have their primary client-facing storefront (Application B). They also have a separate application to handle customer accounts (Application C). Lastly, they have an application that manages their online media retail business and media storage (Application D).
In addition to the Common Services Layer, Common Data Layer, and Custom Messaging Middleware, as seen in earlier examples, the retailer has two other components in their environment, a Common Web User Interface (UI) and a Web API. The Web UI provides the customer with a seamless branded experience, no matter which application they use: Application B, C, or D. The customer enters the Common Web UI and has all three applications’ features seamlessly available to them.
The retailer also exposes a RESTful Web API for its marketing affiliates. Third parties can develop a variety of applications that drive sales to the retailer, in return for a sales commission.
In the earlier examples, individual applications had separate points of entry. However, in this example, the Common Web UI provides a single point of entry for users of Applications B, C, and D. Having a single point of entry also introduces a single point of failure for all three applications. Thus, the potential risk to the retailer and their customers is much greater. The Common Web UI’s potential impact on other components is ‘Very High’.
A single point of entry also introduces a single point of failure.
The potential sensitivity of the Common Web UI to changes comes from its direct dependency on the Services Layer, and indirectly on the Data Layer. Additionally, one could argue that since the Common Web UI displays the three Applications, it is also sensitive to changes made by those applications. If one of those applications becomes impaired due to a bad change, the impairment would appear to affect the Web UI’s functionality. The Common UI’s potential sensitivity to change is ‘High’.
The Web API is similar to the Common Web UI, in terms of potential sensitivity and impact. The potential impact of changes to the Web API is ‘Very High’, since a defect there could result in the potential impairment of the retailer’s affiliate applications. The potential sensitivity of the Web API to changes comes from its direct dependency on the Services Layer, and indirectly on the Data Layer. The Web API’s potential sensitivity to change is ‘High’. There is very little chance of potential impact to the Web API from the retailer’s affiliate applications.
Impact of Key Components
Lastly, as systems grow in complexity, certain components often become so key, they have the potential to impact the entire environment, a true single point of failure. Below, note the potential impact of changes to the Common Services Layer on all other components. As the software environment has grown in complexity, the Common Services Layer sits at the heart of the system. The Services Layer has multiple components directly dependent on it (e.g., Application C), as well as other components indirectly dependent on it (e.g., Third-Party Applications). It is also the only point of access to and from the Common Data Layer.
There are steps organizations can take to mitigate the potential risk caused by changes to key components, like the Services Layer. Areas organizations commonly focus on to reduce risk include higher code quality, increased test coverage, improved performance, fault tolerance, system redundancy, and rollback capabilities. Additionally, management should more thoroughly scrutinize proposed software changes to key components, balancing new features with the need for stability, availability, and performance.
Management must balance the need for new features with the need for stability, availability, and performance.
Specific to services, organizations often look to decouple larger services, creating smaller, more focused services. Better separation of concerns increases the likelihood that potential impairments caused by code defects are isolated to a smaller subset of functionality.
Conclusion
In this brief post, we examined one potential risk to delivering reliable software: the impact of software changes. There are many risks to delivering reliable software. Once all sources of risk are identified and quantified, the overall level of risk to delivering reliable software can be assessed, and steps taken to reduce the potential impact.
Deploying Applications to WebLogic Server on Oracle’s Pre-Built Development VM
Posted by Gary A. Stafford in DevOps, Enterprise Software Development, Java Development, Software Development on May 24, 2013
Create a new WebLogic Server domain on Oracle’s Pre-built Development VM. Remotely deploy a sample web application to the domain from a remote machine.
Introduction
In my last two posts, Using Oracle’s Pre-Built Enterprise Java VM for Development Testing and Resizing Oracle’s Pre-Built Development Virtual Machines, I introduced Oracle’s Pre-Built Enterprise Java Development VM, aka a ‘virtual appliance’. Oracle has provided ready-made VMs that would take a team of IT professionals days to assemble. The Oracle Linux 5 OS-based VM has almost everything that comprises a basic enterprise test and production environment based on the Oracle/Java technology stack. The VM includes Java JDK 1.6+, WebLogic Server, Coherence, TopLink, Subversion, Hudson, Maven, NetBeans, Enterprise Pack for Eclipse, and so forth.
One of the first things you will probably want to do, once your Oracle’s Pre-Built Enterprise Java Development VM is up and running, is deploy an application to WebLogic Server. According to Oracle, WebLogic Server is ‘a scalable, enterprise-ready Java Platform, Enterprise Edition (Java EE) application server.’ Even if you haven’t used WebLogic Server before, don’t worry, Oracle has designed it to be easy to get started.
In this post I will cover creating a new WebLogic Server (WLS) domain, and the deployment of a simple application to WLS from a remote development machine. The major steps in the process presented in this post are as follows:
- Create a new WLS domain
- Create and build a sample application
- Deploy the sample application to the new WLS domain
- Access deployed application via a web browser
Networking
First, let me review how I have my VM configured for networking, so you will understand my deployment methodology, discussed later. The way you configure your Oracle VM VirtualBox appliance will depend on your network topology. Again, keeping it simple for this post, I have given the Oracle VM a static IP address (192.168.1.88). The machine on which I am hosting VirtualBox uses DHCP to obtain an IP address on the same local wireless network.
For the VM’s VirtualBox networking mode, I have chosen ‘Bridged Adapter’ mode. Using this mode, any machine on the network can access the VM through the host machine, via the VM’s IP address. One of the best posts I have read on VM networking is on Oracle’s The Fat Bloke Sings blog, here.
Creating New WLS Domain
A domain, according to Oracle, is ‘the basic administrative unit of WebLogic Server. It consists of one or more WebLogic Server instances, and logically related resources and services that are managed, collectively, as one unit.’ Although the Oracle Development VM comes with pre-existing domains, we will create our own for this post.
To create the new domain, we will use the Oracle’s Fusion Middleware Configuration Wizard. The Wizard will take you through a step-by-step process to configure your new domain. To start the wizard, from within the Oracle VM, open a terminal window, and use the following command to switch to the Wizard’s home directory and start the application.
/labs/wls1211/wlserver_12.1/common/bin/config.sh
There are a lot of configuration options available, using the Wizard. I have selected some basic settings, shown below, to configure the new domain. Feel free to change the settings as you step through the Wizard, to meet your own needs. Make sure to use the ‘Development Mode’ Start Mode Option for this post. Also, make sure to note the admin port of the domain, the domain’s location, and the username and password you choose.
- 01 – New WebLogic Domain
- 02 – Selecting the Domain Source
- 03 – Naming the Domain
- 04 – Domain User Name and Password
- 05 – Startup and JDK
- 06 – Optional Configuration
- 07 – Administrative Configuration
- 08 – JMS Configuration
- 09 – Managed Servers
- 10 – Clusters
- 11 – Machines
- 12 – Target Services
- 13 – JMS File Stores
- 14 – RDBMS
- 15 – Configuration Summary
Starting the Domain
To start the new domain, open a terminal window in the VM and run the following command to change to the root directory of the new domain and start the WLS domain instance. Your domain path and domain name may be different. The start script command will bring up a new terminal window, showing you the domain starting.
/labs/wls1211/user_projects/domains/blogdev_domain/startWebLogic.sh
WLS Administration Console
Once the domain starts, test it by opening a web browser from the host machine and entering the URL of the WLS Administration Console. If your networking is set up correctly, the host machine will be able to connect to the VM and open the domain, running on the port you indicated when creating the domain, at the static IP address of the VM. If your IP address and port are different, make sure to change the URL. To log into the WLS Administration Console, use the username and password you chose when you created the domain.
http://192.168.1.88:7031/console/login/LoginForm.jsp
Before we start looking around the new domain however, let’s install an application into it.
Sample Java Application
If you have an existing application you want to install, you can skip this part. If you don’t, we will quickly create a simple Java EE Hello World web application, using a pre-existing sample project in NetBeans – no coding required. From your development machine, create a new Samples -> Web Services -> REST: Hello World (Java EE 6) Project. You now have a web project containing a simple RESTful web service, Servlet, and Java Server Page (.jsp). Build the project in NetBeans. We will upload the resulting .war file manually, in the next step.
In a previous post, Automated Deployment to GlassFish Using Jenkins CI Server and Apache Ant, we used the same sample web application to demonstrate automated deployments to Oracle’s GlassFish application server.
Deploying the Application
There are several methods to deploy applications to WLS, depending on your development workflow. For this post, we will keep it simple. We will manually deploy our web application’s .war file to WLS using the browser-based WLS Administration Console. In a future post, we will use Hudson, also included on the VM, to build and deploy an application, but for now we will do it ourselves.
To deploy the application, switch back to the WLS Administration Console. Following the screen grabs below, you will select the .war file, built from the above web application, and upload it to the Oracle VM’s new WLS domain. The .war file has all the necessary files, including the RESTful web service, Servlet, and the .jsp page. Make sure to deploy it as an ‘application’ as opposed to a ‘library’ (see ‘target style’ configuration screen, below).
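As a point of comparison, the same deployment could also be scripted from the remote development machine using the weblogic.Deployer utility. A sketch only; the weblogic.jar path, credentials, and application name below are placeholders for the values chosen during domain creation:

java -cp /path/to/weblogic.jar weblogic.Deployer \
  -adminurl t3://192.168.1.88:7031 \
  -username weblogic -password <your-password> \
  -deploy -name HelloWebLogicServer HelloWebLogicServer.war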
Accessing the Application
Now that we have deployed the Hello World application, we will access it from our browser. From any machine on the same network, point a browser to the following URL. Adjust your URL if your VM’s IP address and domain’s port is different.
http://192.168.1.88:7031/HelloWebLogicServer/resources/helloWorld
The Hello World RESTful web service’s Web Application Description Language (WADL) description can be viewed at:
http://192.168.1.88:7031/HelloWebLogicServer/resources/application.wadl
Since the Oracle VM is accessible from anywhere on the network, the deployed application is also accessible from any device on the network, as demonstrated below.
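You can also verify the deployed service from the command line of any machine on the network, for example:

curl http://192.168.1.88:7031/HelloWebLogicServer/resources/helloWorld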
Conclusion
This was a simple demonstration of deploying an application to WebLogic Server on Oracle’s Pre-Built Enterprise Java Development VM. WebLogic Server is a powerful, feature-rich Java application server. Once you understand how to configure and administer WLS, you can deploy more complex applications. In future posts we will show a more common, slightly more complex example of automated deployment from Hudson. In addition, we will show how to create a datasource in WLS and access it from the deployed application, to talk to a relational database.
Using Oracle’s Pre-Built Enterprise Java VM for Development Testing
Posted by Gary A. Stafford in Enterprise Software Development, Java Development, Oracle Database Development, Software Development on April 19, 2013
Install and configure Oracle’s Pre-Built Enterprise Java Development VM, with Oracle Linux 5, to create quick, full-featured development test environments.
Virtual Machines for Software Developers
As software engineers, we spend a great deal of time configuring our development machines to simulate the test and production environments in which our code will eventually run. With the Microsoft/.NET technology stack, that most often means installing and configuring .NET, IIS, and SQL Server. With the Oracle/Java technology stack, it means Java, WebLogic or GlassFish application server, and Oracle 11g.
Within the last few years, the growth of virtual machines (VMs) within IT/IS organizations has exploded. According to Wikipedia, a virtual machine (VM), is ‘a software implementation of a machine (i.e. a computer) that executes programs like a physical machine.’ Rapid and inexpensive virtualization of business infrastructure using VMs has led to the exponential growth of private and public cloud platforms.
Instead of attempting to configure development machines to simulate the test and production environments, which are simultaneously running development applications and often personal programs, software engineers can leverage VMs in the same way as IT/IS organizations. Free, open-source virtualization software products from Oracle and VMware offer developers the ability to easily ‘spin-up’ fresh environments to compile, deploy, and test code. Code is tested in a pristine environment, closely configured to match production, without the overhead and baggage of day-to-day development. When testing is complete, the VM is simply deleted and a new copy re-deployed for the next project.
Oracle Pre-Built Virtual Appliances
I’ve worked with a number of virtualization products, based on various Windows and Linux operating systems. No matter the product or OS, the VM still needs to be set up just like any other new computer system, with software installation and configuration. However, recently I began using Oracle’s pre-built developer VMs. Oracle offers a number of pre-built VMs for various purposes, including database development, enterprise Java development, business intelligence, application hosting, SOA development, and even PHP development. The VMs, called virtual appliances, are Open Virtualization Format Archive files, built to work with Oracle VM VirtualBox. Simply import the appliance into VirtualBox and start it up.
Oracle has provided ready-made VMs that would take even the most experienced team of IT professionals days to download, install, configure, and integrate. All the configuration details, user accounts information, instructions for use, and even pre-loaded tutorials, are provided. Oracle notes on their site that these VMs are not intended for use in production. However, the VMs are more than robust enough to use as a development test environment.
Because of its similarity to my production environment, I installed the Enterprise Java Development VM on a Windows 7 Enterprise-based development computer. The Oracle Linux 5 OS-based VM has almost everything that comprises a basic enterprise test and production environment based on the Oracle/Java technology stack. The VM includes an application server, source control server, build automation server, Java SDK, two popular IDE’s, and related components. The VM includes Java JDK 1.6+, WebLogic Server, Coherence, TopLink, Subversion, Hudson, Maven, NetBeans, Enterprise Pack for Eclipse, and so forth.
Aside from a database server, the environment has everything most developers might need to develop, build, store, and host their code. If you need a database, as most of us do, you can install it into the VM, or better yet, implement the Database App Development VM, in parallel. The Database VM contains Oracle’s 11g Release 2 enterprise-level relational database, along with several related database development and management tools. Using a persistence layer (data access layer), built with the included EclipseLink, you can connect the Enterprise appliance to the database appliance.
Set-Up Process
I followed these steps to set up my VM:
- Update (or download and install) Oracle VM VirtualBox to the latest release.
- Download (6) Open Virtualization Format Archive (OVF/.ova) files.
- Download script to combine the .ova files.
- Execute script to assemble (6) .ova files into a single .ova file.
- Import the appliance (combined .ova file) into VirtualBox.
- Optional: Clone and resize the appliance’s (2) virtual machine disks (see note below).
- Optional: Add the Yum Server configuration to the VM to enable normal software updates (see instructions below).
- Change any necessary settings within VM: date/time, timezone, etc.
- Install and/or update system software and development applications within VM: Java 1.7, etc.
Issue with Small Footprint of VM
The small size of the pre-built VM was a major issue I ran into almost immediately. Note in the screen grab above of VirtualBox, the Oracle VM only has (2) 8 GB virtual machine disks (.vmdk). Although Oracle designed the VMs to have a small footprint, it was so small that I quickly filled up its primary partition. At that point, the VM was too full to apply even the normal system updates. I switched the cache location for yum to a different partition, but then ran out of space again when yum tried to apply the updates it had downloaded to the primary partition.
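For anyone hitting the same issue, relocating the yum cache is a one-line change to /etc/yum.conf. A sketch, assuming a larger partition is mounted at the hypothetical mount point /u01:

# point yum's cachedir at the larger partition
sudo sed -i 's|^cachedir=.*|cachedir=/u01/yum-cache|' /etc/yum.conf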
Lack of disk space was a complete show-stopper for me until I researched a fix. Unfortunately, VirtualBox’s ‘VBoxManage modifyhd --resize’ command is not yet compatible with the virtual machine disk (.vmdk) format. After much trial and error, and a few late nights reading posts by others who had run into this problem, I found a fairly easy series of steps to enlarge the size of the VM. It doesn’t require you to be a complete ‘Linux Geek’, and only takes about 30 minutes of copying and restarting the VM a few times. I will include the instructions in a separate, upcoming post.
Issue with Package Updater
While solving the VM’s disk space issue, I also discovered the VM’s Enterprise Linux system was set up with something called the Unbreakable Linux Network (ULN) Update Agent. From what I understood, without a service agreement with Oracle, I could not update the VM’s software using the standard Package Updater. However, a few quick commands I found on the Yum Server site overcame that limitation and allowed me to update the VM’s software. Just follow the simple instructions here, for Oracle Linux 5. There are several hundred updates that will be applied, including an upgrade of Oracle Linux from 5.5 to 5.9.
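The gist of those Yum Server instructions for Oracle Linux 5, as best I recall them (verify against the site, as the repository URL and channel names may change):

cd /etc/yum.repos.d
sudo wget http://public-yum.oracle.com/public-yum-el5.repo
# edit the downloaded .repo file, setting enabled=1 for the el5_latest channel
sudo yum update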
Issue with Java Updates
Along with the software updates, I ran into an issue installing the latest version of Java. I attempted to install the standard Oracle package that contained the latest Java JDK, JRE, and NetBeans for Linux. Upon starting the install script, I immediately received a ‘SELinux AVC denial’ message. This security measure halted my installation with the following error: ‘The java application attempted to load /labs/java/jre1.7.0_21/lib/i386/client/libjvm.so which requires text relocation. This is a potential security problem.’
To get around the SELinux AVC denial issue, I installed the JRE, JDK, and NetBeans separately. Although this took longer and required a number of steps, it allowed me to get around the security and install the latest version of Java.
Note: I later discovered that I could have simply changed the SELinux Security Level to ‘Permissive’ in SELinux Security and Firewall, part of the Administrative Preferences. This would have allowed the original Oracle package, containing the JDK, JRE, and NetBeans, to run.