Posts Tagged Linux
Docker Enterprise Edition: Multi-Environment, Single Control Plane Architecture for AWS
Posted by Gary A. Stafford in AWS, Cloud, Continuous Delivery, DevOps, Enterprise Software Development, Software Development on September 6, 2017
Designing a successful, cloud-based containerized application platform requires a balance of performance and security with cost, reliability, and manageability. Ensuring that a platform meets all functional and non-functional requirements, while remaining within budget and easy to maintain, can be challenging.
As Cloud Architect and DevOps Team Lead, I recently participated in the development of two architecturally similar, lightweight, cloud-based containerized application platforms. From the start, both platforms were architected to maximize security and performance, while minimizing cost and operational complexity. The later platform was built on AWS with Docker Enterprise Edition.
Docker Enterprise Edition
Released in March of this year, Docker Enterprise Edition (Docker EE) is a secure, full-featured container-based management platform. There are currently eight versions of Docker EE, available for Windows Server, Azure, AWS, and multiple Linux distros, including RHEL, CentOS, Ubuntu, SUSE, and Oracle.
Docker EE is one of several production-grade container orchestration Platforms as a Service (PaaS). Some of the other container platforms in this category include:
- Google Container Engine (GKE, based on Google’s Kubernetes)
- AWS EC2 Container Service (ECS)
- Microsoft Azure Container Service (ACS)
- Mesosphere Enterprise DC/OS (based on Apache Mesos)
- Red Hat OpenShift (based on Kubernetes)
- Rancher Labs (based on Kubernetes, Cattle, Mesos, or Swarm)
Docker Community Edition (CE), Kubernetes, and Apache Mesos are free and open-source. Some providers, such as Rancher Labs, offer enterprise support for an additional fee. Cloud-based services, such as Red Hat OpenShift Online, AWS, GKE, and ACS, charge a typical monthly usage fee. Docker EE, similar to Mesosphere Enterprise DC/OS and Red Hat OpenShift, is priced on a per-node, per-year annual subscription model.
Docker EE is currently offered in three subscription tiers: Basic, Standard, and Advanced. Additionally, Docker offers Business Day and Business Critical support. Docker EE’s Advanced tier adds several significant features, including secure multi-tenancy with node-based isolation, as well as image security scanning and continuous vulnerability scanning as part of Docker EE’s Docker Trusted Registry.
Architecting for Affordability and Maintainability
Building an enterprise-scale application platform, using public cloud infrastructure, such as AWS, and a licensed Containers-as-a-Service (CaaS) platform, such as Docker EE, can quickly become complex and costly to build and maintain. Based on current list pricing, the cost of a single Linux node ranges from USD 75 per month for Basic support, up to USD 300 per month for Docker Enterprise Edition Advanced with Business Critical support. Although cost is relative to the value generated by the application platform, architects should nonetheless strive to avoid unnecessary complexity and cost.
Recurring operational costs, such as licensed software subscriptions, support contracts, and monthly cloud-infrastructure charges, are often overlooked by project teams during the build phase. Accurately forecasting the recurring costs of a fully functional Production platform, under expected normal load, is essential. Teams often overlook how quickly Docker image registries, databases, data lakes, and data warehouses swell, inflating the monthly cloud-infrastructure charges required to maintain the platform. The need to control cloud costs has led to the growth of third-party cloud management solutions, such as CloudCheckr Cloud Management Platform (CMP).
Shared Docker Environment Model
Most software development projects require multiple environments in which to continuously develop, test, demonstrate, stage, and release code. Creating separate environments, replete with their own Docker EE Universal Control Plane (aka Control Plane or UCP), Docker Trusted Registry (DTR), AWS infrastructure, and third-party components, would guarantee a high level of isolation and performance. However, replicating all elements in each environment would add considerable build and run costs, as well as unnecessary complexity.
On both recent projects, we chose to create a single AWS Virtual Private Cloud (VPC), which contained all of the non-production environments required by our project teams. In parallel, we built an entirely separate Production VPC for the Production environment. I’ve seen this same pattern repeated with Red Hat OpenStack and Microsoft Azure.
Production
Isolating Production from the lower environments is essential to ensure security and to prevent non-production traffic from impacting the performance of Production. Corporate compliance and regulatory policies often dictate complete Production isolation. Separate infrastructure, security appliances, role-based access controls (RBAC), configuration and secret management, and encryption keys and SSL certificates are all required.
For complete separation of Production, different AWS accounts are frequently used. Separate AWS accounts provide separate billing, usage reporting, and AWS Identity and Access Management (IAM), amongst other advantages.
Performance and Staging
Unlike Production, there are few reasons to completely isolate lower environments from one another. The exception I’ve encountered is Performance and Staging. These two environments are frequently separated from other environments to ensure the accuracy of performance testing and release staging activities. Performance testing, in particular, can generate enormous load on systems, which, if not isolated, will impair adjacent environments, applications, and monitoring systems.
On a few recent projects, to reduce cost and complexity, we repurposed the UAT environment for performance testing, once user-acceptance testing was complete. Performance testing was conducted during off-peak development and testing periods, with access to adjacent environments blocked.
The multi-purpose UAT environment further served as a Staging environment. Applications were deployed and released to the UAT and Performance environments, following a nearly-identical process used for Production. Hotfixes to Production were also tested in this environment.
Example of Shared Environments
To demonstrate how to architect a shared non-production Docker EE environment, which minimizes cost and complexity, let’s examine the example shown below. In the example, built on AWS with Docker EE, there are four typical non-production environments, CI/CD, Development, Test, and UAT, and one Production environment.
In the example, there are two separate VPCs, the Production VPC, and the Non-Production VPC. There is no reason to configure VPC Peering between the two VPCs, as there is no need for direct communication between the two. Within the Non-Production VPC, to the left in the diagram, there is a cluster of three Docker EE UCP Manager EC2 nodes, a cluster of three DTR Worker EC2 nodes, and the four environments, consisting of varying numbers of EC2 Worker nodes. Production, to the right of the diagram, has its own cluster of three UCP Manager EC2 nodes and a cluster of six EC2 Worker nodes.
Single Non-Production UCP
As a primary means of reducing cost and complexity, in the example, a single minimally-sized Docker EE UCP cluster of three Manager nodes orchestrates activities across all four non-production environments. The alternative would be a separate UCP cluster for each environment, which would mean nine more Manager nodes to configure and maintain.
The UCP users, teams, organizations, access controls, Docker Secrets, overlay networks, and other UCP features, for all non-production environments, are managed through the single Control Plane. All deployments to all the non-production environments, from the CI/CD server, are performed through the single Control Plane. Each UCP Manager node is deployed to a different AWS Availability Zone (AZ) to ensure high availability.
Shared DTR
As another means of reducing cost and complexity, in the example, a Docker EE DTR cluster of three Worker nodes contains all Docker image repositories. Both the non-production and the Production environments use this DTR as a secure source of all Docker images. Not having to replicate image repositories, access controls, and infrastructure, or figure out how to migrate images between two separate DTR clusters, represents a significant savings in time, cost, and complexity. Each DTR Worker node is also deployed to a different AZ to ensure high availability.
Whether to share a DTR between non-production and Production is an important security decision your project team needs to weigh. A single DTR, shared between non-production and Production, comes with inherent availability and security risks, which should be understood in advance.
Separate Non-Production Worker Nodes
In the shared non-production environments example, each environment has dedicated AWS EC2 instances configured as Docker EE Worker nodes. The number of Worker nodes is determined by the requirements for each environment, as dictated by the project’s Development, Testing, Security, and DevOps teams. Like the UCP and DTR clusters, each Worker node within an individual environment is deployed to a different AZ, to ensure high availability and mimic the Production architecture.
Minimizing the number of Worker nodes in each environment, as well as the type and size of each EC2 node, offers significant potential cost and administrative savings.
Separate Environment Ingress
In the example, the UCP, DTR, and each of the four environments are accessed through separate URLs, using AWS Hosted Zone CNAME records (subdomains). Encrypted HTTPS traffic is routed through a series of security appliances, depending on traffic type, to individual private AWS Elastic Load Balancers (ELB): one for each UCP, one for the DTR, and one for each of the environments. Each ELB load-balances traffic to the Docker EE nodes associated with the specific traffic. All firewalls, ELBs, and the UCP and DTR are secured with a high-grade wildcard SSL certificate.
Separate Data Sources
In the shared non-production environments example, there is one Amazon Relational Database Service (RDS) instance in non-Production and one in Production. Both RDS instances are replicated across multiple Availability Zones. Within the single shared non-production RDS instance, there are four separate databases, one per non-production environment. This architecture trades the potential database performance of separate RDS instances for reduced cost and complexity.
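As a simple illustration, assuming the shared non-production RDS instance runs PostgreSQL, the four environment databases might be created from a bastion host like this (the endpoint and user shown are placeholders, not values from the actual platform):

psql -h nonprod-rds.example.internal -U master_user -c 'CREATE DATABASE ci;'
psql -h nonprod-rds.example.internal -U master_user -c 'CREATE DATABASE dev;'
psql -h nonprod-rds.example.internal -U master_user -c 'CREATE DATABASE test;'
psql -h nonprod-rds.example.internal -U master_user -c 'CREATE DATABASE uat;'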
Maintaining Environment Separation
Node Labels
To obtain sufficient environment separation while using a single UCP, each Docker EE Worker node is tagged with an 'environment' node label. The node label indicates which environment the Worker node is associated with. For example, in the screenshot below, a Worker node is assigned to the Development environment by tagging it with the key of 'environment' and the value of 'dev'.
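As a sketch of how such a label might be applied, assuming a UCP client bundle is loaded and the node name below is a placeholder:

# add the environment label to a Worker node and verify it
docker node update --label-add environment=dev ip-10-0-1-10
docker node inspect ip-10-0-1-10 --format '{{ .Spec.Labels }}'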
* The Docker EE screens shown here are from UCP 2.1.5, not the recently released 2.2.x, which has an updated UI appearance.

Each service’s Docker Compose file uses deployment placement constraints, which indicate where Docker should or should not deploy services. In the hello-world Docker Compose file example below, the 'node.labels.environment' constraint is set to the ENVIRONMENT variable, which is set during container deployment by the CI/CD server. This constraint directs Docker to only deploy the hello-world service to nodes which contain the 'node.labels.environment' label, whose value matches the ENVIRONMENT variable value.
# Hello World Service Stack
# DTR_URL: Docker Trusted Registry URL
# IMAGE: Docker Image to deploy
# ENVIRONMENT: Environment to deploy into

version: '3.2'

services:
  hello-world:
    image: ${DTR_URL}/${IMAGE}
    deploy:
      placement:
        constraints:
          - node.role == worker
          - node.labels.environment == ${ENVIRONMENT}
      replicas: 4
      update_config:
        parallelism: 4
        delay: 10s
      restart_policy:
        condition: any
        max_attempts: 3
        delay: 10s
    logging:
      driver: fluentd
      options:
        tag: docker.{{.Name}}
        env: SERVICE_NAME,ENVIRONMENT
    environment:
      SERVICE_NAME: hello-world
      ENVIRONMENT: ${ENVIRONMENT}
    command: "java \
      -Dspring.profiles.active=${ENVIRONMENT} \
      -Djava.security.egd=file:/dev/./urandom \
      -jar hello-world.jar"
Deploying from CI/CD Server
The ENVIRONMENT value is set as an environment variable, which is then used by the CI/CD server when running a 'docker stack deploy' or a 'docker service update' command within a deployment pipeline. Below is an example of how to use the environment variable as part of a Jenkins pipeline-as-code Jenkinsfile.
#!/usr/bin/env groovy

// Deploy Hello World Service Stack

node('java') {
  properties([parameters([
    choice(choices: ["ci", "dev", "test", "uat"].join("\n"),
      description: 'Environment', name: 'ENVIRONMENT')
  ])])

  stage('Git Checkout') {
    dir('service') {
      git branch: 'master',
        credentialsId: 'jenkins_github_credentials',
        url: 'ssh://git@garystafford/hello-world.git'
    }
    dir('credentials') {
      git branch: 'master',
        credentialsId: 'jenkins_github_credentials',
        url: 'ssh://git@garystafford/ucp-bundle-jenkins.git'
    }
  }

  dir('service') {
    stage('Build and Unit Test') {
      sh './gradlew clean cleanTest build'
    }

    withEnv(["IMAGE=hello-world:${BUILD_NUMBER}"]) {
      stage('Docker Build Image') {
        withCredentials([[$class: 'StringBinding',
                          credentialsId: 'docker_password',
                          variable: 'DOCKER_PASSWORD'],
                         [$class: 'StringBinding',
                          credentialsId: 'docker_username',
                          variable: 'DOCKER_USERNAME']]) {
          sh "docker login -u ${DOCKER_USERNAME} -p ${DOCKER_PASSWORD} ${DTR_URL}"
        }
        sh "docker build --no-cache -t ${DTR_URL}/${IMAGE} ."
      }

      stage('Docker Push Image') {
        sh "docker push ${DTR_URL}/${IMAGE}"
      }

      withEnv(['DOCKER_TLS_VERIFY=1',
               "DOCKER_CERT_PATH=${WORKSPACE}/credentials/",
               "DOCKER_HOST=${DOCKER_HOST}"]) {
        stage('Docker Stack Deploy') {
          try {
            sh "docker service rm ${params.ENVIRONMENT}_hello-world"
            sh 'sleep 30s' // wait for service to be completely removed if it exists
          } catch (err) {
            echo "Error: ${err}" // catch and move on if it doesn't already exist
          }
          sh "docker stack deploy \
            --compose-file=docker-compose.yml ${params.ENVIRONMENT}"
        }
      }
    }
  }
}
Centralized Logging and Metrics Collection
Centralized logging and metrics collection systems are used for application and infrastructure dashboards, monitoring, and alerting. In the shared non-production environment examples, the centralized logging and metrics collection systems are internal to each VPC, but reside on separate EC2 instances and are not registered with the Control Plane. In this way, the logging and metrics collection systems should not impact the reliability, performance, and security of the applications running within Docker EE. In the example, Worker nodes run a containerized copy of fluentd, which collects and pushes logs to ELK’s Elasticsearch.
Logging and metrics collection systems could also be supplied by external cloud-based SaaS providers, such as Loggly, Sysdig and Datadog, or by the platform’s cloud-provider, such as Amazon CloudWatch.
With four environments running multiple containerized copies of each service, figuring out which log entry came from which service instance requires multiple data points. As shown in the example Kibana UI below, the environment value, along with the service name and container ID, as well as the git commit hash and branch, are added to each log entry for easier troubleshooting. To include the environment, the value of the ENVIRONMENT variable is passed to Docker’s fluentd log driver as an 'env' option. This same labeling method is used to tag metrics.
Separate Docker Service Stacks
For further environment separation within the single Control Plane, services are deployed as environment-specific Docker service stacks. Each service stack contains all the services that comprise an application running within a single environment. Multiple stacks may be required to support multiple, distinct applications within the same environment.
For example, in the screenshot below, a hello-world service container, built from a Docker image tagged with build 59 of the Jenkins continuous integration pipeline, is deployed as part of both the Development (dev) and Test service stacks. The CD and UAT service stacks each contain different versions of the hello-world service.
Separate Docker Overlay Networks
For additional environment separation within the single non-production UCP, all Docker service stacks associated with an environment reside on the same Docker overlay network. Overlay networks manage communications among the Docker Worker nodes, enabling service-to-service communication for all services on the same overlay network, while isolating services running on one network from services running on another network.
In the example screenshot below, the hello-world service, a member of the test service stack, is running on the test_default overlay network.
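As a quick check from a UCP client bundle, the environment-specific overlay networks can be listed with the Docker CLI (names assume Docker's default stackname_default convention):

docker network ls --filter driver=overlay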
Cleaning Up
Having distinct environment-centric Docker service stacks and overlay networks makes it easy to clean up an environment, without impacting adjacent environments. Both service stacks and overlay networks can be removed to clear an environment’s contents.
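For example, clearing out the test environment might look like the following (a minimal sketch; the stack and network names assume the defaults used above):

# remove all services in the test stack; networks created by the stack are removed with it
docker stack rm test
# remove the environment's overlay network only if it was created outside the stack
docker network rm test_default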
Separate Performance Environment
In the alternative example below, a Performance environment has been added to the Non-Production VPC. To ensure a higher level of isolation, the Performance environment has its own UCP, RDS, and ELBs. The Performance environment shares the DTR, as well as the security, logging, and monitoring components, with the rest of the non-production environments.
Below, the Performance environment has half the number of Worker nodes as Production. Performance results can then be extrapolated to estimate expected Production performance on the larger node count. Alternately, the number of nodes can be scaled up temporarily to match Production, then scaled back down to a minimum after testing is complete.
Shared DevOps Tooling
All environments leverage shared Development and DevOps resources, deployed to a separate VPC. Resources include Agile Application Lifecycle Management (ALM), such as JIRA or CA Agile Central, source control repository management (SCM), such as GitLab or Bitbucket, binary repository management, such as Artifactory or Nexus, and a CI/CD solution, such as Jenkins, TeamCity, or Bamboo.
From the DevOps VPC, Docker images are pushed to and pulled from the DTR in the Non-Production VPC. Deployments of container-based applications are executed from the DevOps VPC CI/CD server to the non-production, Performance, and Production UCPs. Separate DevOps CI/CD pipelines and access controls are essential in maintaining the separation of the non-production and Production environments.
Complete Platform
Several common components found in a Docker EE cloud-based AWS platform were discussed in this post. However, a complete AWS application platform has many more moving parts. Below is a comprehensive list of components, including DevOps tooling, organized into two categories: 1) common components that can potentially be shared across the non-production environments to save cost and complexity, and 2) components that should be replicated in each non-production environment for security and performance.
Shared Non-Production Components:
- AWS
  - Virtual Private Cloud (VPC), Region, Availability Zones
  - Route Tables, Network ACLs, Internet Gateways
  - Subnets
  - Some Security Groups
  - IAM Groups, Users, Roles, Policies (RBAC)
  - Relational Database Service (RDS)
  - ElastiCache
  - API Gateway, Lambdas
  - S3 Buckets
  - Bastion Servers, NAT Gateways
  - Route 53 Hosted Zone (Registered Domain)
  - EC2 Key Pairs
  - Hardened Linux AMI
- Docker EE
  - UCP and EC2 Manager Nodes
  - DTR and EC2 Worker Nodes
  - UCP and DTR Users, Teams, Organizations
  - DTR Image Repositories
  - Secret Management
- Third-Party Components/Products
  - SSL Certificates
  - Security Components: Firewalls, Virus Scanning, VPN Servers
  - Container Security
  - End-User IAM
  - Directory Service
  - Log Aggregation
  - Metric Collection
  - Monitoring, Alerting
  - Configuration and Secret Management
- DevOps
  - CI/CD Pipelines as Code
  - Infrastructure as Code
  - Source Code Repositories
  - Binary Artifact Repositories
Isolated Non-Production Components:
- AWS
  - Route 53 Hosted Zones and Associated Records
  - Elastic Load Balancers (ELB)
  - Elastic Compute Cloud (EC2) Worker Nodes
  - Elastic IPs
  - ELB and EC2 Security Groups
  - RDS Databases (Single RDS Instance with Separate Databases)
All opinions in this post are my own and not necessarily the views of my current employer or their clients.
Shell Script to Automate Creation of Swap File on Linux
Posted by Gary A. Stafford in Bash Scripting, DevOps, Enterprise Software Development, Software Development on December 19, 2013
Introduction
Recently, while scripting the installation of Oracle’s WebLogic Server, I ran into an issue with a lack of swap space. I was automating the installation of WebLogic in Silent Mode on a Vagrant VM. The VM was built from an Ubuntu Cloud Image of Ubuntu Server. Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud platforms such as Amazon EC2, OpenStack, Windows, LXC, and Vagrant. The Ubuntu image did not have the minimum 512 MB of swap space required by the WebLogic installer.
Swap
According to Gary Sims on Linux.com, “Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space is the amount of virtual memory available.”
Scripts
To create the required swap space, I could create either a swap partition or a swap file. I chose to create a swap file, using a shell script. Actually, there are two scripts. The first script creates a 512 MB swap file as a pre-step in the automated installation of WebLogic. Once the WebLogic installation is complete, the second, optional script may be run to remove the swap file. ArchWiki (wiki.archlinux.org) has an excellent post on swap space I referenced to build my first script.
Use a ‘sudo ./create_swap.sh’ command to create the swap file and display the results in the terminal.
#!/bin/sh

# size of swapfile in megabytes
swapsize=512

# does the swap file already exist?
grep -q "swapfile" /etc/fstab

# if not then create it
if [ $? -ne 0 ]; then
  echo 'swapfile not found. Adding swapfile.'
  fallocate -l ${swapsize}M /swapfile
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
  echo '/swapfile none swap defaults 0 0' >> /etc/fstab
else
  echo 'swapfile found. No changes made.'
fi

# output results to terminal
cat /proc/swaps
cat /proc/meminfo | grep Swap
If the swap file is no longer required, the second script will remove it. Use a ‘sudo ./remove_swap.sh’ command to remove the swap file and display the results in the terminal. LinuxQuestions.org has a good forum post on removing swap files that I referenced to build my second script.
#!/bin/sh

# does the swap file exist?
grep -q "swapfile" /etc/fstab

# if it does then remove it
if [ $? -eq 0 ]; then
  echo 'swapfile found. Removing swapfile.'
  sed -i '/swapfile/d' /etc/fstab
  echo "3" > /proc/sys/vm/drop_caches
  swapoff -a
  rm -f /swapfile
else
  echo 'No swapfile found. No changes made.'
fi

# output results to terminal
cat /proc/swaps
cat /proc/meminfo | grep Swap
Updating Ubuntu Linux to the Latest JDK
Posted by Gary A. Stafford in DevOps, Enterprise Software Development, Java Development, Software Development on December 16, 2013
Introduction
If you are a Java developer, new to the Linux environment, installing and configuring Java updates can be a bit daunting. In the following post, we will update a VirtualBox VM running Canonical’s popular Ubuntu Linux operating system. The VM currently contains an earlier version of Java. We will update the VM to the latest release of Java.
All code for this post is available as Gists on GitHub.com, including a complete install script, explained at the end of this post.
Current Version of Java?
First, we will use the ‘update-alternatives --display java’ command to review all the versions of Java currently installed on the VM. We can have multiple copies installed, but only one will be configured and active. We can verify the active version using the ‘java -version’ command.
# preview java alternatives
update-alternatives --display java

# check current version of java
java -version
In the above example, the 1.7.0_17 JDK version of Java is configured and active. That version is located in the ‘/usr/lib/jvm/jdk1.7.0_17’ subdirectory. There are two other Java versions also installed but not active, an Oracle 1.7.0_17 JRE version and an older 1.7.0_04 JDK version. These three versions are referred to as ‘alternatives’, thus the ‘alternatives’ command. By selecting an alternative version of Java, we control which java binary executable the system calls when the ‘java’ command is executed. In many software development environments, you may need different versions of Java, depending on each client project’s technology stack.
Alternatives
According to About.com, alternatives ‘make it possible for several programs fulfilling the same or similar functions to be installed on a single system at the same time. A generic name in the filesystem is shared by all files providing interchangeable functionality. The alternatives system and the system administrator together determine which actual file is referenced by this generic name. The generic name is not a direct symbolic link to the selected alternative. Instead, it is a symbolic link to a name in the alternatives directory, which in turn is a symbolic link to the actual file referenced.’
We can see this system at work by changing to our ‘/usr/bin’ directory. This directory contains the majority of binary executables on the system. Executing an ‘ls -Al /usr/bin/* | grep -e java -e jar -e appletviewer -e mozilla-javaplugin.so’ command, we see that each Java executable is actually a symbolic link to the alternatives directory, not a binary executable file.
# view java-related executables
ls -Al /usr/bin/* | \
grep -e java -e jar -e jexec -e appletviewer -e mozilla-javaplugin.so

# view all the commands which support alternatives
update-alternatives --get-selections

# view all java-related executables which support alternatives
update-alternatives --get-selections | \
grep -e java -e jar -e jexec -e appletviewer -e mozilla-javaplugin.so
To find out all the commands which support alternatives, you can use the ‘update-alternatives --get-selections’ command. We can use a similar command to get just the Java commands, ‘update-alternatives --get-selections | grep -e java -e jar -e appletview -e mozilla-javaplugin.so’.
Computer Architecture?
Next, we need to determine the computer processor architecture of the VM. The architecture determines which version of Java to download. The machine that hosts our VM may have a 64-bit architecture (also known as x86-64, x64, and amd64), while the VM might have a 32-bit architecture (also known as IA-32 or x86). Trying to install 64-bit versions of software onto 32-bit VMs is a common mistake.
The VM’s architecture was originally displayed with the ‘java -version’ command, above. To confirm the 64-bit architecture we can use either the ‘uname -a’ or ‘arch’ command.
# determine the processor architecture
uname -a
arch
JRE or JDK?
One last decision. The purpose of the Java Runtime Environment (JRE) is to run Java applications; it covers most end-users’ needs. The purpose of the Java Development Kit (JDK) is to develop Java applications; it includes a complete JRE, plus tools for developing, debugging, and monitoring Java applications. As developers, we will choose to install the JDK.
Download Latest Version
In the above screen grab, you see our VM is running a 64-bit version of Ubuntu 12.04.3 LTS (Precise Pangolin). Therefore, we will download the most recent version of the 64-bit Linux JDK. We could choose either Oracle’s commercial version of Java or the OpenJDK version. According to Oracle, the ‘OpenJDK is an open-source implementation of the Java Platform, Standard Edition (Java SE) specifications’. We will choose the latest commercial version of Oracle’s JDK. At the time of this post, that is JDK 7u45 (aka 1.7.0_45-b18).
The Linux file formats available for download are a .rpm (Red Hat Package Manager) file and a .tar.gz file (aka tarball). For this post, we will download the tarball, the ‘jdk-7u45-linux-x64.tar.gz’ file.
Extract Tarball
We will use the command ‘sudo tar -zxvf jdk-7u45-linux-x64.tar.gz -C /usr/lib/jvm’, to extract the files directly to the ‘/usr/lib/jvm’ folder. This folder contains all previously installed versions of Java. Once the tarball’s files are extracted, we should see a new directory containing the new version of Java, ‘jdk1.7.0_45’, in the ‘/usr/lib/jvm’ directory.
# extract new version of java from the downloaded tarball
cd ~/Downloads/Java
ls
sudo tar -zxvf jdk-7u45-linux-x64.tar.gz -C /usr/lib/jvm/

# change directories to the location of the extracted version of java
cd /usr/lib/jvm/
ls
Installation
There are two configuration modes available in the alternatives system, manual and automatic mode. According to die.net, ‘when a link group is in manual mode, the alternatives system will not (automatically) make any changes to the system administrator’s settings. When a link group is in automatic mode, the alternatives system ensures that the links in the group point to the highest priority alternatives appropriate for the group.’
We will first install and configure the new version of Java in manual mode. To install the new version of Java, we run ‘update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_45/jre/bin/java 4’. Note the last parameter, ‘4’, the priority. Why ‘4’? If we chose to use automatic mode, as we will a little later, we want our new Java version to have the highest numeric priority. In automatic mode, the system looks at the priority to determine which version of Java it will run. In the post’s first screen grab, note each of the three installed Java versions had different priorities: 1, 2, and 3. If we want to use automatic mode later, we must set a higher priority on our new version of Java, ‘4’ or greater. We will set it now as part of the install, so we can use it later in automatic mode.
Configuration
First, to configure (activate) the new Java version in alternatives manual mode, we will run ‘update-alternatives --config java’. We are prompted to choose from the list of alternatives for the java executable, which has now grown from three to four choices. Choose the new JDK version of Java, which we just installed.
That’s it. The system has been instructed to use the version we selected as the Java alternative. Now, when we run the ‘java’ command, the system will access the newly configured JDK version. To verify, we rerun the ‘java -version’ command. The version of Java has changed since the first time we ran the command (see first screen grab).
# find a higher priority number
update-alternatives --display java

# install new version of java
sudo update-alternatives --install /usr/bin/java java \
/usr/lib/jvm/jdk1.7.0_45/jre/bin/java 4

# configure new version of java
update-alternatives --config java
# select the new version when prompted
Now let’s switch to automatic mode. To switch to automatic mode, we use ‘update-alternatives --auto java’. Notice how the mode has changed in the screen grab below and our version with the highest priority is selected.
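For example:

# switch the java link group to automatic mode and confirm the selection
sudo update-alternatives --auto java
update-alternatives --display java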
Other Java Command Alternatives
We will repeat this process for the other Java-related executables. Many are part of the JDK Tools and Utilities. Java-related executables include the Javadoc Tool (javadoc), Java Web Start (javaws), Java Compiler (javac), Java Archive Tool (jar), javap, javah, appletviewer, and the Java Plugin for Linux. Note if you are updating a headless VM, you would not have a web browser installed. Therefore, it would not be necessary to configure the mozilla-javaplugin.
# install and configure other java executable alternatives in manual mode

# install and update Java Web Start (javaws)
update-alternatives --display javaws
sudo update-alternatives --install /usr/bin/javaws javaws \
/usr/lib/jvm/jdk1.7.0_45/jre/bin/javaws 20000
update-alternatives --config javaws

# install and update Java Compiler (javac)
update-alternatives --display javac
sudo update-alternatives --install /usr/bin/javac javac \
/usr/lib/jvm/jdk1.7.0_45/bin/javac 20000
update-alternatives --config javac

# install and update Java Archive Tool (jar)
update-alternatives --display jar
sudo update-alternatives --install /usr/bin/jar jar \
/usr/lib/jvm/jdk1.7.0_45/bin/jar 20000
update-alternatives --config jar

# jar signing and verification tool (jarsigner)
update-alternatives --display jarsigner
sudo update-alternatives --install /usr/bin/jarsigner jarsigner \
/usr/lib/jvm/jdk1.7.0_45/bin/jarsigner 20000
update-alternatives --config jarsigner

# install and update Javadoc Tool (javadoc)
update-alternatives --display javadoc
sudo update-alternatives --install /usr/bin/javadoc javadoc \
/usr/lib/jvm/jdk1.7.0_45/bin/javadoc 20000
update-alternatives --config javadoc

# install and update Java Plugin for Linux (mozilla-javaplugin.so)
update-alternatives --display mozilla-javaplugin.so
sudo update-alternatives --install \
/usr/lib/mozilla/plugins/libjavaplugin.so mozilla-javaplugin.so \
/usr/lib/jvm/jdk1.7.0_45/jre/lib/amd64/libnpjp2.so 20000
update-alternatives --config mozilla-javaplugin.so

# install and update Java disassembler (javap)
update-alternatives --display javap
sudo update-alternatives --install /usr/bin/javap javap \
/usr/lib/jvm/jdk1.7.0_45/bin/javap 20000
update-alternatives --config javap

# install and update file that creates C header files and C stubs (javah)
update-alternatives --display javah
sudo update-alternatives --install /usr/bin/javah javah \
/usr/lib/jvm/jdk1.7.0_45/bin/javah 20000
update-alternatives --config javah

# jexec utility executes command inside the jail (jexec)
update-alternatives --display jexec
sudo update-alternatives --install /usr/bin/jexec jexec \
/usr/lib/jvm/jdk1.7.0_45/lib/jexec 20000
update-alternatives --config jexec

# install and update file to run applets without a web browser (appletviewer)
update-alternatives --display appletviewer
sudo update-alternatives --install /usr/bin/appletviewer appletviewer \
/usr/lib/jvm/jdk1.7.0_45/bin/appletviewer 20000
update-alternatives --config appletviewer
JAVA_HOME
Many applications which need the JDK/JRE to run look up the JAVA_HOME environment variable for the location of the Java compiler/interpreter. The most common approach is to hard-wire the JAVA_HOME variable to the current JDK directory. Use the ‘sudo nano ~/.bashrc’ command to open the bashrc file. Add or modify the following line, ‘export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45’. Remember to also add the java executable path to the PATH variable, ‘export PATH=$PATH:$JAVA_HOME/bin’. Lastly, execute a ‘bash --login’ command for the changes to be visible in the current session.
Alternately, we could use a symbolic link to ‘default-java’. There are several good posts on the Internet on using the ‘default-java’ link.
# open bashrc
sudo nano ~/.bashrc

# add these lines to bashrc file
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
export PATH=$PATH:$JAVA_HOME/bin

# for the changes to be visible in the current session
bash --login
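A minimal sketch of the ‘default-java’ approach mentioned above, assuming the link does not already exist on your system:

# point default-java at the new JDK and reference the link in JAVA_HOME
sudo ln -s /usr/lib/jvm/jdk1.7.0_45 /usr/lib/jvm/default-java
export JAVA_HOME=/usr/lib/jvm/default-java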
Complete Scripted Example
Below is a complete script to install or update the latest JDK on Ubuntu. Simply download the tarball to your home directory, along with the below script, available on GitHub. Execute the script with a ‘sh install_java_complete.sh’. All java executables will be installed and configured as alternatives in automatic mode. The JAVA_HOME and PATH environment variables will also be set.
#!/bin/bash
#################################################################
# author: Gary A. Stafford
# date: 2013-12-17
# source: http://www.programmaticponderings.com
# description: complete code for install and config on Ubuntu
#              of jdk 1.7.0_45 in alternatives automatic mode
#################################################################

########## variables start ##########

# priority value
priority=20000

# path to binaries directory
binaries=/usr/bin

# path to libraries directory
libraries=/usr/lib

# path to new java version
javapath=$libraries/jvm/jdk1.7.0_45

# path to downloaded java version
java_download=~/jdk-7u45-linux-x64.tar.gz

########## variables end ##########

cd $libraries
[ -d jvm ] || sudo mkdir jvm
cd ~/

# change permissions on jvm subdirectory
sudo chmod +x $libraries/jvm/

# extract new version of java from the downloaded tarball
if [ -f $java_download ]; then
  sudo tar -zxf $java_download -C $libraries/jvm
else
  echo "Cannot locate Java download. Check 'java_download' variable."
  echo 'Exiting script.'
  exit 1
fi

# install and config java interpreter (java)
sudo update-alternatives --install $binaries/java java \
  $javapath/jre/bin/java $priority
sudo update-alternatives --auto java

# install and config java web start (javaws)
sudo update-alternatives --install $binaries/javaws javaws \
  $javapath/jre/bin/javaws $priority
sudo update-alternatives --auto javaws

# install and config java compiler (javac)
sudo update-alternatives --install $binaries/javac javac \
  $javapath/bin/javac $priority
sudo update-alternatives --auto javac

# install and config java archive tool (jar)
sudo update-alternatives --install $binaries/jar jar \
  $javapath/bin/jar $priority
sudo update-alternatives --auto jar

# jar signing and verification tool (jarsigner)
sudo update-alternatives --install $binaries/jarsigner jarsigner \
  $javapath/bin/jarsigner $priority
sudo update-alternatives --auto jarsigner

# install and config java tool for generating api documentation in html (javadoc)
sudo update-alternatives --install $binaries/javadoc javadoc \
  $javapath/bin/javadoc $priority
sudo update-alternatives --auto javadoc

# install and config java disassembler (javap)
sudo update-alternatives --install $binaries/javap javap \
  $javapath/bin/javap $priority
sudo update-alternatives --auto javap

# install and config file that creates c header files and c stubs (javah)
sudo update-alternatives --install $binaries/javah javah \
  $javapath/bin/javah $priority
sudo update-alternatives --auto javah

# jexec utility executes command inside the jail (jexec)
sudo update-alternatives --install $binaries/jexec jexec \
  $javapath/lib/jexec $priority
sudo update-alternatives --auto jexec

# install and config file to run applets without a web browser (appletviewer)
sudo update-alternatives --install $binaries/appletviewer appletviewer \
  $javapath/bin/appletviewer $priority
sudo update-alternatives --auto appletviewer

# install and config java plugin for linux (mozilla-javaplugin.so)
if [ -d $libraries/mozilla/plugins ]; then
  sudo update-alternatives --install \
    $libraries/mozilla/plugins/libjavaplugin.so mozilla-javaplugin.so \
    $javapath/jre/lib/amd64/libnpjp2.so $priority
  sudo update-alternatives --auto mozilla-javaplugin.so
else
  echo 'Mozilla Firefox not found. Java Plugin for Linux will not be installed.'
fi

echo

########## add JAVA_HOME start ##########

grep -q 'JAVA_HOME' ~/.bashrc
if [ $? -ne 0 ]; then
  echo 'JAVA_HOME not found. Adding JAVA_HOME to ~/.bashrc'
  echo >> ~/.bashrc
  echo 'export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45' >> ~/.bashrc
  echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
else
  echo 'JAVA_HOME found. No changes required to ~/.bashrc.'
fi

########## add JAVA_HOME end ##########

echo
echo '*** Java update script completed successfully ***'
echo

# confirm alternative for java-related executables
update-alternatives --get-selections | \
  grep -e java -e jar -e jexec -e appletviewer -e mozilla-javaplugin.so

# confirm JAVA_HOME environment variable is set
echo "To confirm JAVA_HOME, execute 'bash --login' and then 'echo \$JAVA_HOME'"
Below is a test of the script on a fresh Vagrant VM of an Ubuntu Cloud Image of Ubuntu Server 13.10 (Saucy Salamander). Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud-platforms such as Amazon EC2, Openstack, Windows, LXC, and Vagrant. The script was able to successfully install and configure the JDK, as well as the JAVA_HOME and PATH environment variables.
Deleting Old Versions?
Before deciding to completely delete previously installed versions of Java from the ‘/usr/lib/jvm’ directory, ensure there are no links to those versions from OS and application configuration files. Many applications, such as NetBeans, Eclipse, SoapUI, and WebLogic Server, may contain their own Java configurations. If they don’t use the JAVA_HOME variable, they should be updated to reflect the current active Java version when possible.
Resources
Ubuntu Linux: Install Latest Oracle Java 7
update-alternatives(8) – Linux man page
Travel-Size Wireless Router for Your Raspberry Pi
Posted by Gary A. Stafford in Bash Scripting, Raspberry Pi, Software Development on July 15, 2013
Introduction
Recently, I purchased a USB-powered wireless router to use with my Raspberry Pi when travelling. In an earlier post, Raspberry Pi-Powered Dashboard Video Camera Using Motion and FFmpeg, I discussed the use of the Raspberry Pi, combined with a webcam, Motion, and FFmpeg, to create a low-cost dashboard video camera. Like many, I find one of the big challenges with the Raspberry Pi is how to connect and interact with it. Being in my car, and usually out of range of my home’s wireless network (except maybe in the garage), this becomes even more of an issue. That’s where adding an inexpensive travel-size router to my vehicle comes in handy.
I chose the TP-LINK TL-WR702N Wireless N150 Travel Router, sold by Amazon. The TP-LINK router, described as ‘nano size’, measures only 2.2 inches square by 0.7 inches wide. It has several modes of operation, including as a router, access point, client, bridge, or repeater. It operates at wireless speeds up to 150 Mbps and is compatible with IEEE 802.11b/g/n networks. It supports several common network security protocols, including WEP, WPA/WPA2, and WPA-PSK/WPA2-PSK encryption. For $22 USD, what more could you ask for!
My goal with the router was to do the following:
- Have the Raspberry Pi auto-connect to the new TP-LINK router’s wireless network when in range, just like my home network.
- Since I might still be in range of my home network, have the Raspberry Pi try to connect to the TP-LINK first, before falling back to my home network.
- Ensure the network was relatively secure, since I would be exposed to many more potential threats when traveling.
My vehicle has two power outlets. I plug my Raspberry Pi into one outlet and the router into the other. You could daisy-chain the router off the Pi. However, my Pi’s ports are in use by the USB wireless adapter and the USB webcam. Using the TP-LINK router, I can easily connect to the Raspberry Pi with my mobile phone or tablet, using an SSH client.
When I arrive at my destination, I log into the Pi and do a proper shutdown. This activates my shutdown script (see my last post), which moves the newly created Motion/FFmpeg time-lapse dash-cam videos to a secure folder on my Pi, before powering down.
Of course there are many other uses for the router. For example, I can remove the Pi and router from my car and plug them back in at the hotel while traveling, or power the router from my laptop while at work or the coffee shop. I now have my own private wireless network wherever I am, to use with the Raspberry Pi or to work with other users. Remember the TP-LINK can act as a router, access point, client, bridge, or a repeater.
Network Security
Before configuring your Raspberry Pi, the first thing you should do is change all the default security related settings for the router. Start with the default SSID and the PSK password. Both these default values are printed right on the router. That’s motivation enough to change!
Additionally, change the default IP address of the router and the username and password for the browser-based Administration Console.
Lastly, pick the most secure protocol possible. I chose ‘WPA-PSK/WPA2-PSK’. All these changes are done through the TP-LINK’s browser-based Administration Console.
Configuring Multiple Wireless Networks
In an earlier post, Installing a Miniature WiFi Module on the Raspberry Pi (w/ Roaming Enabled), I detailed the installation and configuration of a Miniature WiFi Module, from Adafruit Industries, on a Pi running Soft-float Debian “wheezy”. I normally connect my Pi to my home wireless network. I wanted to continue to do this in the house, but connect the new router when traveling.
Based on the earlier post, I was already using Jouni Malinen’s wpa_supplicant, the WPA Supplicant for Linux, BSD, Mac OS X, and Windows with support for WPA and WPA2. This made network configuration relatively simple. If you use wpa_supplicant, your ‘/etc/network/interfaces’ file should look like the following. If you’re not familiar with configuring the interfaces file for wpa_supplicant, this post on NoWiresSecurity.com is a good starting point.
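A minimal sketch of what such an interfaces file might look like, assuming a single USB WiFi adapter on wlan0 and DHCP addressing:

auto lo
iface lo inet loopback

iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp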
Note that in this example, I am using DHCP for all wireless network connections. If you choose to use static IP addresses for any of the networks, you will have to change the interfaces file accordingly. Once you add multiple networks, configuring static IP addresses for each network becomes more complex. That is my next project…
First, I generated a new pre-shared key (PSK) for the router’s SSID configuration using the following command. Substitute your own SSID (‘your_ssid’) and passphrase (‘your_passphrase’).
wpa_passphrase your_ssid your_passphrase
Based on your SSID and passphrase, this command will generate a pre-shared key (PSK), similar to the following. Save or copy the PSK to the clipboard. We will need the PSK in the next step.
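The output is a network block containing the hashed PSK; the values below are illustrative placeholders only:

network={
        ssid="your_ssid"
        #psk="your_passphrase"
        psk=59e0d07fa4c7741797a4e394f38a5c321e3bed51d54ad5fcbd3f84bc7415d73d
}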
Then, I modified my wpa_supplicant configuration file with the following command:
sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
I added the second network configuration, similar to the existing configuration for my home wireless network, using the newly generated PSK. Below is an example of what mine looks like (of course, not the actual PSKs).
Depending on your Raspberry Pi and router configurations, your wpa_supplicant configuration will look slightly different. You may wish to add more settings. Don’t consider my example the absolute right way for your networks.
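A minimal sketch of a two-network configuration, with placeholder SSIDs and PSKs, is shown below; the priority values are explained in the next section:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

# TP-LINK travel router - tried first (higher priority)
network={
        ssid="tplink_travel_ssid"
        psk=PSK_GENERATED_BY_WPA_PASSPHRASE
        key_mgmt=WPA-PSK
        priority=2
}

# home wireless network - fallback
network={
        ssid="home_ssid"
        psk=PSK_GENERATED_BY_WPA_PASSPHRASE
        key_mgmt=WPA-PSK
        priority=1
}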
Wireless Network Priority
Note the priority of the TP-LINK router is set to 2, while my home NETGEAR router is set to 1. This ensures wpa_supplicant will attempt to connect to the TP-LINK network first, before attempting the home network. The higher number gets priority. The best resource I’ve found, which explains all the configuration options in detail, is here. In this example wpa_supplicant configuration file, priority is explained this way: ‘by default, all networks will get same priority group (0). If some of the networks are more desirable, this field can be used to change the order in which wpa_supplicant goes through the networks when selecting a BSS. The priority groups will be iterated in decreasing priority (i.e., the larger the priority value, the sooner the network is matched against the scan results). Within each priority group, networks will be selected based on security policy, signal strength, etc.’
Conclusion
If you want an easy, inexpensive, secure way to connect to your Raspberry Pi, in the vehicle or other location, a travel-size wireless router is a great solution. Best of all, configuring it for your Raspberry Pi is simple if you use wpa_supplicant.
Using a Startup Script to Save Motion/FFmpeg Videos and Images on The Raspberry Pi
Posted by Gary A. Stafford in Bash Scripting, Raspberry Pi, Software Development on July 9, 2013
Use a start-up script to overcome limitations of Motion/FFmpeg and save multiple Raspberry Pi dashboard camera timelapse videos and images, automatically.
Introduction
In my last post, Raspberry Pi-Powered Dashboard Video Camera Using Motion and FFmpeg, I demonstrated how the Raspberry Pi can be used as a low-cost dashboard video camera. One of the challenges I faced in that post was how to save the timelapse videos and individual images (frames) created by Motion and FFmpeg when the Raspberry Pi is turned on and off. Each time the car starts, the Raspberry Pi boots up and Motion begins to run; the previous images and video, stored in the default ‘/tmp/motion/’ directory, are removed, and new images and video are created.
Take the average daily commute: we drive to and from work. Maybe we stop for a morning coffee, or stop at the store on the way home to pick up dinner. Maybe we use our car to go out for lunch. Our car starts, stops, starts, stops, starts, and stops. Our daily commute actually encompasses a series of small trips, and therefore multiple dash-cam timelapse videos. If you are only interested in keeping the latest timelapse video in case of an accident, then this may not be a problem. When the accident occurs, simply pull the SDHC card from the Raspberry Pi and copy the video and images off to your laptop.
However, if you are interested in capturing and preserving series of dash-cam videos, such as in the daily commute example above, then the default behavior of Motion is insufficient. To preserve each video segment or series of images, we need a way to preserve the content created by Motion and FFmpeg, before they are overwritten. In this post, I will present a solution to overcome this limitation.
The process involves the following steps:
- Change the default location where Motion stores timelapse videos and images to somewhere other than a temporary directory;
- Create a startup script that will move the video and images to a safe location when restarting the Pi;
- Configure the Pi’s Debian operating system to run this script at startup (and optionally shutdown), before Motion starts;
Sounds pretty simple. However, understanding how startup scripts work with Debian’s Init program, making sure the new move script runs before Motion starts, and knowing how to move a huge number of files, all required forethought.
Change Motion’s Default Location for Video and Images
To start, change the default location where Motion stores timelapse video and images, from ‘/tmp/motion/’ to a location outside the ‘/tmp’ directory. I chose to create a directory called ‘/motiontmp’. Make sure you set the permissions on the new ‘/motiontmp’ directory, so Motion can write to it:
sudo chmod -R 777 /motiontmp
To have Motion use this location, we need to modify the Motion configuration file:
sudo nano /etc/motion/motion.conf
Change the following setting, shown below. Note when Motion starts for the first time, it will create the ‘motion’ subfolder inside ‘motiontmp’. You do not have to create it yourself.
# Target base directory for pictures and films
# Recommended to use absolute path. (Default: current working directory)
target_dir /motiontmp/motion
Create the Startup Script to Move Video and Images
Next, create the new shell script that will run at startup to move Motion’s video and images. The script creates a timestamped folder in the new ‘/motiontmp’ directory for each series of images and video. The script then moves all files from the ‘motion’ directory to the new timestamped directory. Before moving them, the script deletes any zero-byte jpegs, which are images that did not fully process prior to the Raspberry Pi being shut off when the car stopped. To create the new script, run the following command.
sudo nano /etc/init.d/motionStartup.sh
Copy the following contents into the script and save it.
#!/bin/sh
### BEGIN INIT INFO
# Provides:          motionStartup
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Move motion files at startup.
# Description:       Move motion files at startup.
# X-Start-Before:    motion
### END INIT INFO

# /etc/init.d/motionStartup
#
# Some things that run always
#touch /var/lock/motionStartup
logger -s "Script motionStartup called"

# Carry out specific functions when asked to by the system
case "$1" in
  start)
    logger -s "Script motionStartup started"
    TIMESTAMP=$(date +%Y%m%d%H%M%S | sed 's/ //g') # No spaces
    logger -s "Script motionStartup $TIMESTAMP"
    sudo mkdir /motiontmp/$TIMESTAMP || logger -s "Error mkdir start"
    find /motiontmp/motion/. -type f -size 0 -print0 -delete
    find /motiontmp/motion/. -maxdepth 1 -type f | \
      xargs -I '{}' sudo mv {} /motiontmp/$TIMESTAMP
    ;;
  stop)
    logger -s "Script motionStartup stopped"
    ;;
  *)
    echo "Usage: /etc/init.d/motionStartup {start|stop}"
    exit 1
    ;;
esac

exit 0
Note the ‘X-Start-Before’ setting at the top of the script. An explanation of this setting is found on the Debian Wiki website. According to the site, ‘There is no such standard-defined header, but there is a proposed extension implemented in the insserv package (since version 1.09.0-8). Use the X-Start-Before and X-Stop-After headers proposed by SuSe.’ To make sure you have a current version of ‘insserv’, you can run the following command:
dpkg -l insserv
Also, note how the files are moved by the script:
find /motiontmp/motion/. -maxdepth 1 -type f | \
  xargs -I '{}' sudo mv {} /motiontmp/$TIMESTAMP
It’s not as simple as using ‘mv *.*’ when you have a few thousand files. This will likely throw an ‘Argument list too long’ error. According to one Stack Overflow answer, the error occurs because bash actually expands the asterisk to every matching file, producing a very long command line. Using ‘find’ combined with ‘xargs’ gets around this problem. The ‘xargs’ command splits up the list and issues several commands if necessary. This issue applies to several commands, including rm, cp, and mv.
Lastly, note the use of the ‘logger‘ commands throughout the script. These are optional and may be removed. I like to log the script’s progress for troubleshooting purposes. Using the above ‘logger’ commands, I can easily pinpoint issues by looking at the log with grep, such as:
tail -500 /var/log/messages | grep 'motionStartup' | grep 'logger:'
You can test the script by running the following command:
/etc/init.d/./motionStartup.sh start
You should see a series of three messages output to the screen by the script, confirming the script is working. Note the new timestamped folder created by the script, below.
Below is an example of how the directory structure should look after a few videos are created by Motion, and the Raspberry Pi cycled off and on. You need to complete the rest of the steps in this post for this to work automatically.
Shutdown Script?
I know, the name of the post clearly says ‘Startup Script’. Well, here is a little tip: if you copy the code from the ‘start’ method and paste it into the ‘stop’ method, the script also works at shutdown. If you do a proper shutdown (such as ‘sudo reboot’), the Raspberry Pi’s OS will call the script’s ‘stop’ method. The ‘start’ method is more useful for us in the car, where we may not be able to do a proper shutdown; we just turn the car off and kill power to the Pi. However, if you are shutting down from a mobile device via ssh, or using a micro keyboard and LCD monitor, the script will do its work on the way down.
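For reference, here is a minimal sketch of what the ‘stop’ case looks like with the start logic copied in; it is identical to ‘start’ apart from the log messages:

stop)
  logger -s "Script motionStartup stopped"
  TIMESTAMP=$(date +%Y%m%d%H%M%S | sed 's/ //g') # No spaces
  logger -s "Script motionStartup $TIMESTAMP"
  sudo mkdir /motiontmp/$TIMESTAMP || logger -s "Error mkdir stop"
  find /motiontmp/motion/. -type f -size 0 -print0 -delete
  find /motiontmp/motion/. -maxdepth 1 -type f | \
    xargs -I '{}' sudo mv {} /motiontmp/$TIMESTAMP
  ;;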
Configure Debian OS to Run the New Startup Script
To have our new script run on startup, install it by running the following command:
sudo update-rc.d motionStartup.sh defaults
A full explanation of this command is too complex for this brief post. A good overview of creating startup scripts and installing them in Debian is found on the Debian Administration website; this is the source I used to start to understand runlevels. There are also a few links at the end of the post. To tell which runlevel (state) you are running at, use the following command:
runlevel
To make sure the startup script was installed properly, run the following command. This will display the contents of each ‘rc*.d’ folder. Each folder corresponds to a runlevel – 0, 1, 2, etc. Each folder contains symbolic links to the actual scripts. The links are named in order of execution (S01…, S02…, S03…):
ls /etc/rc*.d
Look for the new script listed under the appropriate runlevel(s). The new script should be listed before ‘motion’, as shown below.
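A quick way to confirm this without scanning the whole listing is to filter the output for the script; a small convenience, and the exact output will vary by runlevel:

ls /etc/rc*.d/ | grep -i motion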
If for any reason you need to uninstall the new script (not delete/remove it), run the following command. This is not a common task, but it is necessary if you need to change the order of execution of the scripts or rename a script.
sudo update-rc.d -f motionStartup.sh remove
Copy and Remove Files from the Raspberry Pi
Once the startup script is working and we are capturing images and timelapse video, the next thing we will probably want to do is copy files off the Raspberry Pi. To do this over your WiFi network, use an ‘scp’ command from a remote machine. The command below copies all directories starting with ‘2013’, and their contents, to the remote machine, preserving the directory structure.
scp -rp user@ip_address_of_pi:/motiontmp/2013* ~/local_directory/
Maybe you just want the timelapse videos Motion/FFmpeg creates and don’t care about the images. The following command copies just the MPEG videos from all ‘2013’ folders to a single directory on your remote machine. The directory structure is ignored during the copy. This is the quickest way to store all the videos in one place.
scp -rp user@ip_address_of_pi:/motiontmp/2013*/*.mpg ~/local_directory/
If you are going to save all the MPEG timelapse videos in one location, I recommend changing the naming convention of the videos in the motion.conf file. I have added the hour, minute, and seconds to mine. This will ensure the names don’t conflict when moved to a common directory:
# File path for timelapse mpegs relative to target_dir
# Default: %Y%m%d-timelapse
# Default value is near equivalent to legacy oldlayout option
# For Motion 3.0 compatible mode choose: %Y/%m/%d-timelapse
# File extension .mpg is automatically added so do not include this
timelapse_filename %Y%m%d%H%M%S-timelapse
To remove all the videos and images once they have been moved off the Pi and are no longer needed, you can run an rm command. The ‘-rf’ options ensure the directories and their contents are removed.
sudo rm -rf /motiontmp/2013*
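If you would rather keep recent trips on the card and only purge older ones, a variation using ‘find’ works as well. This is only a sketch; adjust the ‘+7’ (days) to suit your needs:

# Remove only timestamped directories older than seven days
sudo find /motiontmp -maxdepth 1 -type d -name '2013*' -mtime +7 -exec rm -rf {} +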
Conclusion
The only issue I have yet to overcome is maintaining the current time on the Raspberry Pi. The Pi lacks a Real Time Clock (RTC), so turning it on and off in the car causes it to lose the current time. Since the Pi is not always on a WiFi network, it cannot sync the time when restarted. The only side-effects I’ve seen so far are that the videos occasionally contain more than one driving event, and the time displayed in the videos is not always correct. Otherwise, the process works pretty well.
Resources
The following are some useful resources on this topic:
Debian Reference: Chapter 3. The system initialization
How-To: Managing services with update-rc.d
Files and scripts that execute on boot
Making scripts run at boot time with Debian
Finding all files and move to new directory from shell prompt
Shell scripting: Write message to a syslog / log file
“Argument list too long”: Beyond Arguments and Limitations
Linux / Unix Command: date (used for TIMESTAMP)
Raspberry Pi-Powered Dashboard Video Camera Using Motion and FFmpeg
Posted by Gary A. Stafford in Bash Scripting, Raspberry Pi, Software Development on June 30, 2013
Demonstrate the use of the Raspberry Pi and a basic webcam, along with Motion and FFmpeg, to build a low-cost dashboard video camera for your daily commute.
Dashboard Video Cameras
Most of us remember the proliferation of dashboard camera videos of the February 2013 meteor racing across the skies of Russia. This rare astronomical event was captured on many Russian motorists’ dashboard cameras. Due to the dangerous driving conditions in Russia, many drivers rely on dashboard cameras for insurance and legal purposes. In the United States, we are more used to seeing dashboard cameras used by law enforcement. Who hasn’t seen those thrilling police videos of car crashes, drunk drivers, and traffic stops gone wrong?
Although driving in the United States is not as dangerous as in Russia, there is no reason we can’t also use dashboard cameras. If you are involved in an accident, you will have a video record of the event for your insurance company. If you witness an accident or other dangerous situation, your video may help law enforcement and other emergency responders. Or maybe you just want to record a video diary of your next road trip.
A wide variety of dashboard video cameras, available for civilian vehicles, can be seen on Amazon’s website. They range in price and quality from less than $50 USD to well over $300 USD, depending on their features. In a popular earlier post, Remote Motion-Activated Web-Based Surveillance with Raspberry Pi, I demonstrated the use of the Raspberry Pi and a webcam, along with Motion and FFmpeg, to provide low-cost, web-based, remote surveillance. There are many other uses for this combination of hardware and software, including as a dashboard video camera.
Methods for Creating Dashboard Camera Videos
I’ve found two methods for capturing dashboard camera videos. The first and easiest method involves configuring Motion to use FFmpeg to create a video. FFmpeg creates a video from individual images (frames) taken at regular intervals while driving. The upside of the FFmpeg option is that it gives you a quick, ready-made video. The downside is that you cannot fully control the high level of video compression and the high frame rate (fps), which makes it hard to discern fine details when viewing the video.
Alternately, you can capture individual JPEG images and combine them yourself, using FFmpeg from the command line or a third-party movie-editing tool. The advantage of combining the images yourself is that you have more control over the quality and frame rate of the video. Altering the frame rate alters your perception of the speed of the vehicle recording the video. The only disadvantage is the extra steps involved in processing the images into a video.
At one frame every two seconds (0.5 fps), a 30-minute commute to work will generate 30 frames/minute x 30 minutes, or 900 jpeg images. At 640 x 480 pixels, depending on your jpeg compression ratio, that’s a lot of data to move around and crunch into a video. If you just want a basic record of your travels, use FFmpeg. If you want a higher-quality record of the trip, maybe for a video diary, combining the frames yourself is a better way to go.
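To see how much space your own captures actually consume, you can check Motion’s target directory after a trip. This is a quick sketch; substitute the target_dir value from your motion.conf for ‘/tmp/motion’:

ls /tmp/motion/*.jpg | wc -l   # number of captured frames
du -sh /tmp/motion             # total disk space used by the captures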
Configuring Motion for a Dashboard Camera
The installation and setup of FFmpeg and Motion are covered in my earlier post, so I won’t repeat that here. Below are several Motion settings I recommend starting with for a dashboard video camera. To configure Motion, open its configuration file by entering the following command on your Raspberry Pi:
sudo nano /etc/motion/motion.conf
To use FFmpeg, the first method, find the ‘FFMPEG related options’ section of the configuration and locate ‘Use ffmpeg to encode a timelapse movie’. Enter a number for the ‘ffmpeg_timelapse’ setting. This is the rate at which images are captured and combined into a video. I suggest starting with 2 seconds. With a dashboard camera, you are trying to record important events as you drive. In as little as 2-3 seconds at 55 mph, you can miss a lot of action. Moving the setting down to 1 second will give more detail, but you will chew up a lot of disk space, if that is an issue for you. I would experiment with different values:
# Use ffmpeg to encode a timelapse movie
# Default value 0 = off - else save frame every Nth second
ffmpeg_timelapse 2
To use the ‘do-it-yourself’ FFmpeg method, locate the ‘Snapshots’ section. Find ‘Make automated snapshot every N seconds (default: 0 = disabled)’. Change the ‘snapshot_interval’ setting, using the same logic as the ‘ffmpeg_timelapse’ setting, above:
# Make automated snapshot every N seconds (default: 0 = disabled)
snapshot_interval 2
Regardless of which method you choose (or use both), you will want to tweak a few more settings. In the ‘Text Display’ section, locate ‘Set to ‘preview’ will only draw a box in preview_shot pictures.’ Change the ‘locate’ setting to ‘off’. As shown in the video frame below, since you are moving in your vehicle most of the time, there is no sense turning this option on. Motion cannot differentiate between the highway zipping by the camera and approaching vehicles; everything is in motion relative to the camera, and the box just gets in the way:
# Set to 'preview' will only draw a box in preview_shot pictures.
locate off
Optionally, I recommend turning on the time-stamp option, found right below the ‘locate’ setting. Especially in the event of an accident, you want an accurate time-stamp on the video or still images (make sure your Raspberry Pi’s time is correct):
# Draws the timestamp using same options as C function strftime(3)
# Default: %Y-%m-%d\n%T = date in ISO format and time in 24 hour clock
# Text is placed in lower right corner
text_right %Y-%m-%d\n%T-%q
Starting with the largest, best quality images will ensure the video quality is optimal. Start with a large size capture and reduce it only if you are having trouble capturing the video quickly enough. These settings are found in the ‘Capture device options’ section:
# Image width (pixels). Valid range: Camera dependent, default: 352
width 640

# Image height (pixels). Valid range: Camera dependent, default: 288
height 480
Similarly, I suggest starting with a low amount of jpeg compression to maximize quality and only lower if necessary. This setting is found in the ‘Image File Output’ section:
# The quality (in percent) to be used by the jpeg compression (default: 75)
quality 90
Once you have completed the configuration of Motion, restart Motion for the changes to take effect:
sudo /etc/init.d/motion restart
Since you will be powering on your Raspberry Pi in your vehicle, and may have no way to reach Motion from a command line, you will want Motion to start capturing video and images for you automatically at startup. To enable Motion (the motion daemon) on start-up, edit the /etc/default/motion file.
sudo nano /etc/default/motion
Change the ‘start_motion_daemon‘ setting to ‘yes’. If you later decide to stop using the Raspberry Pi for capturing video, remember to disable this option; Motion will keep generating video and images, even without a camera connected, if the daemon process is running.
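The relevant line in /etc/default/motion should end up looking like this (the Debian package typically ships with it set to ‘no’):

start_motion_daemon=yes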
Capturing Dashboard Video
Although taking dashboard camera videos with your Raspberry Pi sounds easy, it presents several challenges. How will you mount your camera? How will you adjust your camera’s view? How will you power your Raspberry Pi in the vehicle? How will you power down your Raspberry Pi from the vehicle? How will you make sure Motion is running? How will you get the video and images off the Raspberry Pi? Do you have a mini keyboard and LCD monitor to use in your vehicle? Or is your Raspberry Pi on your wireless network? If so, do you know how to bring up the camera’s view and Motion’s admin site in your smartphone’s web browser?
My start-up process is as follows:
- Start my car.
- Plug the webcam and the power cable into the Raspberry Pi.
- Let the Raspberry Pi boot up fully and allow Motion to start. This takes less than one minute.
- Open the http address Motion serves up using my mobile browser.
(Since my Raspberry Pi has a wireless USB adapter installed, I’m still able to connect from my garage.)
- Adjust the camera using the mobile browser view from the camera.
- Optionally, use Motion’s ‘HTTP Based Control’ feature to adjust any Motion configurations, on-the-fly (great option).
Once I reach my destination, I copy the video and/or still image frames off the Raspberry Pi:
- Let the car run for at least 1-2 minutes after you stop. The Raspberry Pi is still processing the images and video.
- Copy the files off the Raspberry Pi over the local network, right from the car (if in range of my LAN).
- Alternately, shut down the Raspberry Pi using an SSH mobile app on your smartphone (see the example after this list), or just shut the car off (not the safest method!).
- Place the Pi’s SDHC card into my laptop and copy the video and/or still image frames.
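For the SSH option, a clean shutdown is a one-liner from any machine or mobile SSH app on the same network. This is only a sketch; substitute your Pi’s username and IP address:

ssh pi@ip_address_of_pi 'sudo shutdown -h now'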
Here are some tips I’ve found that make creating dashboard camera videos easier and better quality:
- Leave your camera in your vehicle once you mount and position it.
- Make sure your camera is secure, so the vehicle’s vibrations while driving don’t create bouncy images or change the camera’s field of view.
- Clean your vehicle’s front window, inside and out. Bugs or other dirt are picked up by the camera and may affect the webcam’s focus.
- Likewise, film on the window from smoking or dirt will soften the details of the video and create harsh glare when driving on sunny days.
- Similarly, make sure your camera’s lens is clean.
- Keep your dashboard clear of objects such as paper, as it reflects on the window and will obscure the dashboard camera’s video.
- Constantly stopping your Raspberry Pi by shutting the vehicle off can potentially damage the Raspberry Pi and/or corrupt the operating system.
- Make sure to keep your Raspberry Pi out of sight of potential thieves and the direct sun when you are not driving.
- Back up your Raspberry Pi’s SDHC card before using it as a dashboard camera; see Duplicating Your Raspberry Pi’s SDHC Card.
Creating Video from Individual Dashboard Camera Images
FFmpeg
If you choose the second method for capturing dashboard camera videos, the easiest way to combine the individual dashboard camera images is by calling FFmpeg from the command line. To create the example #3 video, shown below, I ran two commands from a Linux terminal prompt. The first is a bash command to rename all the images to four-digit, incremented numbers (‘0001.jpg’, ‘0002.jpg’, ‘0003.jpg’, etc.), which makes it easier to run the second command. I found this script on Stack Overflow. It requires Gawk (‘sudo apt-get install gawk’). If you are unsure about running this command, make a copy of the original images in case something goes wrong.
The second command is a basic FFmpeg command to combine the images into a 20 fps MPEG-4 video file. More information on running FFmpeg can be found on their website. There are a huge number of options available with FFmpeg from the command line. Running this command, FFmpeg processed 4,666 frames at 640 x 480 pixels in 233.30 seconds, outputting a 147.5 MB MPEG-4 video file.
find -name '*.jpg' | sort | gawk '{ printf "mv %s %04d.jpg\n", $0, NR }' | bash

ffmpeg -r 20 -qscale 2 -i %04d.jpg output.mp4
Example #3 – FFmpeg Video from Command Line
If you want to compress the video, you can chain a second FFmpeg command to the first one, similar to the one below. In my tests, this reduced the video size to 20-25% of the original uncompressed version.
ffmpeg -r 20 -qscale 2 -i %04d.jpg output.mp4 && ffmpeg -i output.mp4 -vcodec mpeg2video output_compressed.mp4
If your images are too dark (early morning or overcast) or have a color cast (a poor webcam or tinted windows), you can use a program like ImageMagick to adjust all the images as a single batch. In example #5 below, I pre-processed all the images prior to making the video. With one ImageMagick command, I adjusted their levels to make them lighter and less flat.
mogrify -level 12%,98%,1.79 *.jpg
Example #5 – FFmpeg Uncompressed Video from Command Line
Windows MovieMaker
Using Windows MovieMaker was not my first choice, but I’ve had a tough time finding an equivalent Linux GUI-based application. If you are going to create your own video from the still images, you need to be able to import and adjust thousands of images quickly and easily. With MovieMaker, I can import, create, and export a typical video of a 30-minute trip in 10 minutes. MovieMaker also lets you add titles, special effects, and so forth.
Sample Videos
Below are a few dashboard video examples using a variety of methods. In the first two examples, I captured still images and created the FFmpeg video at the same time. You can compare quality of Method #1 to #2.
Example #2a – Motion/FFmpeg Video
Example #2b – Windows MovieMaker
Example #5 – FFmpeg Compressed Video from Command Line
Example #6 – FFmpeg Compressed Video from Command Line
Useful Links
Renaming files in a folder to sequential numbers
ImageMagick: Command-line Options
Deploying Applications to WebLogic Server on Oracle’s Pre-Built Development VM
Posted by Gary A. Stafford in DevOps, Enterprise Software Development, Java Development, Software Development on May 24, 2013
Create a new WebLogic Server domain on Oracle’s Pre-built Development VM. Remotely deploy a sample web application to the domain from a remote machine.
Introduction
In my last two posts, Using Oracle’s Pre-Built Enterprise Java VM for Development Testing and Resizing Oracle’s Pre-Built Development Virtual Machines, I introduced Oracle’s Pre-Built Enterprise Java Development VM, aka a ‘virtual appliance’. Oracle has provided ready-made VMs that would take a team of IT professionals days to assemble. The Oracle Linux 5 OS-based VM has almost everything that comprises a basic enterprise test and production environment based on the Oracle/Java technology stack. The VM includes Java JDK 1.6+, WebLogic Server, Coherence, TopLink, Subversion, Hudson, Maven, NetBeans, Enterprise Pack for Eclipse, and so forth.
One of the first things you will probably want to do, once your Oracle’s Pre-Built Enterprise Java Development VM is up and running, is deploy an application to WebLogic Server. According to Oracle, WebLogic Server is ‘a scalable, enterprise-ready Java Platform, Enterprise Edition (Java EE) application server.’ Even if you haven’t used WebLogic Server before, don’t worry, Oracle has designed it to be easy to get started.
In this post I will cover creating a new WebLogic Server (WLS) domain, and deploying a simple application to WLS from a remote development machine. The major steps in the process presented in this post are as follows:
- Create a new WLS domain
- Create and build a sample application
- Deploy the sample application to the new WLS domain
- Access deployed application via a web browser
Networking
First, let me review how I have my VM configured for networking, so you will understand my deployment methodology, discussed later. The way you configure your Oracle VM VirtualBox appliance will depend on your network topology. Again, keeping it simple for this post, I have given the Oracle VM a static IP address (192.168.1.88). The machine on which I am hosting VirtualBox uses DHCP to obtain an IP address on the same local wireless network.
For the VM’s VirtualBox networking mode, I have chosen the ‘Bridged Adapter‘ mode. Using this mode, any machine on the network can access the VM through the host machine, via the VM’s IP address. One of the best posts I have read on VM networking is on Oracle’s The Fat Bloke Sings blog, here.
Creating New WLS Domain
A domain, according to Oracle, is ‘the basic administrative unit of WebLogic Server. It consists of one or more WebLogic Server instances, and logically related resources and services that are managed, collectively, as one unit.’ Although the Oracle Development VM comes with pre-existing domains, we will create our own for this post.
To create the new domain, we will use Oracle’s Fusion Middleware Configuration Wizard. The Wizard will take you through a step-by-step process to configure your new domain. To start the wizard, from within the Oracle VM, open a terminal window, and use the following command to switch to the Wizard’s home directory and start the application.
/labs/wls1211/wlserver_12.1/common/bin/config.sh
There are a lot of configuration options available in the Wizard. I have selected some basic settings, shown below, to configure the new domain. Feel free to change the settings as you step through the Wizard to meet your own needs. Make sure to use the ‘Development Mode’ Start Mode option for this post. Also, make sure to note the admin port of the domain, the domain’s location, and the username and password you choose.
- 01 – New WebLogic Domain
- 02 – Selecting the Domain Source
- 03 – Naming the Domain
- 04 – Domain User Name and Password
- 05 – Startup and JDK
- 06 – Optional Configuration
- 07 – Administrative Configuration
- 08 – JMS Configuration
- 09 – Managed Servers
- 10 – Clusters
- 11 – Machines
- 12 – Target Services
- 13 – JMS File Stores
- 14 – RDBMS
- 15 – Configuration Summary
Starting the Domain
To start the new domain, open a terminal window in the VM and run the following command to change to the root directory of the new domain and start the WLS domain instance. Your domain path and domain name may be different. The start script command will bring up a new terminal window, showing you the domain starting.
/labs/wls1211/user_projects/domains/blogdev_domain/startWebLogic.sh
WLS Administration Console
Once the domain starts, test it by opening a web browser on the host machine and entering the URL of the WLS Administration Console. If your networking is set up correctly, the host machine will be able to connect to the VM and open the domain, running on the port you indicated when creating the domain, at the static IP address of the VM. If your IP address and port are different, make sure to change the URL. To log into the WLS Administration Console, use the username and password you chose when you created the domain.
http://192.168.1.88:7031/console/login/LoginForm.jsp
Before we start looking around the new domain however, let’s install an application into it.
Sample Java Application
If you have an existing application you want to install, you can skip this part. If you don’t, we will quickly create a simple Java EE Hello World web application, using a pre-existing sample project in NetBeans – no coding required. From your development machine, create a new Samples -> Web Services -> REST: Hello World (Java EE 6) Project. You now have a web project containing a simple RESTful web service, Servlet, and Java Server Page (.jsp). Build the project in NetBeans. We will upload the resulting .war file manually, in the next step.
In a previous post, Automated Deployment to GlassFish Using Jenkins CI Server and Apache Ant, we used the same sample web application to demonstrate automated deployments to Oracle’s GlassFish application server.
Deploying the Application
There are several methods to deploy applications to WLS, depending on your development workflow. For this post, we will keep it simple. We will manually deploy our web application’s .war file to WLS using the browser-based WLS Administration Console. In a future post, we will use Hudson, also included on the VM, to build and deploy an application, but for now we will do it ourselves.
To deploy the application, switch back to the WLS Administration Console. Following the screen grabs below, you will select the .war file, built from the above web application, and upload it to the Oracle VM’s new WLS domain. The .war file has all the necessary files, including the RESTful web service, Servlet, and the .jsp page. Make sure to deploy it as an ‘application’ as opposed to a ‘library’ (see ‘target style’ configuration screen, below).
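If you prefer the command line to the console, WLS also ships with the weblogic.Deployer utility. The following is only a rough sketch, not the method used in this post; the weblogic.jar path, admin URL, target name (‘AdminServer’), application name, .war path, and credentials are all assumptions you will need to adjust for your own VM and domain:

java -cp /labs/wls1211/wlserver_12.1/server/lib/weblogic.jar weblogic.Deployer \
  -adminurl t3://192.168.1.88:7031 -username your_admin_user -password your_password \
  -deploy -name HelloWebLogicServer -targets AdminServer \
  -source /path/to/HelloWebLogicServer.war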
Accessing the Application
Now that we have deployed the Hello World application, we will access it from our browser. From any machine on the same network, point a browser to the following URL, adjusting it if your VM’s IP address and domain’s port are different.
http://192.168.1.88:7031/HelloWebLogicServer/resources/helloWorld
The Hello World RESTful web service’s Web Application Description Language (WADL) description can be viewed at:
http://192.168.1.88:7031/HelloWebLogicServer/resources/application.wadl
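If you just want a quick check from a terminal, the same two URLs can be requested with curl from any machine on the network (assuming the IP address and port shown above):

curl http://192.168.1.88:7031/HelloWebLogicServer/resources/helloWorld
curl http://192.168.1.88:7031/HelloWebLogicServer/resources/application.wadl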
Since the Oracle VM is accessible from anywhere on the network, the deployed application is also accessible from any device on the network, as demonstrated below.
Conclusion
This was a simple demonstration of deploying an application to WebLogic Server on Oracle’s Pre-Built Enterprise Java Development VM. WebLogic Server is a powerful, feature-rich Java application server. Once you understand how to configure and administer WLS, you can deploy more complex applications. In future posts we will show a more common, slightly more complex example of automated deployment from Hudson. In addition, we will show how to create a datasource in WLS and access it from the deployed application, to talk to a relational database.
Using Oracle’s Pre-Built Enterprise Java VM for Development Testing
Posted by Gary A. Stafford in Enterprise Software Development, Java Development, Oracle Database Development, Software Development on April 19, 2013
Install and configure Oracle’s Pre-Built Enterprise Java Development VM, with Oracle Linux 5, to create quick, full-featured development test environments.
Virtual Machines for Software Developers
As software engineers, we spend a great deal of time configuring our development machines to simulate test and production environments in which our code will eventually run. With the Microsoft/.NET technology stack, that most often means installing and configuring .NET, IIS, and SQL Server. With the Oracle/Java technology stack – Java, WebLogic or GlassFish Application Server, and Oracle 11g.
Within the last few years, the growth of virtual machines (VMs) within IT/IS organizations has exploded. According to Wikipedia, a virtual machine (VM), is ‘a software implementation of a machine (i.e. a computer) that executes programs like a physical machine.’ Rapid and inexpensive virtualization of business infrastructure using VMs has led to the exponential growth of private and public cloud platforms.
Instead of attempting to configure development machines to simulate the test and production environments, which are simultaneously running development applications and often personal programs, software engineers can leverage VMs in the same way as IT/IS organizations. Free, open-source virtualization software products from Oracle and VMware offer developers the ability to easily ‘spin-up’ fresh environments to compile, deploy, and test code. Code is tested in a pristine environment, closely configured to match production, without the overhead and baggage of day-to-day development. When testing is complete, the VM is simply deleted and a new copy re-deployed for the next project.
Oracle Pre-Built Virtual Appliances
I’ve worked with a number of virtualization products, based on various Windows and Linux operating systems. No matter the product or OS, the VM still needs to be set up just like any other new computer system, with software and configuration. However, recently I began using Oracle’s pre-built developer VMs. Oracle offers a number of pre-built VMs for various purposes, including database development, enterprise Java development, business intelligence, application hosting, SOA development, and even PHP development. The VMs, called virtual appliances, are Open Virtualization Format Archive files, built to work with Oracle VM VirtualBox. Simply import the appliance into VirtualBox and start it up.
Oracle has provided ready-made VMs that would take even the most experienced team of IT professionals days to download, install, configure, and integrate. All the configuration details, user accounts information, instructions for use, and even pre-loaded tutorials, are provided. Oracle notes on their site that these VMs are not intended for use in production. However, the VMs are more than robust enough to use as a development test environment.
Because of its similarity to my production environment, I installed the Enterprise Java Development VM on a Windows 7 Enterprise-based development computer. The Oracle Linux 5 OS-based VM has almost everything that comprises a basic enterprise test and production environment based on the Oracle/Java technology stack. The VM includes an application server, source control server, build automation server, Java SDK, two popular IDEs, and related components: Java JDK 1.6+, WebLogic Server, Coherence, TopLink, Subversion, Hudson, Maven, NetBeans, Enterprise Pack for Eclipse, and so forth.
Aside from a database server, the environment has everything most developers might need to develop, build, store, and host their code. If you need a database, as most of us do, you can install it into the VM, or better yet, implement the Database App Development VM, in parallel. The Database VM contains Oracle’s 11g Release 2 enterprise-level relational database, along with several related database development and management tools. Using a persistence layer (data access layer), built with the included EclipseLink, you can connect the Enterprise appliance to the database appliance.
Set-Up Process
I followed these steps to set up my VM:
- Update (or download and install) Oracle VM VirtualBox to the latest release.
- Download (6) Open Virtualization Format Archive (OVF/.ova) files.
- Download script to combine the .ova files.
- Execute script to assemble (6) .ova files into a single .ova file.
- Import the appliance (combined .ova file) into VirtualBox.
- Optional: Clone and resize the appliance’s (2) virtual machines disks (see note below).
- Optional: Add the Yum Server configuration to the VM to enable normal software updates (see instructions below).
- Change any necessary settings within VM: date/time, timezone, etc.
- Install and/or update system software and development applications within VM: Java 1.7, etc.
Issue with Small Footprint of VM
The small size of the pre-built VM was a major issue I ran into almost immediately. Note in the screen grab above of VirtualBox, the Oracle VM only has (2) 8 GB virtual machine disks (.vmdk). Although Oracle designed the VMs to have a small footprint, it was so small that I quickly filled up its primary partition. At that point, the VM was too full to apply even the normal system updates. I switched the cache location for yum to a different partition, but then ran out of space again when yum tried to apply the updates it had downloaded to the primary partition.
Lack of disk space was a complete show-stopper for me until I researched a fix. Unfortunately, VirtualBox’s ‘VBoxManage modifyhd --resize’ command is not yet compatible with the virtual machine disk (.vmdk) format. After much trial and error, and a few late nights reading posts by others who had run into this problem, I found a fairly easy series of steps to enlarge the size of the VM. It doesn’t require you to be a complete ‘Linux Geek’, and only takes about 30 minutes of copying and restarting the VM a few times. I will include the instructions in a separate, upcoming post.
Issue with Package Updater
While solving the VM’s disk space issue, I also discovered the VM’s Enterprise Linux system was set up with something called the Unbreakable Linux Network (ULN) Update Agent. From what I understood, without a service agreement with Oracle, I could not update the VM’s software using the standard Package Updater. However, a few quick commands I found on the Yum Server site overcame that limitation and allowed me to update the VM’s software. Just follow the simple instructions there for Oracle Linux 5. There are several hundred updates that will be applied, including an upgrade of Oracle Linux from 5.5 to 5.9.
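For reference, the Yum Server instructions for Oracle Linux 5 boil down to something like the following. This is only a sketch from memory; the repository file name and section name may differ, so verify against the instructions on the Yum Server site:

cd /etc/yum.repos.d
sudo wget http://public-yum.oracle.com/public-yum-el5.repo
# Edit public-yum-el5.repo and set enabled=1 for the latest repository (e.g., [el5_latest])
sudo yum update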
Issue with Java Updates
Along with the software updates, I ran into an issue installing the latest version of Java. I attempted to install the standard Oracle package that contained the latest Java JDK, JRE, and NetBeans for Linux. Upon starting the install script, I immediately received a ‘SELinux AVC denial’ message. This security measure halted my installation with the following error: ‘The java application attempted to load /labs/java/jre1.7.0_21/lib/i386/client/libjvm.so which requires text relocation. This is a potential security problem.‘
To get around the SELinux AVC denial issue, I installed the JRE, JDK, and NetBeans separately. Although this took longer and required a number of steps, it allowed me to get around the security and install the latest version of Java.
Note: I later discovered that I could have simply changed the SELinux Security Level to ‘Permissive’ in the SELinux Security and Firewall settings, part of the Administrative Preferences. This would have allowed the original Oracle package, containing the JDK, JRE, and NetBeans, to run.
Using soapUI to Test RESTful Web Services
Posted by Gary A. Stafford in Java Development on April 16, 2013
Introduction
There are many excellent tools available to test RESTful web services. Applications like Fiddler, cURL, Firefox with Firebug, and Google Chrome’s Advanced REST Client and REST Console are commonly used by developers and test engineers. Another powerful tool, used by many enterprise software development organizations, is soapUI by SmartBear Software.
SmartBear offers several versions, from the free open-source edition (shown here) to the full-featured soapUI Pro. Although soapUI Pro has many useful advanced features, the free edition is fine to start with. Here is a product comparison of the various editions available from SmartBear.
SmartBear’s soapUI is available for Windows, Mac OS, and Linux (shown here). According to the SmartBear web site, the application supports a wide range of technologies, including SOAP, WSDL, REST, HTTP, HTTPS, AMF, JDBC, JMS, WS-I Integration, WS-Security, WS-Addressing, and WS-Reliable Messaging. It is easy to download and install.
Testing RESTful Web Services with soapUI
In my last post, we used JDBC to map JPA entity classes to tables and views within a MySQL database. We then built RESTful web services, EJB classes, which communicated with MySQL through the entities. The RESTful web services, part of a Java Web Application, were deployed to GlassFish.
Using that post’s RESTful web services, here is a quick example of how easy it is to use the free, open-source edition of soapUI to test those services. Start by locating the address of the RESTful service’s WADL. The WADL address is displayed in the upper left corner of the NetBeans’ browser-based test page, shown in the earlier post.
Next, create a new soapUI project. Give the project the WADL address and a project name.
Using the WADL, soapUI will create sample HTTP Request for each service resource’s methods (left-side of screen). Populating the sample request with any required input parameters, you make an HTTP Request to the service’s method.
In this first example, I call the Actor resource’s ‘findAll’ method using an HTTP GET method. The call to ‘http://localhost:8080/MySQLDemoService/webresources/com.mysql.entities.actor’ results in an HTTP Response with the list of Actor objects, mapped (serialized) to JSON (right-side of screen).
In this second example, I call the Film resource’s ‘Id’ method, to locate a single Film object, using the HTTP GET method. The call to ‘http://localhost:8080/MySQLDemoService/webresources/com.mysql.entities.film/719’ results in an HTTP Response with a single Film object, identified by Id 719. This time the object is marshalled to XML, instead of JSON.
Duplicating Your Raspberry Pi’s SDHC Card
Posted by Gary A. Stafford in Software Development on February 12, 2013
There are a few reasons you might want to duplicate (clone/copy) your Raspberry Pi’s Secure Digital High-Capacity (SDHC) card. I had two: a backup, and a second Raspberry Pi. I had spent untold hours installing and configuring software on my Raspberry Pi with Java, OpenCV, Motion, etc. Having a backup of all my work seemed like a good idea.
The second reason was a second Raspberry Pi. I wanted to set up a second Raspberry Pi, but didn’t want to spend the time to duplicate my previous efforts. Nor could I probably ever duplicate the first Pi’s configuration exactly. To ensure consistency across multiple Raspberry Pis, duplicating my first Raspberry Pi’s SDHC card made a lot of sense.
I found several posts on the web about duplicating an SDHC card. One of the best articles was on the PIXHAWK website. It only took me a few simple steps to backup my original 8 GB SDHC card, and then create a clone by copying the backup to a new 8 GB SDHC card, as follows:
1) Remove the original SDHC card from the Raspberry Pi and insert it into a card reader on your computer. I strongly suggest locking the card to protect it against any mistakes while backing up.
2) Locate where the SDHC card is mounted on your computer. This can be done using GParted, or in a terminal window, using the ‘blkid’ (block device attributes) command. My Raspberry Pi’s SDHC card, with its three separate partitions was found at ‘/dev/sdb’.
3) Use the ‘dd’ (convert and copy a file) command to duplicate the contents of the SDHC card to your computer. This can take a while and there is no progress bar. The command I used to back up the card to my computer’s $HOME directory was:
sudo dd if=/dev/sdb of=~/sdhc-card-bu.bin
4) Unmount and unlock the original SDHC card. Mount the new SDHC card. It should mount in the same place.
5) Reverse the process by copying the backup file, ‘sdhc-card-bu.bin’, to the new SDHC card. Again, this can take a while and there is no progress bar. The command I used was:
sudo dd if=~/sdhc-card-bu.bin of=/dev/sdb
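One optional trick, since ‘dd’ gives no progress bar: from a second terminal, you can send the running ‘dd’ process the USR1 signal, and GNU dd will print how many bytes it has copied so far:

# In another terminal, ask the running dd to report its progress
sudo pkill -USR1 -x dd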
Using ‘dd’ backs up and restores the entire SDHC card, partitions and all. I was able to insert the card into a brand new Raspberry Pi and boot it up without any problems.
Obviously, there are some things you may want to change on a cloned Raspberry Pi. For example, you should change the cloned Raspberry Pi’s host name, so it doesn’t conflict with the original Raspberry Pi on the network. This is easily done:
sudo nano /etc/hostname
sudo /etc/init.d/hostname.sh start
Also, changing the cloned Raspberry Pi’s root password is a wise idea for both security and sanity, especially if you have more than one Pi on your network. This guarantees you know which one you are logging into. This is easily done using the ‘passwd’ command:
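sudo passwd root

(A minimal example; this prompts for a new root password on the clone. If you normally log in as the default ‘pi’ user, run ‘passwd’ on its own to change that account’s password instead.)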