Posts Tagged Linux

Docker Enterprise Edition: Multi-Environment, Single Control Plane Architecture for AWS


Designing a successful, cloud-based containerized application platform requires a balance of performance and security with cost, reliability, and manageability. Ensuring that a platform meets all functional and non-functional requirements, while remaining within budget and easily maintainable, can be challenging.

As Cloud Architect and DevOps Team Lead, I recently participated in the development of two architecturally similar, lightweight, cloud-based containerized application platforms. From the start, both platforms were architected to maximize security and performance, while minimizing cost and operational complexity. The latter platform was built on AWS with Docker Enterprise Edition.

Docker Enterprise Edition

Released in March of this year, Docker Enterprise Edition (Docker EE) is a secure, full-featured container-based management platform. There are currently eight versions of Docker EE, available for Windows Server, Azure, AWS, and multiple Linux distros, including RHEL, CentOS, Ubuntu, SUSE, and Oracle.

Docker EE is one of several production-grade container orchestration Platforms as a Service (PaaS); other container platforms in this category include Docker Community Edition (CE), Kubernetes, Apache Mesos, Mesosphere Enterprise DC/OS, Rancher, and Red Hat OpenShift.

Docker Community Edition (CE), Kubernetes, and Apache Mesos are free and open-source. Some providers, such as Rancher Labs, offer enterprise support for an additional fee. Cloud-based services, such as Red Hat OpenShift Online, AWS, GCE, and ACS, charge typical monthly usage fees. Docker EE, similar to Mesosphere Enterprise DC/OS and Red Hat OpenShift, is priced on a per-node, per-year subscription model.

Docker EE is currently offered in three subscription tiers: Basic, Standard, and Advanced. Additionally, Docker offers Business Day and Business Critical support. Docker EE's Advanced Tier adds several significant features, including secure multi-tenancy with node-based isolation, as well as image security scanning and continuous vulnerability scanning as part of Docker EE's Docker Trusted Registry.

Architecting for Affordability and Maintainability

Building an enterprise-scale application platform, using public cloud infrastructure, such as AWS, and a licensed Containers-as-a-Service (CaaS) platform, such as Docker EE, can quickly become complex and costly to build and maintain. Based on current list pricing, the cost of a single Linux node ranges from USD 75 per month for basic support, up to USD 300 per month for Docker Enterprise Edition Advanced with Business Critical support. Although cost is relative to the value generated by the application platform, architects should nonetheless always strive to avoid unnecessary complexity and cost.

Recurring operational costs, such as licensed software subscriptions, support contracts, and monthly cloud-infrastructure charges, are often overlooked by project teams during the build phase. Accurately forecasting the recurring costs of a fully functional Production platform, under expected normal load, is essential. Teams often overlook how Docker image registries, databases, data lakes, and data warehouses quickly swell, inflating the monthly cloud-infrastructure charges to maintain the platform. The need to control cloud costs has led to the growth of third-party cloud management solutions, such as CloudCheckr Cloud Management Platform (CMP).

Shared Docker Environment Model

Most software development projects require multiple environments in which to continuously develop, test, demonstrate, stage, and release code. Creating separate environments, replete with their own Docker EE Universal Control Plane (aka Control Plane or UCP), Docker Trusted Registry (DTR), AWS infrastructure, and third-party components, would guarantee a high level of isolation and performance. However, replicating all elements in each environment would add considerable build and run costs, as well as unnecessary complexity.

On both recent projects, we chose to create a single AWS Virtual Private Cloud (VPC), which contained all of the non-production environments required by our project teams. In parallel, we built an entirely separate Production VPC for the Production environment. I’ve seen this same pattern repeated with Red Hat OpenStack and Microsoft Azure.

Production

Isolating Production from the lower environments is essential to ensure security, and to prevent non-production traffic from impacting the performance of Production. Corporate compliance and regulatory policies often dictate complete Production isolation. Separate infrastructure, security appliances, role-based access controls (RBAC), configuration and secret management, and encryption keys and SSL certificates are all required.

For complete separation of Production, different AWS accounts are frequently used. Separate AWS accounts provide separate billing, usage reporting, and AWS Identity and Access Management (IAM), amongst other advantages.

Performance and Staging

Unlike Production, there are few reasons to completely isolate lower environments from one another. The exception I’ve encountered is Performance and Staging. These two environments are frequently separated from other environments to ensure the accuracy of performance testing and release staging activities. Performance testing, in particular, can generate enormous load on systems, which, if not isolated, will impair adjacent environments, applications, and monitoring systems.

On a few recent projects, to reduce cost and complexity, we repurposed the UAT environment for performance testing, once user-acceptance testing was complete. Performance testing was conducted during off-peak development and testing periods, with access to adjacent environments blocked.

The multi-purpose UAT environment further served as a Staging environment. Applications were deployed and released to the UAT and Performance environments, following a nearly identical process to the one used for Production. Hotfixes to Production were also tested in this environment.

Example of Shared Environments

To demonstrate how to architect a shared non-production Docker EE environment, which minimizes cost and complexity, let’s examine the example shown below. In the example, built on AWS with Docker EE, there are four typical non-production environments (CI/CD, Development, Test, and UAT) and one Production environment.

[Diagram: the shared non-production environments and the separate Production environment, built on AWS with Docker EE]

In the example, there are two separate VPCs, the Production VPC, and the Non-Production VPC. There is no reason to configure VPC Peering between the two VPCs, as there is no need for direct communication between the two. Within the Non-Production VPC, to the left in the diagram, there is a cluster of three Docker EE UCP Manager EC2 nodes, a cluster of three DTR Worker EC2 nodes, and the four environments, consisting of varying numbers of EC2 Worker nodes. Production, to the right of the diagram, has its own cluster of three UCP Manager EC2 nodes and a cluster of six EC2 Worker nodes.

Single Non-Production UCP

As a primary means of reducing cost and complexity, in the example, a single minimally-sized Docker EE UCP cluster of three Manager nodes orchestrates activities across all four non-production environments. The alternative, a separate UCP cluster for each environment, would mean nine more Manager nodes to configure and maintain.

The UCP users, teams, organizations, access controls, Docker Secrets, overlay networks, and other UCP features, for all non-production environments, are managed through the single Control Plane. All deployments to all the non-production environments, from the CI/CD server, are performed through the single Control Plane. Each UCP Manager node is deployed to a different AWS Availability Zone (AZ) to ensure high-availability.

Shared DTR

As another means of reducing cost and complexity, in the example, a Docker EE DTR cluster of three Worker nodes contains all Docker image repositories. Both the non-production and the Production environments use this DTR as a secure source of all Docker images. Not having to replicate image repositories, access controls, and infrastructure, or figure out how to migrate images between two separate DTR clusters, is a significant time, cost, and complexity savings. Each DTR Worker node is also deployed to a different AZ to ensure high-availability.

Sharing a single DTR between non-production and Production is a decision your project team needs to weigh carefully; it comes with inherent availability and security risks, which should be understood in advance.

Separate Non-Production Worker Nodes

In the shared non-production environments example, each environment has dedicated AWS EC2 instances configured as Docker EE Worker nodes. The number of Worker nodes is determined by the requirements for each environment, as dictated by the project’s Development, Testing, Security, and DevOps teams. Like the UCP and DTR clusters, each Worker node, within an individual environment, is deployed to a different AZ to ensure high-availability and mimic the Production architecture.

Minimizing the number of Worker nodes in each environment, as well as the type and size of each EC2 node, offers a significant potential cost and administrative savings.

Separate Environment Ingress

In the example, the UCP, DTR, and each of the four environments is accessed through separate URLs, using AWS Hosted Zone CNAME records (subdomains).
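As a purely hypothetical illustration (the post does not show the actual domain names), the subdomain scheme might look something like the following, with each CNAME resolving to its own private ELB:

ucp.nonprod.example.com      -> ELB for the Non-Production UCP
dtr.nonprod.example.com      -> ELB for the DTR
app.dev.example.com          -> ELB for the Development environment
app.test.example.com         -> ELB for the Test environment
app.uat.example.com          -> ELB for the UAT environment
app.example.com              -> ELB for the Production environment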


Encrypted HTTPS traffic is routed through a series of security appliances, depending on traffic type, to individual private AWS Elastic Load Balancers (ELB): one for each UCP, the DTR, and each of the environments. Each ELB load-balances traffic to the Docker EE nodes associated with that specific traffic type. All firewalls, ELBs, and the UCP and DTR are secured with a high-grade wildcard SSL certificate.


Separate Data Sources

In the shared non-production environments example, there is one Amazon Relational Database Service‎ (RDS) instance in non-Production and one in Production. Both RDS instances are replicated across multiple Availability Zones. Within the single shared non-production RDS instance, there are four separate databases, one per non-production environment. This architecture sacrifices the potential performance of separate RDS instances in exchange for lower cost and complexity.

Maintaining Environment Separation

Node Labels

To obtain sufficient environment separation while using a single UCP, each Docker EE Worker node is tagged with an environment node label. The node label indicates which environment the Worker node is associated with. For example, in the screenshot below, a Worker node is assigned to the Development environment by tagging it with the key of environment and the value of dev.

[Screenshot: a Worker node tagged with the environment node label in UCP]
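Node labels can also be applied from the command line against a UCP Manager node, using the standard Docker CLI. A minimal sketch (the node name is hypothetical):

docker node update --label-add environment=dev ip-10-0-1-10.ec2.internal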

* The Docker EE screens shown here are from UCP 2.1.5, not the recently released 2.2.x, which has an updated UI appearance.

Each service’s Docker Compose file uses deployment placement constraints, which indicate where Docker should or should not deploy services. In the hello-world Docker Compose file example below, the node.labels.environment constraint is set to the ENVIRONMENT variable, which is set during container deployment by the CI/CD server. This constraint directs Docker to only deploy the hello-world service to nodes which contain the placement constraint of node.labels.environment, whose value matches the ENVIRONMENT variable value.
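The original Compose file was embedded as a gist; below is a minimal sketch of what such a file might look like. The registry host, image name, and replica count are assumptions, and it presumes a Docker version that interpolates environment variables in stack files (older versions of docker stack deploy required pre-processing the file, for example with docker-compose config):

version: "3"
services:
  hello-world:
    image: dtr.example.com/demo/hello-world:59
    deploy:
      replicas: 2
      placement:
        constraints:
          # deploy only to Worker nodes whose environment label matches
          - node.labels.environment == ${ENVIRONMENT}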

Deploying from CI/CD Server

The ENVIRONMENT value is set as an environment variable, which is then used by the CI/CD server, running a docker stack deploy or a docker service update command, within a deployment pipeline. Below is an example of how to use the environment variable as part of a Jenkins pipeline as code Jenkinsfile.
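The author's actual Jenkinsfile was embedded as a gist; the following scripted-pipeline sketch only illustrates the idea, and the stage, stack, and file names are assumptions:

node {
  stage('Deploy to Test') {
    // ENVIRONMENT drives the node.labels.environment placement constraint
    withEnv(['ENVIRONMENT=test']) {
      sh 'docker stack deploy --compose-file docker-compose.yml test'
    }
  }
}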

Centralized Logging and Metrics Collection

Centralized logging and metrics collection systems are used for application and infrastructure dashboards, monitoring, and alerting. In the shared non-production environment examples, the centralized logging and metrics collection systems are internal to each VPC, but reside on separate EC2 instances and are not registered with the Control Plane. In this way, the logging and metrics collection systems should not impact the reliability, performance, and security of the applications running within Docker EE. In the example, Worker nodes run a containerized copy of fluentd, which collects and pushes logs to ELK’s Elasticsearch.

Logging and metrics collection systems could also be supplied by external cloud-based SaaS providers, such as Loggly, Sysdig, and Datadog, or by the platform’s cloud-provider, such as Amazon CloudWatch.

With four environments running multiple containerized copies of each service, figuring out which log entry came from which service instance, requires multiple data points. As shown in the example Kibana UI below, the environment value, along with the service name and container ID, as well as the git commit hash and branch, are added to each log entry for easier troubleshooting. To include the environment, the value of the ENVIRONMENT variable is passed to Docker’s fluentd log driver as an env option. This same labeling method is used to tag metrics.
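As a sketch of the fluentd labeling described above, a service might be created with the env log option as follows (the fluentd address and image name are assumptions):

docker service create \
  --name hello-world \
  --env ENVIRONMENT=dev \
  --log-driver fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt env=ENVIRONMENT \
  dtr.example.com/demo/hello-world:59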

[Screenshot: Kibana UI showing log entries tagged with environment, service name, container ID, git commit hash, and branch]

Separate Docker Service Stacks

For further environment separation within the single Control Plane, all services that comprise an application running within a single environment are deployed as part of the same Docker service stack. Multiple stacks may be required to support multiple, distinct applications within the same environment.

For example, in the screenshot below, a hello-world service container, built with a Docker image, tagged with build 59 of the Jenkins continuous integration pipeline, is deployed as part of both the Development (dev) and Test service stacks. The CD and UAT service stacks each contain different versions of the hello-world service.

[Screenshot: the hello-world service deployed as part of multiple environment service stacks in UCP]

Separate Docker Overlay Networks

For additional environment separation within the single non-production UCP, all Docker service stacks associated with an environment, reside on the same Docker overlay network. Overlay networks manage communications among the Docker Worker nodes, enabling service-to-service communication for all services on the same overlay network while isolating services running on one network from services running on another network.

In the example screenshot below, the hello-world service, a member of the test service stack, is running on the test_default overlay network.

[Screenshot: the test_default overlay network in UCP]

Cleaning Up

Having distinct environment-centric Docker service stacks and overlay networks makes it easy to clean up an environment, without impacting adjacent environments. Both service stacks and overlay networks can be removed to clear an environment’s contents.
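For example, removing the Test environment's stack and overlay network might look like the following (the stack and network names match the earlier screenshots; note that docker stack rm also removes any networks the stack itself created):

docker stack rm test
docker network rm test_default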

Separate Performance Environment

In the alternative example below, a Performance environment has been added to the Non-Production VPC. To ensure a higher level of isolation, the Performance environment has its own UCP, RDS, and ELBs. The Performance environment shares the DTR, as well as the security, logging, and monitoring components, with the rest of the non-production environments.

Below, the Performance environment has half the number of Worker nodes as Production. Performance results can then be extrapolated to estimate expected Production performance on the full number of nodes. Alternately, the number of nodes can be scaled up temporarily to match Production, then scaled back down to a minimum after testing is complete.

[Diagram: the alternative example, with a separate Performance environment within the Non-Production VPC]

Shared DevOps Tooling

All environments leverage shared Development and DevOps resources, deployed to a separate VPC. Resources include Agile Application Lifecycle Management (ALM), such as JIRA or CA Agile Central, source control repository management (SCM), such as GitLab or Bitbucket, binary repository management, such as Artifactory or Nexus, and a CI/CD solution, such as Jenkins, TeamCity, or Bamboo.

From the DevOps VPC, Docker images are pushed to and pulled from the DTR in the Non-Production VPC. Deployments of container-based applications are executed from the DevOps VPC CI/CD server to the non-production, Performance, and Production UCPs. Separate DevOps CI/CD pipelines and access controls are essential in maintaining the separation of the non-production and Production environments.

[Diagram: shared DevOps tooling VPC alongside the Non-Production and Production VPCs]

Complete Platform

Several common components found in a Docker EE cloud-based AWS platform were discussed in this post. However, a complete AWS application platform has many more moving parts. Below is a comprehensive list of components, including DevOps tooling, organized into two categories: 1) common components that can be potentially shared across the non-production environments to save cost and complexity, and 2) components that should be replicated in each non-production environment for security and performance.

Shared Non-Production Components:

  • AWS
    • Virtual Private Cloud (VPC), Region, Availability Zones
    • Route Tables, Network ACLs, Internet Gateways
    • Subnets
    • Some Security Groups
    • IAM Groups, Users, Roles, Policies (RBAC)
    • Relational Database Service‎ (RDS)
    • ElastiCache
    • API Gateway, Lambdas
    • S3 Buckets
    • Bastion Servers, NAT Gateways
    • Route 53 Hosted Zone (Registered Domain)
    • EC2 Key Pairs
    • Hardened Linux AMI
  • Docker EE
    • UCP and EC2 Manager Nodes
    • DTR and EC2 Worker Nodes
    • UCP and DTR Users, Teams, Organizations
    • DTR Image Repositories
    • Secret Management
  • Third-Party Components/Products
    • SSL Certificates
    • Security Components: Firewalls, Virus Scanning, VPN Servers
    • Container Security
    • End-User IAM
    • Directory Service
    • Log Aggregation
    • Metric Collection
    • Monitoring, Alerting
    • Configuration and Secret Management
  • DevOps
    • CI/CD Pipelines as Code
    • Infrastructure as Code
    • Source Code Repositories
    • Binary Artifact Repositories

Isolated Non-Production Components:

  • AWS
    • Route 53 Hosted Zones and Associated Records
    • Elastic Load Balancers (ELB)
    • Elastic Compute Cloud (EC2) Worker Nodes
    • Elastic IPs
    • ELB and EC2 Security Groups
    • RDS Databases (Single RDS Instance with Separate Databases)

All opinions in this post are my own and not necessarily the views of my current employer or their clients.



Shell Script to Automate Creation of Swap File on Linux

Use shell scripts to create and remove Linux swap files on the fly.

Introduction

Recently, while scripting the installation of Oracle’s WebLogic Server, I ran into an issue with a lack of swap space. I was automating the installation of WebLogic in Silent Mode on a Vagrant VM. The VM was built from an Ubuntu Cloud Image of Ubuntu Server. Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud platforms such as Amazon EC2, OpenStack, Windows, LXC, and Vagrant. The Ubuntu image did not have the minimum 512 MB of swap space required by the WebLogic installer.


Swap Space Error During WebLogic Install

Swap

According to Gary Sims on Linux.com, “Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space is the amount of virtual memory available.”

Scripts

To create the required swap space, I could create either a swap partition or a swap file. I chose to create a swap file, using a shell script. Actually, there are two scripts. The first script creates a 512 MB swap file as a pre-step in the automated installation of WebLogic. Once the WebLogic installation is complete, the second, optional script may be run to remove the swap file. ArchWiki (wiki.archlinux.org) has an excellent post on swap space I referenced to build my first script.
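The original script was embedded as a gist; below is a minimal sketch of the first script, following the ArchWiki approach referenced above:

#!/bin/sh
# create_swap.sh - create and enable a 512 MB swap file
dd if=/dev/zero of=/swapfile bs=1M count=512  # allocate 512 MB
chmod 600 /swapfile                           # restrict access to root
mkswap /swapfile                              # format the file as swap
swapon /swapfile                              # enable the swap file
swapon -s                                     # display the swap summary
free -m                                       # show memory and swap totals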

Use a ‘sudo ./create_swap.sh’ command to create the swap file and display the results in the terminal.


Creating the Swap File


Installing WebLogic with a Swap File

If the swap file is no longer required, the second script will remove it. Use a ‘sudo ./remove_swap.sh’ command to remove the swap file and display the results in the terminal. LinuxQuestions.org has a good Forum post on removing swap files I referenced to build my second script.
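A corresponding sketch of the second script:

#!/bin/sh
# remove_swap.sh - disable and delete the swap file
swapoff /swapfile   # disable the swap file
rm -f /swapfile     # delete it
swapon -s           # confirm it is gone
free -m             # show memory and swap totals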


Removing the Swap File



Updating Ubuntu Linux to the Latest JDK

Introduction

If you are a Java developer, new to the Linux environment, installing and configuring Java updates can be a bit daunting. In the following post, we will update a VirtualBox VM running Canonical’s popular Ubuntu Linux operating system. The VM currently contains an earlier version of Java. We will update the VM to the latest release of Java.

All code for this post is available as Gists on GitHub.com, including a complete install script, explained at the end of this post.

Current Version of Java?

First, we will use the ‘update-alternatives --display java’ command to review all the versions of Java currently installed on the VM. We can have multiple copies installed, but only one will be configured and active. We can verify the active version using the ‘java -version’ command.


Check Current Version of Java

In the above example, the 1.7.0_17 JDK version of Java is configured and active. That version is located in the ‘/usr/lib/jvm/jdk1.7.0_17’ subdirectory. There are two other Java versions also installed but not active, an Oracle 1.7.0_17 JRE version and an older 1.7.0_04 JDK version. These three versions are referred to as ‘alternatives’, thus the ‘alternatives’ command. By selecting an alternative version of Java, we control which java binary executable the system calls when the ‘java’ command is executed. In many software development environments, you may need different versions of Java, depending on each client project’s technology stack.

Alternatives

According to About.com, alternatives ‘make it possible for several programs fulfilling the same or similar functions to be installed on a single system at the same time. A generic name in the filesystem is shared by all files providing interchangeable functionality. The alternatives system and the system administrator together determine which actual file is referenced by this generic name. The generic name is not a direct symbolic link to the selected alternative. Instead, it is a symbolic link to a name in the alternatives directory, which in turn is a symbolic link to the actual file referenced.’

We can see this system at work by changing to our ‘/usr/bin’ directory. This directory contains the majority of binary executables on the system. Executing an ‘ls -Al /usr/bin/* | grep -e java -e jar -e appletviewer -e mozilla-javaplugin.so’ command, we see that each Java executable is actually a symbolic link to the alternatives directory, not a binary executable file.


Java Symbolic Links to Alternatives Directory

To find out all the commands which support alternatives, you can use the ‘update-alternatives --get-selections’ command. We can use a similar command to get just the Java commands, ‘update-alternatives --get-selections | grep -e java -e jar -e appletview -e mozilla-javaplugin.so’.


Java-Related Executable Alternatives (view after update)

Computer Architecture?

Next, we need to determine the computer processor architecture of the VM. The architecture determines which version of Java to download. The machine that hosts our VM may have a 64-bit architecture (also known as x86-64, x64, and amd64), while the VM might have a 32-bit architecture (also known as IA-32 or x86). Trying to install 64-bit versions of software onto 32-bit VMs is a common mistake.

The VM’s architecture was originally displayed with the ‘java -version’ command, above. To confirm the 64-bit architecture we can use either the ‘uname -a’ or ‘arch’ command.


Find Your Processor’s Architecture

JRE or JDK?

One last decision. The purpose of the Java Runtime Environment (JRE) is to run Java applications. The JRE covers most end-users’ needs. The purpose of the Java Development Kit (JDK) is to develop Java applications. The JDK includes a complete JRE, plus tools for developing, debugging, and monitoring Java applications. As developers, we will choose to install the JDK.

Download Latest Version

In the above screen grab, you see our VM is running a 64-bit version of Ubuntu 12.04.3 LTS (Precise Pangolin). Therefore, we will download the most recent version of the 64-bit Linux JDK. We could choose either Oracle’s commercial version of Java or the OpenJDK version. According to Oracle, the ‘OpenJDK is an open-source implementation of the Java Platform, Standard Edition (Java SE) specifications’. We will choose the latest commercial version of Oracle’s JDK. At the time of this post, that is JDK 7u45 (aka 1.7.0_45-b18).

The Linux file formats available for download, are a .rpm (Red Hat Package Manager) file and a .tar.gz file (aka tarball). For this post, we will download the tarball, the ‘jdk-7u45-linux-x64.tar.gz’ file.


Current JDK Downloads

Extract Tarball

We will use the command ‘sudo tar -zxvf jdk-7u45-linux-x64.tar.gz -C /usr/lib/jvm’, to extract the files directly to the ‘/usr/lib/jvm’ folder. This folder contains all previously installed versions of Java. Once the tarball’s files are extracted, we should see a new directory containing the new version of Java, ‘jdk1.7.0_45’, in the ‘/usr/lib/jvm’ directory.


Versions of Java on the VM

Installation

There are two configuration modes available in the alternatives system, manual and automatic mode. According to die.net, ‘when a link group is in manual mode, the alternatives system will not (automatically) make any changes to the system administrator’s settings. When a link group is in automatic mode, the alternatives system ensures that the links in the group point to the highest priority alternatives appropriate for the group.’

We will first install and configure the new version of Java in manual mode. To install the new version of Java, we run ‘update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_45/jre/bin/java 4’. Note the last parameter, ‘4’, the priority. Why ‘4’? If we chose to use automatic mode, as we will a little later, we want our new Java version to have the highest numeric priority. In automatic mode, the system looks at the priority to determine which version of Java it will run. In the post’s first screen grab, note each of the three installed Java versions had different priorities: 1, 2, and 3. If we want to use automatic mode later, we must set a higher priority on our new version of Java, ‘4’ or greater. We will set it now as part of the install, so we can use it later in automatic mode.

Configuration

First, to configure (activate) the new Java version in alternatives manual mode, we will run ‘update-alternatives --config java’. We are prompted to choose from the list of alternatives for the java executable, which has now grown from three to four choices. Choose the new JDK version of Java, which we just installed.

That’s it. The system has been instructed to use the version we selected as the Java alternative. Now, when we run the ‘java’ command, the system will access the newly configured JDK version. To verify, we rerun the ‘java -version’ command. The version of Java has changed since the first time we ran the command (see first screen grab).


Install and Configure New Version of Java

Now let’s switch to automatic mode. To switch to automatic mode, we use ‘update-alternatives --auto java’. Notice how the mode has changed in the screen grab below and our version with the highest priority is selected.


Switching to Auto

Other Java Command Alternatives

We will repeat this process for the other Java-related executables. Many are part of the JDK Tools and Utilities. Java-related executables include the Javadoc Tool (javadoc), Java Web Start (javaws), Java Compiler (javac), and Java Archive Tool (jar), as well as javap, javah, appletviewer, and the Java Plugin for Linux. Note that if you are updating a headless VM, you would not have a web browser installed; therefore, it would not be necessary to configure the mozilla-javaplugin.
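Rather than typing each install command by hand, the repetition can be scripted. A sketch, assuming the JDK path used in this post (the existence check skips tools a given JDK build may not include):

JDK_HOME=/usr/lib/jvm/jdk1.7.0_45
for tool in javac javadoc javaws javap javah jar appletviewer; do
  if [ -e $JDK_HOME/bin/$tool ]; then
    sudo update-alternatives --install /usr/bin/$tool $tool $JDK_HOME/bin/$tool 4
  fi
done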


Configure Java Plug-in for Linux


Verify Java Version is Latest

JAVA_HOME

Many applications which need the JDK/JRE to run look up the JAVA_HOME environment variable for the location of the Java compiler/interpreter. The most common approach is to hard-wire the JAVA_HOME variable to the current JDK directory. Use the ‘sudo nano ~/.bashrc’ command to open the bashrc file. Add or modify the following line, ‘export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45’. Remember to also add the java executable path to the PATH variable, ‘export PATH=$PATH:$JAVA_HOME/bin’. Lastly, execute a ‘bash --login’ command for the changes to be visible in the current session.
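Together, the added lines in ~/.bashrc look like the following (assuming the JDK path used throughout this post):

export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
export PATH=$PATH:$JAVA_HOME/bin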

Alternately, we could use a symbolic link to ‘default-java’. There are several good posts on the Internet on using the ‘default-java’ link.

Complete Scripted Example

Below is a complete script to install or update the latest JDK on Ubuntu. Simply download the tarball to your home directory, along with the below script, available on GitHub. Execute the script with a ‘sh install_java_complete.sh’. All java executables will be installed and configured as alternatives in automatic mode. The JAVA_HOME and PATH environment variables will also be set.
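The actual script is available on GitHub; the sketch below only summarizes the steps from this post in script form. The tarball name, JDK path, and tool list are assumptions:

#!/bin/sh
# install_java_complete.sh - sketch: extract the JDK tarball, register the
# executables as alternatives in automatic mode, and set JAVA_HOME and PATH

JDK_VERSION=jdk1.7.0_45
TARBALL=jdk-7u45-linux-x64.tar.gz

# extract the tarball alongside any previously installed versions
sudo mkdir -p /usr/lib/jvm
sudo tar -zxf ~/$TARBALL -C /usr/lib/jvm

# register each executable with the highest priority (4), then switch to auto mode
for tool in java javac javadoc javaws javap javah jar appletviewer; do
  if [ -e /usr/lib/jvm/$JDK_VERSION/bin/$tool ]; then
    sudo update-alternatives --install /usr/bin/$tool $tool \
      /usr/lib/jvm/$JDK_VERSION/bin/$tool 4
    sudo update-alternatives --auto $tool
  fi
done

# set JAVA_HOME and PATH for future sessions
echo "export JAVA_HOME=/usr/lib/jvm/$JDK_VERSION" >> ~/.bashrc
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> ~/.bashrc

java -version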

Below is a test of the script on a fresh Vagrant VM of an Ubuntu Cloud Image of Ubuntu Server 13.10 (Saucy Salamander). Ubuntu Cloud Images are pre-installed disk images that have been customized by Ubuntu engineering to run on cloud platforms such as Amazon EC2, OpenStack, Windows, LXC, and Vagrant. The script was able to successfully install and configure the JDK, as well as the JAVA_HOME and PATH environment variables.


Test New Install on Vagrant Ubuntu VM

Deleting Old Versions?

Before deciding to completely delete previously installed versions of Java from the ‘/usr/lib/jvm’ directory, ensure there are no links to those versions from OS and application configuration files. Many applications, such as NetBeans, eclipse, soapUI, and WebLogic Server, may contain their own Java configurations. If they don’t use the JAVA_HOME variable, they should be updated to reflect the current active Java version when possible.

Resources

Ubuntu Linux: Install Latest Oracle Java 7

update-alternatives(8) – Linux man page

Configuring different JDKs with alternatives

Ubuntu Documentation: Java



Travel-Size Wireless Router for Your Raspberry Pi

Use a low-cost nano-size wireless router to connect to your Raspberry Pi while traveling. Set up your own private wireless network in your vehicle, hotel, or coffee shop.

Introduction

Recently, I purchased a USB-powered wireless router to use with my Raspberry Pi when traveling. In an earlier post, Raspberry Pi-Powered Dashboard Video Camera Using Motion and FFmpeg, I discussed the use of the Raspberry Pi, combined with a webcam, Motion, and FFmpeg, to create a low-cost dashboard video camera. Like many, I find one of the big challenges with the Raspberry Pi is how to connect and interact with it. Being in my car, and usually out of range of my home’s wireless network, except maybe in the garage, this becomes even more of an issue. That’s where adding an inexpensive travel-size router to my vehicle comes in handy.

I chose the TP-LINK TL-WR702N Wireless N150 Travel Router, sold by Amazon. The TP-LINK router, described as ‘nano size’, measures only 2.2 inches square by 0.7 inches wide. It has several modes of operation, including as a router, access point, client, bridge, or repeater. It operates at wireless speeds up to 150 Mbps and is compatible with IEEE 802.11b/g/n networks. It supports several common network security protocols, including WEP, WPA/WPA2, and WPA-PSK/WPA2-PSK encryption. For $22 USD, what more could you ask for?

TP-LINK Nano Router

My goal with the router was to do the following:

  1. Have the Raspberry Pi auto-connect to the new TP-LINK router’s wireless network when in range, just like my home network.
  2. Since I might still be in range of my home network, have the Raspberry Pi try to connect to the TP-LINK first, before falling back to my home network.
  3. Ensure the network was relatively secure, since I would be exposed to many more potential threats when traveling.

My vehicle has two power outlets. I plug my Raspberry Pi into one outlet and the router into the other. You could daisy-chain the router off the Pi. However, my Pi’s ports are in use by the USB wireless adapter and the USB webcam. Using the TP-LINK router, I can easily connect to the Raspberry Pi with my mobile phone or tablet, using an SSH client.


Using Fing to Locate the Pi on the TP-LINK Wireless Network

When I arrive at my destination, I log into the Pi and do a proper shutdown. This activates my shutdown script (see my last post), which moves the newly created Motion/FFmpeg time-lapse dash-cam videos to a secure folder on my Pi, before powering down.


Using SSH Terminal for iOS to Shutdown the Pi

Of course there are many other uses for the router. For example, I can remove the Pi and router from my car and plug it back in at the hotel while traveling, or power the router from my laptop while at work or the coffee shop. I now have my own private wireless network wherever I am to use the Raspberry Pi, or work with other users. Remember the TP-LINK can act as a router, access point, client, bridge, or a repeater.


The Raspberry Pi and Router both fit in a Small Container for Travel

Network Security

Before configuring your Raspberry Pi, the first thing you should do is change all the default security related settings for the router. Start with the default SSID and the PSK password. Both these default values are printed right on the router. That’s motivation enough to change!

TP-LINK Administration Console 2

Additionally, change the default IP address of the router and the username and password for the browser-based Administration Console.

TP-LINK Administration Console

Lastly, pick the most secure protocol possible. I chose ‘WPA-PSK/WPA2-PSK’. All these changes are done through the TP-LINK’s browser-based Administration Console.

Configuring Multiple Wireless Networks

In an earlier post, Installing a Miniature WiFi Module on the Raspberry Pi (w/ Roaming Enabled), I detailed the installation and configuration of a Miniature WiFi Module, from Adafruit Industries, on a Pi running Soft-float Debian “wheezy”. I normally connect my Pi to my home wireless network. I wanted to continue to do this in the house, but connect to the new router when traveling.

Based on the earlier post, I was already using Jouni Malinen’s wpa_supplicant, the WPA Supplicant for Linux, BSD, Mac OS X, and Windows with support for WPA and WPA2. This made network configuration relatively simple. If you use wpa_supplicant, your ‘/etc/network/interfaces’ file should look like the following. If you’re not familiar with configuring the interfaces file for wpa_supplicant, this post on NoWiresSecurity.com is a good starting point.

Interfaces File
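A sketch of a typical wheezy-era interfaces file for wpa_supplicant roaming with DHCP (your adapter names may differ):

auto lo
iface lo inet loopback

iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf

iface default inet dhcp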

Note that in this example, I am using DHCP for all wireless network connections. If you choose to use static IP addresses for any of the networks, you will have to change the interfaces file accordingly. Once you add multiple networks, configuring static IP addresses for each network becomes more complex. That is my next project…

First, I generated a new pre-shared key (PSK) for the router’s SSID configuration using the following command. Substitute your own SSID (‘your_ssid’) and passphrase (‘your_passphrase’).

wpa_passphrase your_ssid your_passphrase

Based on your SSID and passphrase, this command will generate a pre-shared key (PSK), similar to the following. Save or copy the PSK to the clipboard. We will need the PSK in the next step.

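The output resembles the following; the commented psk line shows the plaintext passphrase, which should be deleted before use, and the hex value below is a placeholder, not a real key:

network={
	ssid="your_ssid"
	#psk="your_passphrase"
	psk=<64-character hexadecimal key derived from the SSID and passphrase>
}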

Then, I modified my wpa_supplicant configuration file with the following command:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

I added the second network configuration, similar to the existing configuration for my home wireless network, using the newly generated PSK. Below is an example of what mine looks like (of course, not the actual PSKs).

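A sketch of such a configuration, with placeholder SSIDs and PSKs (the key-management settings are typical values, not necessarily the author's exact ones):

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="tplink_travel_ssid"
    psk=<psk generated with wpa_passphrase>
    key_mgmt=WPA-PSK
    priority=2
}

network={
    ssid="home_netgear_ssid"
    psk=<psk generated with wpa_passphrase>
    key_mgmt=WPA-PSK
    priority=1
}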

Depending on your Raspberry Pi and router configurations, your wpa_supplicant configuration will look slightly different. You may wish to add more settings. Don’t consider my example the absolute right way for your networks.

Wireless Network Priority

Note the priority of the TP-LINK router is set to 2, while my home NETGEAR router is set to 1. This ensures wpa_supplicant will attempt to connect to the TP-LINK network first, before attempting the home network. The higher number gets priority. The best resource I’ve found, which explains all the configuration options in detail, is here. In this example wpa_supplicant configuration file, priority is explained this way: ‘by default, all networks will get same priority group (0). If some of the networks are more desirable, this field can be used to change the order in which wpa_supplicant goes through the networks when selecting a BSS. The priority groups will be iterated in decreasing priority (i.e., the larger the priority value, the sooner the network is matched against the scan results). Within each priority group, networks will be selected based on security policy, signal strength, etc.’

Conclusion

If you want an easy, inexpensive, secure way to connect to your Raspberry Pi, in the vehicle or other location, a travel-size wireless router is a great solution. Best of all, configuring it for your Raspberry Pi is simple if you use wpa_supplicant.



Using a Startup Script to Save Motion/FFmpeg Videos and Images on The Raspberry Pi

Use a start-up script to overcome limitations of Motion/FFmpeg and save multiple Raspberry Pi dashboard camera timelapse videos and images, automatically.


Introduction

In my last post, Raspberry Pi-Powered Dashboard Video Camera Using Motion and FFmpeg, I demonstrated how the Raspberry Pi can be used as a low-cost dashboard video camera. One of the challenges I faced in that post was how to save the timelapse videos and individual images (frames) created by Motion and FFmpeg when the Raspberry Pi is turned on and off. Each time the car starts, the Raspberry Pi boots up, and Motion begins to run, the previous images and video, stored in the default ‘/tmp/motion/’ directory, are removed and new images and video created.

Take the average daily commute: we drive to and from work. Maybe we stop for a morning coffee, or stop at the store on the way home to pick up dinner. Maybe we use our car to go out for lunch. Our car starts, stops, starts, stops, starts, and stops. Our daily commute actually encompasses a series of small trips, and therefore multiple dash-cam timelapse videos. If you are only interested in keeping the latest timelapse video in case of an accident, then this may not be a problem. When the accident occurs, simply pull the SDHC card from the Raspberry Pi and copy the video and images off to your laptop.

However, if you are interested in capturing and preserving a series of dash-cam videos, such as in the daily commute example above, then the default behavior of Motion is insufficient. To preserve each video segment or series of images, we need a way to save the content created by Motion and FFmpeg before it is overwritten. In this post, I will present a solution to overcome this limitation.

The process involves the following steps:

  1. Change the default location where Motion stores timelapse videos and images to somewhere other than a temporary directory;
  2. Create a startup script that will move the video and images to a safe location when restarting the Pi;
  3. Configure the Pi’s Debian operating system to run this script at startup (and optionally shutdown), before Motion starts;

Sounds pretty simple. However, understanding how startup scripts work with Debian’s Init program, making sure the new move script runs before Motion starts, and knowing how to move a huge number of files, all required forethought.

Change Motion’s Default Location for Video and Images

To start, change the default location where Motion stores timelapse video and images, from ‘/tmp/motion/’ to a location outside the ‘/tmp’ directory. I chose to create a directory called ‘/motiontmp’. Make sure you set the permissions on the new ‘/motiontmp’ directory, so Motion can write to it:

sudo chmod -R 777 /motiontmp

To have Motion use this location, we need to modify the Motion configuration file:

sudo nano /etc/motion/motion.conf

Change the following setting (in bold below). Note when Motion starts for the first time, it will create the ‘motion’ sub folder inside ‘motiontmp’. You do not have to create it yourself.

# Target base directory for pictures and films
# Recommended to use absolute path. (Default: current working directory)
target_dir /motiontmp/motion

Motion Target Directory

Create the Startup Script to Move Video and Images

Next, create the new shell script that will run at startup to move Motion’s video and images. The script creates a timestamped folder in the new ‘/motiontmp’ directory for each series of images and video. The script then moves all files from the ‘motion’ directory to the new timestamped directory. Before moving, the script deletes any zero-byte jpegs, which are images that did not fully process prior to the Raspberry Pi being shut off when the car stopped. To create the new script, run the following command.

sudo nano /etc/init.d/motionStartup.sh

Copy the following contents into the script and save it.

#!/bin/sh
# /etc/init.d/motionStartup
#

### BEGIN INIT INFO
# Provides:          motionStartup
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Move motion files at startup.
# Description:       Move motion files at startup.
# X-Start-Before:    motion
### END INIT INFO

# Some things that run always
#touch /var/lock/motionStartup
logger -s "Script motionStartup called"

# Carry out specific functions when asked to by the system
case "$1" in
  start)
    logger -s "Script motionStartup started"
    TIMESTAMP=$(date +%Y%m%d%H%M%S | sed 's/ //g') # No spaces
    logger -s "Script motionStartup $TIMESTAMP"
    sudo mkdir /motiontmp/$TIMESTAMP || logger -s "Error mkdir start"
    find /motiontmp/motion/. -type f -size 0 -print0 -delete
    find /motiontmp/motion/. -maxdepth 1 -type f | \
        xargs -I '{}' sudo mv {} /motiontmp/$TIMESTAMP
    ;;
  stop)
    logger -s "Script motionStartup stopped"
    ;;
  *)
    echo "Usage: /etc/init.d/motionStartup {start|stop}"
    exit 1
    ;;
esac

exit 0

Note the ‘X-Start-Before’ setting at the top of the script (in bold). An explanation of this setting is found on the Debian Wiki website. According to the site, ‘There is no such standard-defined header, but there is a proposed extension implemented in the insserv package (since version 1.09.0-8). Use the X-Start-Before and X-Stop-After headers proposed by SuSe.’ To make sure you have a current version of ‘insserv’, you can run the following command:

dpkg -l insserv

Also, note how the files are moved by the script:

find /motiontmp/motion/. -maxdepth 1 -type f | \
        xargs -I '{}' sudo mv {} /motiontmp/$TIMESTAMP

It’s not as simple as using ‘mv *.*’ when you have a few thousand files. This will likely throw an ‘Argument list too long’ exception. According to one Stack Overflow post, the exception occurs because bash actually expands the asterisk to every matching file, producing a very long command line. Using ‘find’ combined with ‘xargs’ gets around this problem. The ‘xargs’ command splits up the list and issues several commands if necessary. This issue applies to several commands, including rm, cp, and mv.

Lastly, note the use of the ‘logger‘ commands throughout the script. These are optional and may be removed. I like to log the script’s progress for troubleshooting purposes. Using the above ‘logger’ commands, I can easily pinpoint issues by looking at the log with grep, such as:

tail -500 /var/log/messages | grep 'motionStartup' | grep 'logger:'

View of Log with Script Messages

You can test the script by running the following command:

/etc/init.d/./motionStartup.sh start

You should see a series of three messages output to the screen by the script, confirming the script is working. Note the new timestamped folder created by the script, below.

Testing the New Script

Below is an example of how the directory structure should look after a few videos are created by Motion, and the Raspberry Pi cycled off and on. You need to complete the rest of the steps in this post for this to work automatically.


Shutdown Script?

I know, the name of the post clearly says ‘Startup Script’. Well, a little tip: if you copy the code from the ‘start’ method and paste it in the ‘stop’ method, this now also works at shutdown. If you do a proper shutdown (like ‘sudo reboot’), the Raspberry Pi’s OS will call the script’s ‘stop’ method. The ‘start’ method is more useful for us in the car, where we may not be able to do a proper shutdown; we just turn the car off and kill power to the Pi. However, if you are shutting down from a mobile device via ssh, or using a micro keyboard and LCD monitor, the script will do its work on the way down.

Configure Debian OS to Run the New Startup Script

To have our new script run on startup, install it by running the following command:

sudo update-rc.d motionStartup.sh defaults

A full explanation of this command is too complex for this brief post. A good overview of creating startup scripts and installing them in Debian is found on the Debian Administration website. This is the source I used to begin understanding runlevels. There are also a few links at the end of the post. To tell which runlevel (state) you are running at, use the following command:

runlevel

To make sure the startup script was installed properly, run the following command. This will display the contents of each ‘rc*.d’ folder. Each folder corresponds to a runlevel: 0, 1, 2, etc. Each folder contains symbolic links to the actual scripts. The links are named in order of execution (S01…, S02…, S03…):

ls /etc/rc*.d

Look for the new script listed under the appropriate runlevel(s). The new script should be listed before ‘motion’, as shown below.

View of Runlevels

View of Runlevels 2

If for any reason you need to uninstall the new script (not delete/remove it), run the following command. This is not a common task, but it is necessary if you need to change the order of execution of the scripts or rename a script.

sudo update-rc.d -f motionStartup.sh remove

Copy and Remove Files from the Raspberry Pi

Once the startup script is working and we are capturing images and timelapse video, the next thing we will probably want to do is copy files off the Raspberry Pi. To do this over your WiFi network, use a ‘scp’ command from a remote machine. The command below copies all directories starting with ‘2013’, and their contents, to the remote machine, preserving the directory structure.

scp -rp user@ip_address_of_pi:/motiontmp/2013* ~/local_directory/

Maybe you just want all the timelapse videos Motion/FFmpeg creates; you don’t care about the images. The following command copies just the MPEG videos from all ‘2013’ folders to a single directory on your remote machine. The directory structure is ignored during the copy. This is the quickest way to just store all the videos.

scp -rp user@ip_address_of_pi:/motiontmp/2013*/*.mpg ~/local_directory/

If you are going to save all the MPEG timelapse videos in one location, I recommend changing the naming convention of the videos in the motion.conf file. I have added the hour, minute, and seconds to mine. This will ensure the names don’t conflict when moved to a common directory:

# File path for timelapse mpegs relative to target_dir
# Default: %Y%m%d-timelapse
# Default value is near equivalent to legacy oldlayout option
# For Motion 3.0 compatible mode choose: %Y/%m/%d-timelapse
# File extension .mpg is automatically added so do not include this
timelapse_filename %Y%m%d%H%M%S-timelapse

To remove all the videos and images once they have been moved off the Pi and are no longer needed, you can run an rm command. Using the ‘-rf’ option ensures the directories and their contents are removed.

sudo rm -rf /motiontmp/2013*

Conclusion

The only issue I have yet to overcome is maintaining the current time on the Raspberry Pi. The Pi lacks a Real Time Clock (RTC). Therefore, turning the Pi on and off in the car causes it to lose the current time. Since the Pi is not always on a WiFi network, it can’t sync to the current time when restarted. The only side-effects I’ve seen so far are that the videos occasionally contain more than one driving event, and the time displayed in the videos is not always correct. Otherwise, the process works pretty well.

Resources

The following are some useful resources on this topic:

Debian Reference: Chapter 3. The system initialization

How-To: Managing services with update-rc.d

Files and scripts that execute on boot

Making scripts run at boot time with Debian

Finding all files and move to new directory from shell prompt

Shell scripting: Write message to a syslog / log file

“Argument list too long”: Beyond Arguments and Limitations

Linux / Unix Command: date (used for TIMESTAMP)

Time / Date Commands




Raspberry Pi-Powered Dashboard Video Camera Using Motion and FFmpeg

Demonstrate the use of the Raspberry Pi and a basic webcam, along with Motion and FFmpeg, to build a low-cost dashboard video camera for your daily commute.


Dashboard Video Cameras

Most of us remember the proliferation of dashboard camera videos of the February 2013 meteor racing across the skies of Russia. This rare astronomical event was captured on many Russian motorists’ dashboard cameras. Due to the dangerous driving conditions in Russia, many drivers rely on dashboard cameras for insurance and legal purposes. In the United States, we are more used to seeing dashboard cameras used by law enforcement. Who hasn’t seen those thrilling police videos of car crashes, drunk drivers, and traffic stops gone wrong?

Although driving in the United States is not as dangerous as in Russia, there is no reason we can’t also use dashboard cameras. In case you are involved in an accident, you will have a video record of the event for your insurance company. If you witness an accident or other dangerous situation, your video may help law enforcement and other emergency responders. Maybe you just want to record a video diary of your next road trip.

A wide variety of dashboard video cameras, available for civilian vehicles, can be seen on Amazon’s website. They range in price and quality from less than $50 USD to well over $300 USD, depending on their features. In a popular earlier post, Remote Motion-Activated Web-Based Surveillance with Raspberry Pi, I demonstrated the use of the Raspberry Pi and a webcam, along with Motion and FFmpeg, to provide low-cost, web-based, remote surveillance. There are many other uses for this combination of hardware and software, including as a dashboard video camera.

Methods for Creating Dashboard Camera Videos

I’ve found two methods for capturing dashboard camera videos. The first and easiest method involves configuring Motion to use FFmpeg to create a video. FFmpeg creates a video from individual images (frames) taken at regular intervals while driving. The upside of the FFmpeg option is that it gives you a quick, ready-made video. The downside is your inability to fully control the high level of video compression and the high frame-rate (fps), which makes it hard to discern fine details when viewing the video.

Alternately, you can capture individual JPEG images and combine them using FFmpeg from the command line or using third-party movie-editing tools. The advantage of combining the images yourself is that you have more control over the quality and frame-rate of the video. Altering the frame-rate alters your perception of the speed of the vehicle recording the video. The only disadvantage of combining the images yourself is the extra steps involved to process the images into a video.

At one frame every two seconds (0.5 fps), a 30-minute commute to work will generate 30 frames/minute x 30 minutes, or 900 jpeg images. At 640 x 480 pixels, depending on your jpeg compression ratio, that’s a lot of data to move around and crunch into a video. If you just want a basic record of your travels, use FFmpeg. If you want a higher-quality record of the trip, maybe for a video diary, combining the frames yourself is the better way to go.

Configuring Motion for a Dashboard Camera

The installation and setup of FFmpeg and Motion are covered in my earlier post, so I won’t repeat that here. Below are several Motion settings I recommend starting with for use with a dashboard video camera. To configure Motion, open its configuration file by entering the following command on your Raspberry Pi:

sudo nano /etc/motion/motion.conf

To use FFmpeg, the first method, find the ‘FFMPEG related options’ section of the configuration and locate ‘Use ffmpeg to encode a timelapse movie’. Enter a number for the ‘ffmpeg_timelapse’ setting. This is the rate at which images are captured and combined into a video. I suggest starting with 2 seconds. With a dashboard camera, you are trying to record important events as you drive. In as little as 2-3 seconds at 55 mph, you can miss a lot of action. Moving the setting down to 1 second will give more detail, but you will chew up a lot of disk space, if that is an issue for you. I would experiment with different values:

# Use ffmpeg to encode a timelapse movie
# Default value 0 = off - else save frame every Nth second
ffmpeg_timelapse 2

To use the ‘do-it-yourself’ FFmpeg method, locate the ‘Snapshots’ section. Find ‘Make automated snapshot every N seconds (default: 0 = disabled)’. Change the ‘snapshot_interval’ setting, using the same logic as the ‘ffmpeg_timelapse’ setting, above:

# Make automated snapshot every N seconds (default: 0 = disabled)
snapshot_interval 2

Regardless of which method you choose (or use them both), you will want to tweak some more settings. In the ‘Text Display’ section, locate ‘Set to ‘preview’ will only draw a box in preview_shot pictures.’ Change the ‘locate’ setting to ‘off’. As shown in the video frame below, since you are moving in your vehicle most of the time, there is no sense turning on this option. Motion cannot differentiate between the highway zipping by the camera and the approaching vehicles. Everything is in motion to the camera; the box just gets in the way:

# Set to 'preview' will only draw a box in preview_shot pictures.
locate off


Optionally, I recommend turning on the time-stamp option. This is found right below the ‘locate’ setting. Especially in the event of an accident, you want an accurate time-stamp on the video or still images (make sure your Raspberry Pi’s time is correct):

# Draws the timestamp using same options as C function strftime(3)
# Default: %Y-%m-%d\n%T = date in ISO format and time in 24 hour clock
# Text is placed in lower right corner
text_right %Y-%m-%d\n%T-%q


Starting with the largest, best quality images will ensure the video quality is optimal. Start with a large size capture and reduce it only if you are having trouble capturing the video quickly enough. These settings are found in the ‘Capture device options’ section:

# Image width (pixels). Valid range: Camera dependent, default: 352
width 640

# Image height (pixels). Valid range: Camera dependent, default: 288
height 480

Similarly, I suggest starting with a low amount of JPEG compression to maximize quality, and lowering it only if necessary. This setting is found in the 'Image File Output' section:

# The quality (in percent) to be used by the jpeg compression (default: 75)
quality 90

Once you have completed the configuration of Motion, restart Motion for the changes to take effect:

sudo /etc/init.d/motion restart

Since you will be powering on your Raspberry Pi in your vehicle, and may have no way to reach Motion from a command line, you will want Motion to start capturing video and images automatically at startup. To enable Motion (the motion daemon) on start-up, edit the /etc/default/motion file:

sudo nano /etc/default/motion

Change the 'start_motion_daemon' setting to 'yes', as shown below. If you decide to stop using the Raspberry Pi for capturing video, remember to disable this option. Motion will keep generating video and images, even without a camera connected, if the daemon process is running.
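
The relevant line in the file looks like this:

# /etc/default/motion
start_motion_daemon=yes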

Capturing Dashboard Video

Although taking dashboard camera videos with your Raspberry Pi sounds easy, it presents several challenges. How will you mount your camera? How will you adjust your camera's view? How will you power your Raspberry Pi in the vehicle? How will you power-down your Raspberry Pi from the vehicle? How will you make sure Motion is running? How will you get the video and images off the Raspberry Pi? Do you have a mini keyboard and LCD monitor to use in your vehicle? Or, is your Raspberry Pi on your wireless network? If so, do you know how to bring up the camera's view and Motion's admin site on your smartphone's web browser?

My start-up process is as follows:

  1. Start my car.
  2. Plug the webcam and the power cable into the Raspberry Pi.
  3. Let the Raspberry Pi boot up fully and allow Motion to start. This takes less than one minute.
  4. Open the http address Motion serves up using my mobile browser. (My Raspberry Pi has a wireless USB adapter installed, so I am still able to connect from my garage.)
  5. Adjust the camera using the mobile browser's view from the camera.
  6. Optionally, use Motion's 'HTTP Based Control' feature to adjust any Motion configurations on-the-fly (great option; see the example below).
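
As a sketch of what on-the-fly control looks like, Motion's HTTP-based control interface accepts simple URLs from any browser or curl. The IP address below is a hypothetical Pi address, and the port and camera number assume Motion's defaults (control_port 8080, camera 0); yours may differ:

# list Motion's current configuration values
curl http://192.168.1.99:8080/0/config/list
# change a setting on-the-fly, e.g. capture a frame every second instead of every two
curl http://192.168.1.99:8080/0/config/set?ffmpeg_timelapse=1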
Logitech Webcam C210 Webcam Mounted on Car Sun Visor

Raspberry Pi in Vehicle with iPhone Preview of Dashboard Camera

Adjusting Camera using iPhone WiFi Connection to Raspberry Pi

Using Motion’s HTTP Based Control on iPhone Mobile Web Browser

Once I reach my destination, I copy the video and/or still image frames off the Raspberry Pi:

  1. Let the car run for at least 1-2 minutes after you stop. The Raspberry Pi is still processing the images and video.
  2. Copy the files off the Raspberry Pi over the local network, right from the car (if in range of the LAN; see the example after this list).
  3. Alternately, shut down the Raspberry Pi using an SSH mobile app on your smartphone, or just shut the car off (not the safest method!).
  4. Place the Pi's SDHC card into your laptop and copy the video and/or still image frames.
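
A minimal sketch of step 2, assuming a hypothetical Pi address of 192.168.1.99 and a Motion target_dir of /tmp/motion; both will depend on your setup:

# copy everything Motion captured (video and stills) to the laptop
scp 'pi@192.168.1.99:/tmp/motion/*' ~/dashcam/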
Shutting Down Raspberry Pi Using SSH Terminal iPhone App

Here are some tips I've found that make creating dashboard camera videos easier and of better quality:

  • Leave your camera in your vehicle once you mount and position it.
  • Make sure your camera is secure, so the vehicle's vibrations while driving don't create bouncy images or change the position of the camera's field of view.
  • Clean your vehicle’s front window, inside and out. Bugs or other dirt are picked up by the camera and may affect the webcam’s focus.
  • Likewise, film on the window from smoking or dirt will soften the details of the video and create harsh glare when driving on sunny days.
  • Similarly, make sure your camera’s lens is clean.
  • Keep your dashboard clear of objects such as paper, which reflect on the window and will obscure the dashboard camera's video.
  • Constantly stopping your Raspberry Pi by shutting the vehicle off can potentially damage the Raspberry Pi and/or corrupt the operating system.
  • Make sure to keep your Raspberry Pi out of sight of potential thieves, and out of direct sun, when you are not driving.
  • Back up your Raspberry Pi's SDHC card before using it as a dashboard camera; see Duplicating Your Raspberry Pi's SDHC Card (a quick sketch follows this list).
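
The full backup procedure is in the post linked above, but the core of it is a single dd command from a Linux machine. The device name below is a placeholder; identify your card's actual device first (e.g., with 'lsblk') before running anything like this:

# clone the entire SDHC card to an image file; /dev/sdX is a placeholder
sudo dd if=/dev/sdX of=~/raspberrypi-backup.img bs=4M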

Creating Video from Individual Dashboard Camera Images

FFmpeg

If you choose the second method for capturing dashboard camera videos, the easiest way to combine the individual dashboard camera images is by calling FFmpeg from the command line. To create the example #3 video, shown below, I ran two commands from a Linux Terminal prompt. The first command is a bash command that renames all the images to four-digit, incremented numbers ('0001.jpg', '0002.jpg', '0003.jpg', etc.), making it easier to execute the second command. I found this script on Stack Overflow. It requires Gawk ('sudo apt-get install gawk'). If you are unsure about running this command, make a copy of the original images in case something goes wrong.

The second command is a basic FFmpeg command to combine the images into a 20 fps MPEG-4 video file. More information on running FFmpeg can be found on their website. There is a huge number of options available with FFmpeg from the command line. Running this command, FFmpeg processed 4,666 frames at 640 x 480 pixels in 233.30 seconds, outputting a 147.5 MB MPEG-4 video file.

# rename all JPEGs in the current directory to zero-padded sequential numbers (0001.jpg, 0002.jpg, ...)
find . -name '*.jpg' | sort | gawk '{ printf "mv %s %04d.jpg\n", $0, NR }' | bash
# combine the numbered frames into a 20 fps MPEG-4 video
ffmpeg -r 20 -qscale 2 -i %04d.jpg output.mp4
FFmpeg Command Line Video Creation Output


Example #3 – FFmpeg Video from Command Line

If you want to compress the video, you can chain a second FFmpeg command to the first one, similar to the example below. In my tests, this reduced the video size to 20-25% of the original, uncompressed version.

# create the video, then re-encode it with the MPEG-2 codec to reduce the file size
ffmpeg -r 20 -qscale 2 -i %04d.jpg output.mp4 && ffmpeg -i output.mp4 -vcodec mpeg2video output_compressed.mp4

If your images are too dark (early morning or overcast) or have a color cast (poor webcam or tinted windows), you can use a program like ImageMagick to adjust all the images as a single batch. In example #5 below, I pre-processed all the images prior to making the video. With one ImageMagick command, I adjusted their levels to make them lighter and less flat.

# batch-adjust levels in place: 12% black point, 98% white point, 1.79 gamma
mogrify -level 12%,98%,1.79 *.jpg


Example #5 – FFmpeg Uncompressed Video from Command Line

Windows MovieMaker

Using Windows MovieMaker was not my first choice, but I've had a tough time finding an equivalent Linux GUI-based application. If you are going to create your own video from the still images, you need to be able to import and adjust thousands of images quickly and easily. With MovieMaker, I can import, create, and export a typical video of a 30-minute trip in 10 minutes. MovieMaker also lets you add titles, special effects, and so forth.

Single Images Combined in Windows MovieMaker

Sample Videos

Below are a few dashboard video examples, using a variety of methods. In the first two examples, I captured still images and created the FFmpeg video at the same time, so you can compare the quality of Method #1 to Method #2.


Example #2a – Motion/FFmpeg Video


Example #2b – Windows MovieMaker


Example #5 – FFmpeg Compressed Video from Command Line


Example #6 – FFmpeg Compressed Video from Command Line

Useful Links

Renaming files in a folder to sequential numbers

Useful FFmpeg Syntax Examples

ImageMagick: Command-line Options

ImageMagick: Mogrify — in-place batch processing


Deploying Applications to WebLogic Server on Oracle’s Pre-Built Development VM

Create a new WebLogic Server domain on Oracle’s Pre-Built Development VM, then deploy a sample web application to the domain from a remote machine.

Introduction

In my last two posts, Using Oracle’s Pre-Built Enterprise Java VM for Development Testing and Resizing Oracle’s Pre-Built Development Virtual Machines, I introduced Oracle’s Pre-Built Enterprise Java Development VM, aka a ‘virtual appliance’. Oracle has provided ready-made VMs that would take a team of IT professionals days to assemble. The Oracle Linux 5 OS-based VM has almost everything that comprises a basic enterprise test and production environment based on the Oracle/Java technology stack. The VM includes Java JDK 1.6+, WebLogic Server, Coherence, TopLink, Subversion, Hudson, Maven, NetBeans, Enterprise Pack for Eclipse, and so forth.

One of the first things you will probably want to do, once your Oracle’s Pre-Built Enterprise Java Development VM is up and running, is deploy an application to WebLogic Server. According to Oracle, WebLogic Server is ‘a scalable, enterprise-ready Java Platform, Enterprise Edition (Java EE) application server.’ Even if you haven’t used WebLogic Server before, don’t worry, Oracle has designed it to be easy to get started.

In this post, I will cover creating a new WebLogic Server (WLS) domain, and the deployment of a simple application to WLS from a remote development machine. The major steps in the process presented in this post are as follows:

  • Create a new WLS domain
  • Create and build a sample application
  • Deploy the sample application to the new WLS domain
  • Access deployed application via a web browser

Networking

First, let me review how I have my VM configured for networking, so you will understand my deployment methodology, discussed later. The way you configure your Oracle VM VirtualBox appliance will depend on your network topology. Again, keeping it simple for this post, I have given the Oracle VM a static IP address (192.168.1.88). The machine on which I am hosting VirtualBox uses DHCP to obtain an IP address on the same local wireless network.

For the VM’s VirtualBox networking mode, I have chosen the ‘Bridged Adapter‘ mode. Using this mode, any machine on the network can access the VM through the host machine, via the VM’s IP address. One of the best posts I have read on VM networking is on Oracle’s The Fat Bloke Sings blog, here.
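
For reference, on a Red Hat-style guest such as this VM's Oracle Linux 5, a static address is typically configured in the interface's config file. A minimal sketch, assuming the eth0 interface and a hypothetical 192.168.1.1 gateway; your values will differ:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.88
NETMASK=255.255.255.0
GATEWAY=192.168.1.1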

Setting Static IP Address for VM

Using VirtualBox’s Bridged Adapter Networking Mode

Creating New WLS Domain

A domain, according to Oracle, is ‘the basic administrative unit of WebLogic Server. It consists of one or more WebLogic Server instances, and logically related resources and services that are managed, collectively, as one unit.’ Although the Oracle Development VM comes with pre-existing domains, we will create our own for this post.

To create the new domain, we will use Oracle’s Fusion Middleware Configuration Wizard. The Wizard will take you through a step-by-step process to configure your new domain. To start the Wizard, from within the Oracle VM, open a terminal window, and use the following command to switch to the Wizard’s home directory and start the application:

/labs/wls1211/wlserver_12.1/common/bin/config.sh

There are a lot of configuration options available in the Wizard. I have selected some basic settings, shown below, to configure the new domain. Feel free to change the settings as you step through the Wizard to meet your own needs. Make sure to use the ‘Development Mode’ Start Mode option for this post. Also, make sure to note the admin port of the domain, the domain’s location, and the username and password you choose.

Creating the New Domain

Starting the Domain

To start the new domain, open a terminal window in the VM and run the following command to change to the root directory of the new domain and start the WLS domain instance. Your domain path and domain name may be different. The start script will bring up a new terminal window, showing the domain starting.

/labs/wls1211/user_projects/domains/blogdev_domain/startWebLogic.sh
New WLS Domain Starting

WLS Administration Console

Once the domain starts, test it by opening a web browser from the host machine and entering the URL of the WLS Administration Console. If your networking is set up correctly, the host machine will be able to connect to the VM and open the domain, running on the port you indicated when creating the domain, at the static IP address of the VM. If your IP address and port are different, make sure to change the URL. To log into the WLS Administration Console, use the username and password you chose when you created the domain.

http://192.168.1.88:7031/console/login/LoginForm.jsp
Logging into the New WLS Domain

Before we start looking around the new domain however, let’s install an application into it.

Sample Java Application

If you have an existing application you want to install, you can skip this part. If you don’t, we will quickly create a simple Java EE Hello World web application, using a pre-existing sample project in NetBeans – no coding required. From your development machine, create a new Samples -> Web Services -> REST: Hello World (Java EE 6) project. You now have a web project containing a simple RESTful web service, Servlet, and Java Server Page (.jsp). Build the project in NetBeans; we will upload the resulting .war file manually in the next step.

In a previous post, Automated Deployment to GlassFish Using Jenkins CI Server and Apache Ant, we used the same sample web application to demonstrate automated deployments to Oracle’s GlassFish application server.

Creating the Sample Java EE 6 Web Application in NetBeans

Naming the New Project

New Project in NetBeans

Hello World WAR File After Building Project

Deploying the Application

There are several methods to deploy applications to WLS, depending on your development workflow. For this post, we will keep it simple: we will manually deploy our web application’s .war file to WLS, using the browser-based WLS Administration Console. In a future post, we will use Hudson, also included on the VM, to build and deploy an application, but for now we will do it ourselves. (For the curious, a command-line alternative is sketched below.)
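
As an aside, WLS also ships with the weblogic.Deployer command-line tool, which can push the same .war file from the development machine. A hedged sketch, assuming the admin URL and port chosen earlier; the credentials, script path, and file name are illustrative and must match your own domain:

# put the WebLogic classes on the classpath (path follows the VM's WLS install)
. /labs/wls1211/wlserver_12.1/server/bin/setWLSEnv.sh

# deploy the .war to the admin server; adjust URL, credentials, and file name
java weblogic.Deployer -adminurl t3://192.168.1.88:7031 \
  -username weblogic -password welcome1 \
  -deploy -name HelloWebLogicServer HelloWebLogicServer.war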

To deploy the application, switch back to the WLS Administration Console. Following the screen grabs below, select the .war file built from the above web application and upload it to the Oracle VM’s new WLS domain. The .war file has all the necessary files, including the RESTful web service, Servlet, and the .jsp page. Make sure to deploy it as an ‘application’, as opposed to a ‘library’ (see the ‘target style’ configuration screen, below).

Select the Deployment Tab Lists All Deployed Applications

Select Install to Start the Installation Process

Select Next then Browse for .war File

Select Next Again to Install the Application

Install the Deployment as an Application

The Default Settings Are Fine for Application

Select Finish to Complete the Installation

The Overview Tab Reviews the Installed Application’s Configuration

Switch Back to the Deployments Tab to See the Installed Application

Click on the Application and Select the Testing Tab

Accessing the Application

Now that we have deployed the Hello World application, we can access it from a browser. From any machine on the same network, point a browser to the following URL, adjusting it if your VM’s IP address and domain’s port are different.

http://192.168.1.88:7031/HelloWebLogicServer/resources/helloWorld
Hello World Application Running from WLS Domain
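
If you prefer the command line, you can also exercise the service with curl from any machine on the network (assuming curl is installed); the URL is the same one used above:

curl http://192.168.1.88:7031/HelloWebLogicServer/resources/helloWorld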

The Hello World RESTful web service’s Web Application Description Language (WADL) description can be viewed at:

http://192.168.1.88:7031/HelloWebLogicServer/resources/application.wadl
RESTful Web Service’s WADL

Since the Oracle VM is accessible from anywhere on the network, the deployed application is also accessible from any device on the network, as demonstrated below.

Hello World Application Running from WLS Domain on iPhone

Conclusion

This was a simple demonstration of deploying an application to WebLogic Server on Oracle’s Pre-Built Enterprise Java Development VM. WebLogic Server is a powerful, feature-rich Java application server. Once you understand how to configure and administer WLS, you can deploy more complex applications. In future posts, we will show a more common, slightly more complex example of automated deployment from Hudson. In addition, we will show how to create a datasource in WLS and access it from the deployed application, to talk to a relational database.
