Posts Tagged Debian

Deploying Spring Boot Apps to AWS with Netflix Nebula and Spinnaker: Part 2 of 2

In Part One of this post, we examined enterprise deployment tools and introduced two of Netflix’s open-source deployment tools, the Nebula Gradle plugins, and Spinnaker. In Part Two, we will deploy a production-ready Spring Boot application, the Election microservice, to multiple Amazon EC2 instances, behind an Elastic Load Balancer (ELB). We will use a fully automated DevOps workflow. The build, test, package, bake, deploy process will be handled by the Netflix Nebula Gradle Linux Packaging Plugin, Jenkins, and Spinnaker. The high-level process will involve the following steps:

  • Configure Gradle to build a production-ready fully executable application for Unix systems (executable JAR)
  • Using deb-s3 and GPG Suite, create a secure, signed APT (Debian) repository on Amazon S3
  • Using Jenkins and the Netflix Nebula plugin, build a Debian package, containing the executable JAR and configuration files
  • Using Jenkins and deb-s3, publish the package to the S3-based APT repository
  • Using Spinnaker (HashiCorp Packer under the covers), bake an Ubuntu Amazon Machine Image (AMI), replete with the executable JAR installed from the Debian package
  • Deploy an auto-scaling set of Amazon EC2 instances from the baked AMI, behind an ELB, running the Spring Boot application using both the Red/Black and Highlander deployment strategies
  • Be able to repeat the entire automated build, test, package, bake, deploy process, triggered by a new code push to GitHub

The overall build, test, package, bake, deploy process will look as follows.

DebianPackageWorkflow12.png

DevOps Architecture

Spinnaker’s modern architecture is comprised of several independent microservices. The codebase is written in Java and Groovy. It leverages the Spring Boot framework¹. Spinnaker’s configuration, startup, updates, and rollbacks are centrally managed by Halyard. Halyard provides a single point of contact for command line interaction with Spinnaker’s microservices.

Spinnaker can be installed on most private or public infrastructure, either containerized or virtualized. Spinnaker has links to a number of Quickstart installations on its website. For this demonstration, I deployed and configured Spinnaker on Azure, starting with one of the Azure Spinnaker quick-start ARM templates. The template provisions all the necessary Azure resources. For better performance, I chose to upgrade the default VM to a larger Standard D4 v3, which contains 4 vCPUs and 16 GB of memory. I would recommend a minimum of 2 vCPUs and 8 GB of memory for Spinnaker.

Another Azure VM, in the same virtual network as the Spinnaker VM, already hosts Jenkins, SonarQube, and Nexus Repository OSS.

From Spinnaker on Azure, Debian Packages are uploaded to the APT package repository on AWS S3. Spinnaker also bakes Amazon Machine Images (AMI) on AWS. Spinnaker provisions the AWS resources, including EC2 instances, Load Balancers, Auto Scaling Groups, Launch Configurations, and Security Groups. The only resources you need on AWS to get started with Spinnaker are a VPC and Subnets. There are some minor, yet critical prerequisites for naming your VPC and Subnets.

Other external tools include GitHub for source control and Slack for notifications. I have built and managed everything from a Mac; however, all tools are platform-agnostic. The Spring Boot application was developed in JetBrains IntelliJ.

Spinnaker Architecture 2.png

Source Code

All source code for this post can be found on GitHub. The project’s README file contains a list of the Election service’s endpoints.

Code samples in this post are displayed as Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.

APT Repository

After setting up Spinnaker on Azure, I created an APT repository on Amazon S3, using the instructions provided by Netflix, in their Code Lab, An Introduction to Spinnaker: Hello Deployment. The setup involves creating an Amazon S3 bucket to serve as an APT (Debian) repository, creating a GPG key for signing, and using deb-s3 to manage the repository. The Code Lab also uses Aptly, a great tool, which I skipped for brevity.
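The repository bootstrap itself is only a couple of commands. As a minimal, hedged sketch, assuming Ruby and the AWS CLI are already installed (the bucket name and region match those used later in this post; adjust for your own account):

# install the deb-s3 Ruby gem used to manage the APT repository
gem install deb-s3

# create the S3 bucket that will back the APT repository
aws s3 mb s3://garystafford-spinnaker-repo --region us-east-1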

spin19

GPG Key

On the Mac, I used GPG Suite to create a GPG (GNU Privacy Guard or GnuPG) automatic signing key for my APT repository. The key is required by Spinnaker to verify the Debian packages in the repository, before installation.

The Ruby Gem, deb-s3, makes management of the Debian packages easy and automatable with Jenkins. Jenkins uploads the Debian packages, using a deb-s3 command, such as the following (gist). In this post, Jenkins calls the command from the shell script, upload-deb-package.sh, which is included in the GitHub project.

deb-s3 upload \
--bucket garystafford-spinnaker-repo \
--access-key-id=$AWS_ACCESS_KEY_ID \
--secret-access-key=$AWS_SECRET_ACCESS_KEY \
--arch=amd64 \
--codename=trusty \
--component=main \
--visibility=public \
--sign=$GPG_KEY_ID \
build/distributions/*.deb

The Jenkins user requires access to the signing key to build and upload the Debian packages. I created my GPG key on my Mac, securely copied the key to my Ubuntu-based Jenkins VM, and then imported the key for the Jenkins user. You could also create your key directly on Ubuntu. Make sure you back up your private key in a secure location!
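A minimal sketch of that key hand-off, assuming the key was created on the workstation and the Jenkins VM is reachable over SSH (the hostname and key email address are placeholders):

# on the workstation: export the private signing key (ASCII-armored)
gpg --export-secret-keys --armor "you@example.com" > apt-signing-key.asc

# copy the key securely to the Jenkins VM
scp apt-signing-key.asc ubuntu@jenkins-vm:/tmp/

# on the Jenkins VM: import the key into the jenkins user's keyring, then remove the file
sudo -u jenkins -H gpg --import /tmp/apt-signing-key.asc
shred -u /tmp/apt-signing-key.asc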

Nebula Packaging Plugin

Next, I set up a Gradle task in my build.gradle file to build my Debian packages using the Netflix Nebula Gradle Linux Packaging Plugin. Although Debian packaging tasks can become complex for larger application installations, the task for this post is pretty simple. I used many of the best practices suggested by Spring for production-grade deployments. The best-practices guide recommends file locations, file modes, and file user and group ownership. I created the JAR as a fully executable JAR, meaning it is started like any other executable and does not have to be launched with the standard java -jar command.

In the task, shown below (gist), the JAR and the external configuration file (optional) are copied to specific locations during the deployment and symlinked, as required. I used the older SysVinit system (init.d) to enable the application to start automatically on boot. You should probably use systemd (systemctl) for your services on Ubuntu 16.04.

task packDeb(type: Deb) {
  description 'Creates .deb package.'

  into '/opt/' + project.name // root directory

  from(jar.outputs.files) { // copy *.jar
    into 'lib'
    fileMode 0500
    user 'springapp'
    permissionGroup 'springapp'
  }

  from('build/resources/main/' + project.name + '.conf') { // copy .conf
    into 'conf'
    fileMode 0400
    user 'root'
    permissionGroup 'root'
  }

  // symlink jar to init.d
  link('/etc/init.d/election',
    '/opt/' + project.name + '/lib/' + jar.archiveName)

  // symlink the versioned .conf file, next to the jar, to the installed .conf
  link('/opt/' + project.name + '/lib/' + project.name + '-' + project.version + '.conf',
    '/opt/' + project.name + '/conf/' + project.name + '.conf')

  // link init.d to rc2.d
  link('/etc/rc2.d/S02election', '/etc/init.d/election')

  postInstall 'chattr +i ' + '/opt/' + project.name + '/lib/' + jar.archiveName
}

You can use the ar (archive) command (e.g., ar -x spring-postgresql-demo_4.5.0_all.deb) to extract and inspect the structure of a Debian package. The data.tar.gz file, displayed below in Atom, shows the final package structure.
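For example, extracting and then listing the package contents from the command line might look like this (the package filename matches the example above):

# extract control.tar.gz, data.tar.gz, and debian-binary from the package
ar -x spring-postgresql-demo_4.5.0_all.deb

# list the files the package will install, without extracting them
tar -tzf data.tar.gz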

spin47.png

Base AMI

Next, I baked a base AMI for Spinnaker to use. Spinnaker uses this base AMI to bake (re-bake) the final AMI(s) used for provisioning the EC2 instances containing the Spring Boot application. The Spinnaker base AMI is itself built from another base AMI, the official Ubuntu 16.04 LTS image. I installed the OpenJDK 8 package on the AMI, which is required to run the Java-based Election service. Lastly and critically, I added the location of my S3-based APT Debian package repository to the list of configured APT data sources, along with the GPG key required for package verification. This information and key will be used later by Spinnaker to bake AMIs using this base AMI. The set-up script, base_ubuntu_ami_setup.sh, is included in the GitHub project.

#!/usr/bin/env sh
# based on ami-6dfe5010
# Canonical, Ubuntu, 16.04 LTS, amd64 xenial image build on 2018-04-05
# References:
# https://docs.spring.io/spring-boot/docs/current/reference/html/deployment-install.html
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/building-shared-amis.html
# https://gist.github.com/justindowning/5921369
set +x
# update and install java
sudo apt-get update -y \
&& sudo apt-get upgrade -y \
&& sudo apt-get install \
openjdk-8-jre-headless -y
# a few optional ops tools
sudo apt-get install \
tree htop glances -y
# user:group application will run as
sudo useradd springapp
sudo usermod -a -G springapp springapp
sudo usermod -s /usr/sbin/nologin springapp
# add s3 deb repo
echo "deb http://garystafford-spinnaker-repo.s3-website-us-east-1.amazonaws.com trusty main" | \
sudo tee -a /etc/apt/sources.list.d/gary-stafford.list
curl -s https://s3.amazonaws.com/garystafford-spinnaker-repo/apt/doc/apt-key.gpg | \
sudo apt-key add -
# clean up and secure
sudo passwd -l root
sudo shred -u /etc/ssh/*_key /etc/ssh/*_key.pub
[ -f /home/ubuntu/.ssh/authorized_keys ] && rm /home/ubuntu/.ssh/authorized_keys
sudo rm -rf /tmp/*
cat /dev/null > ~/.bash_history
shred -u ~/.*history
history -c
exit

Jenkins

This post uses a single Jenkins CI/CD pipeline. Using a webhook, the pipeline is automatically triggered by every git push to the GitHub project. The pipeline pulls the source code, builds the application, and performs unit tests and static code analysis with SonarQube. If the build succeeds and the tests pass, the build artifact (JAR file) is bundled into a Debian package using the Nebula packaging plugin, uploaded to the S3-based APT repository using deb-s3, and archived locally for Spinnaker to reference. Once the pipeline is complete, on success or on failure, a Slack notification is sent. The Jenkinsfile used for this post is available in the project on GitHub.

#!/usr/bin/env groovy

def ACCOUNT = "garystafford"
def PROJECT_NAME = "spring-postgresql-demo"

pipeline {
  agent any

  tools {
    gradle 'gradle'
  }

  stages {
    stage('Checkout GitHub') {
      steps {
        git changelog: true, poll: false,
          branch: 'master',
          url: "https://github.com/${ACCOUNT}/${PROJECT_NAME}"
      }
    }

    stage('Build') {
      steps {
        sh 'gradle wrapper'
        sh 'LOGGING_LEVEL_ROOT=INFO ./gradlew clean build -x test --info'
      }
    }

    stage('Unit Test') { // unit test against in-memory h2
      steps {
        withEnv(['SPRING_DATASOURCE_URL=jdbc:h2:mem:elections']) {
          sh 'LOGGING_LEVEL_ROOT=INFO ./gradlew cleanTest test --info'
        }
        junit '**/build/test-results/test/*.xml'
      }
    }

    stage('SonarQube Analysis') {
      steps {
        withSonarQubeEnv('sonarqube') {
          sh "LOGGING_LEVEL_ROOT=INFO ./gradlew sonarqube -Dsonar.projectName=${PROJECT_NAME} --info"
        }
      }
    }

    stage('Build Debian Package') {
      steps {
        sh "LOGGING_LEVEL_ROOT=INFO ./gradlew packDeb --info"
      }
    }

    stage('Upload Debian Package') {
      steps {
        withCredentials([
          string(credentialsId: 'GPG_KEY_ID', variable: 'GPG_KEY_ID'),
          string(credentialsId: 'AWS_ACCESS_KEY_ID', variable: 'AWS_ACCESS_KEY_ID'),
          string(credentialsId: 'AWS_SECRET_ACCESS_KEY', variable: 'AWS_SECRET_ACCESS_KEY')]) {
          sh "sh ./scripts/upload-deb-package.sh ${GPG_KEY_ID}"
        }
      }
    }

    stage('Archive Debian Package') {
      steps {
        archiveArtifacts 'build/distributions/*.deb'
      }
    }
  }

  post {
    success {
      slackSend(color: '#79B12B',
        message: "SUCCESS: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")
    }
    failure {
      slackSend(color: '#FF0000',
        message: "FAILURE: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")
    }
  }
}

Below is a traditional Jenkins view of the CI/CD pipeline, with links to unit test reports, SonarQube results, build artifacts, and GitHub source code.

spin01

Below is the same pipeline viewed using the Jenkins Blue Ocean plugin.

spin02

It is important to perform sufficient testing before building the Debian package. You donʼt want to bake an AMI and deploy EC2 instances, at a cost, before finding out the application has bugs.

spin03

Spinnaker Setup

First, I set up a new Spinnaker Slack channel and a custom bot user. Spinnaker details the Slack set up in their Notifications and Events Guide. You can configure what type of Spinnaker events trigger Slack notifications.

spin46.png

AWS Spinnaker User

Next, I added the required Spinnaker User, Policy, and Roles to AWS. Spinnaker uses this access to query and provision infrastructure on your behalf. The Spinnaker User requires Power User-level access to perform all of its necessary tasks. AWS IAM setup is detailed by Spinnaker in their Cloud Providers Setup for AWS. They also describe the setup of other cloud providers. You need to be reasonably familiar with AWS IAM, including the PassRole permission, to set up this part. As part of the setup, you enable AWS for Spinnaker and add your AWS account using the Halyard interface.
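Spinnaker's guide walks through the full IAM setup, including the roles and PassRole permissions. As a rough, hedged sketch of just the user portion, using the AWS CLI (the user name is a placeholder; this is not a substitute for the official instructions):

# create an IAM user for Spinnaker and grant Power User access
aws iam create-user --user-name spinnaker-user
aws iam attach-user-policy --user-name spinnaker-user \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess

# generate the access keys that Spinnaker (via Halyard) will be configured with
aws iam create-access-key --user-name spinnaker-user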

spin45

Spinnaker Security Groups

Next, I set up two Spinnaker Security Groups, corresponding to two AWS Security Groups, one for the load balancer and one for the Election service. The load balancer security group exposes port 80, and the Election service security group exposes port 8080.

spin36

Spinnaker Load Balancer

Next, I created a Spinnaker Load Balancer, corresponding to an Amazon Classic Load Balancer. The Load Balancer will load-balance the Election service EC2 instances. Below you see a Load Balancer, balancing a pair of active EC2 instances, the result of a Red/Black deployment.

spin37

Spinnaker can currently create both AWS Classic Load Balancers and Application Load Balancers (ALB).

spin25

Spinnaker Pipeline

This post uses a single, basic Spinnaker Pipeline. The pipeline bakes a new AMI from the Debian package generated by the Jenkins pipeline. After a manual approval stage, Spinnaker deploys a set of EC2 instances, behind the Load Balancer, which contains the latest version of the Election service. Spinnaker finishes the pipeline by sending a Slack notification.

spin26

Jenkins Integration

The pipeline is triggered by the successful completion of the Jenkins pipeline. This is set in the Configuration stage of the pipeline. The integration with Jenkins is managed through Spinnaker’s Igor service.
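Assuming Halyard is used for configuration, wiring Spinnaker's Igor service to Jenkins looks roughly like the following (a sketch; the master name, address, and credentials are placeholders):

# enable the Jenkins CI integration (Igor)
hal config ci jenkins enable

# register the Jenkins master that Spinnaker should poll for triggers
echo $JENKINS_PASSWORD | hal config ci jenkins master add my-jenkins-master \
  --address http://jenkins.example.com:8080 \
  --username admin \
  --password   # password is read from stdin

# apply the new configuration
sudo hal deploy apply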

spin22.png

Bake Stage

Next, in the Bake stage, Spinnaker bakes a new AMI, containing the Debian package generated by the Jenkins pipeline. The stageʼs configuration contains the package name to reference.

spin29

The stageʼs configuration also includes a reference to which base AMI to use to bake the new AMIs. Here, I have used the AMI ID of the base Spinnaker AMI I created previously.

spin27

Deploy Stage

Next, the Deploy stage deploys the Election service, running on EC2 instances, provisioned from the new AMI, which was baked in the last stage. To configure the Deploy stage, you define a Spinnaker Server Group. According to Spinnaker, the Server Group identifies the deployable artifact, VM image type, the number of instances, autoscaling policies, metadata, Load Balancer, and a Security Group.

spin32

The Server Group also defines the Deployment Strategy. Below, I chose the Red/Black Deployment Strategy (also referred to as Blue/Green). This strategy will disable, not terminate, the active Server Group. If the new deployment fails, we can manually or automatically perform a Rollback to the previous, currently disabled Server Group.

spin11

Letʼs Start Baking!

With set up complete, letʼs kick off a git push, trigger and complete the Jenkins pipeline, and finally trigger the Spinnaker pipeline. Below we see the pipelineʼs Bake stage has been started. Spinnakerʼs UI lets us view the Bakery Details. The Bakery, provided by Spinnakerʼs Rosco service, bakes the AMIs. Rosco uses HashiCorp Packer to bake the AMIs, using standard Packer templates.

spin04

Below we see Spinnaker (Rosco/Packer) locating the Base Spinnaker AMI we configured in the Pipelineʼs Bake stage. Next, we see Spinnaker sshʼing into a new EC2 instance with a temporary keypair and Security Group and starting the Election service Debian package installation.

spin23

Continuing, we see the latest Debian package, derived from the Jenkins pipelineʼs archive, being pulled from the S3-based APT repo. The package is verified using the GPG key and then installed. Lastly, we see a new AMI is created, containing the deployed Election service, which was initially built and packaged by Jenkins. Note the AWS Resource Tags created by Spinnaker, as shown in the Bakery output.

spin24

The base Spinnaker AMI and the AMIs baked by Spinnaker are visible in the AWS Console. Note the naming conventions used by Spinnaker for the AMIs, the Source AMI used to build the new AMIs, and the addition of the Tags, which we saw being applied in the Bakery output above. The use of Tags indirectly allows full traceability from the deployed EC2 instance all the way back to the original code commit to git by the Developer.

spin48.png

Red/Black Deployments

With the new AMI baked successfully, and after the required manual approval, using a Manual Judgement-type pipeline stage, we can now begin a Red/Black deployment to AWS.

spin07

Using the Server Group configuration in the Deploy stage, Spinnaker deploys two EC2 instances, behind the ELB.

spin08

Below, we see the successful results of the Red/Black deployment. The single Spinnaker Cluster contains two deployed Server Groups. One group, the previously active Server Group (RED), comprised of two EC2 instances, is disabled. The ‘RED’ EC2 instances are unregistered with the load balancer but still running. The new Server Group (BLACK), also comprised of two EC2 instances, is now active and registered with the Load Balancer. Spinnaker will spread EC2 instances evenly across all Availability Zones in the US East (N. Virginia) Region.

spin38

From the AWS Console, we can observe four running instances, though only two are registered with the load balancer.

spin34

Here we see each deployed Server Group has a different Auto Scaling Group and Launch Configuration. Note the continued use of naming conventions by Spinnaker.

spin33

 There can be only one, Highlander!

Now, in the Deploy stage of the pipeline, we will switch the Server Groupʼs Strategy to Highlander. The Highlander strategy will, as you probably guessed by the name, destroy all other Server Groups in the Cluster. This is more typically used for lower environments, like Development or Test, where you are only interested in the next version of the application for testing. The Red/Black strategy is more applicable to Production, where you want the opportunity to quickly rollback to the previous deployment, if necessary.

spin12

Following a successful deployment, below, we now see the first two Server Groups have been terminated, and a third Server Group in the Cluster is active.

spin40.png

In the AWS Console, we can confirm the four previous EC2 instances have been successfully terminated as a result of the Highlander deployment strategy, and two new instances are running.

spin39

As well, the previous Auto Scaling Groups and Launch Configurations have been deleted from AWS by Spinnaker.

spin44.png

As expected, the Classic Load Balancer only contains the two most recent EC2 instances from the last Server Group deployed.

spin41

Confirming the Deployment

Using the DNS address of the load balancer, we can hit the Election service endpoints on either of the EC2 instances. All API endpoints are listed in the Projectʼs README file. Below, from a web browser, we see the candidates resource returning candidate information, retrieved from the Election serviceʼs PostgreSQL RDS Test instance.

spin42

Similarly, from Postman, we can hit the load balancer and get back election information from the elections resource, using an HTTP GET.
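The same checks can also be scripted with curl against the load balancer's DNS name (the hostname and exact resource paths below are illustrative; the full list of endpoints is in the project's README):

ELB_DNS=spring-postgresql-demo-elb-123456789.us-east-1.elb.amazonaws.com   # placeholder DNS name

# candidates resource
curl -s http://${ELB_DNS}/candidates

# elections resource
curl -s http://${ELB_DNS}/elections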

spin43.png

I intentionally left out a discussion of the service’s RDS database and how configuration management was handled with Spring Profiles and Spring Cloud Config. Both topics were out of scope for this post.

Conclusion

Although this was a brief, whirlwind overview of deployment tools, it shows the power of delivery tools like Spinnaker, when seamlessly combined with other tools, like Jenkins and the Nebula plugins. Together, these tools are capable of efficiently, repeatably, and securely deploying large numbers of containerized and non-containerized applications to a variety of private, public, and hybrid cloud infrastructure.

All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.

¹ Running Spinnaker on Compute Engine


Deploying Spring Boot Apps to AWS with Netflix Nebula and Spinnaker: Part 1 of 2

Listening to DevOps industry pundits, you might be convinced everyone is running containers in Production (or by now, serverless). Although containerization is growing at a phenomenal rate, several recent surveys¹ indicate less than 50% of enterprises are deploying containers in Production. Filter those results further with the fact that, of those enterprises, only a small percentage of their total application portfolios are containerized, let alone running in Production.

As a DevOps Consultant, I regularly work with corporations whose global portfolios are in the thousands of applications. Indeed, some percentage of their applications are containerized, with fewer still running in Production. However, a majority of those applications, even those built on modern, lightweight, distributed architectures, are still being deployed to bare-metal and virtualized public cloud and private data center infrastructure, for a variety of reasons.

Enterprise Deployment

Due to the scale and complexity of application portfolios, many organizations have invested in enterprise deployment tools, either commercially available or developed in-house. The enterprise deployment tool’s primary objective is to standardize the process of securely, reliably, and repeatably packaging, publishing, and deploying both containerized and non-containerized applications to large fleets of virtual machines and bare-metal servers, across multiple, geographically dispersed data centers and cloud providers. Enterprise deployment tools are particularly common in tightly regulated and compliance-driven organizations, as well as organizations that have undertaken large amounts of M&A, resulting in vastly different application technology stacks.

Enterprise CI/CD/Release Workflow

Better-known examples of commercially available enterprise deployment tools include IBM UrbanCode Deploy (aka uDeploy), XebiaLabs XL Deploy, CA Automic Release Automation, Octopus Deploy, and Electric Cloud ElectricFlow. While commercial tools continue to gain market share³, many organizations are tightly coupled to their in-house solutions through years of use and fear of widespread process disruption, given current economic, security, compliance, and skills-gap sensitivities.

Deployment Tool Anatomy

Most enterprise deployment tools are compatible with standard binary package types, including Debian (.deb) and Red Hat Package Manager (RPM, .rpm) packages for Linux, NuGet (.nupkg) packages for Windows, and Node Package Manager (.npm) and Bower for JavaScript. There are equivalent package types for other popular languages and formats, such as Go, Python, Ruby, SQL, Android, Objective-C, Swift, and Docker. Packages usually contain application metadata, a signature to ensure integrity and/or authenticity², and a compressed payload.

Enterprise deployment tools are normally integrated with open-source packaging and publishing tools, such as Apache Maven, Apache Ivy/Ant, Gradle, NPM, NuGet, Bundler, PIP, and Docker.

Binary packages (and images), built with enterprise deployment tools, are typically stored in private, open-source or commercial binary (artifact) repositories, such as Spacewalk, JFrog Artifactory, and Sonatype Nexus Repository. The latter two, Artifactory and Nexus, support a multitude of modern package types and repository structures, including Maven, NuGet, PyPI, NPM, Bower, Ruby Gems, CocoaPods, Puppet, Chef, and Docker.

Mature binary repositories provide many features in addition to package management, including role-based access control, vulnerability scanning, rich APIs, DevOps integration, and fault-tolerant, high-availability architectures.

Lastly, enterprise deployment tools generally rely on standard package management systems to retrieve and install cryptographically verifiable packages and images. These include YUM (Yellowdog Updater, Modified), APT (Advanced Package Tool), APK (Alpine Linux), NuGet, Chocolatey, NPM, PIP, Bundler, and Docker. Packages are deployed directly to running infrastructure, or indirectly to intermediate deployable components, such as Amazon Machine Images (AMI), Google Compute Engine machine images, VMware machines, Docker Images, or CoreOS rkt.

Open-Source Alternative

One such enterprise with an extensive portfolio of both containerized and non-containerized applications is Netflix. To standardize their deployments to multiple types of cloud infrastructure, Netflix has developed several well-known open-source software (OSS) tools, including the Nebula Gradle plugins and Spinnaker. I discussed Spinnaker in my previous post, Managing Applications Across Multiple Kubernetes Environments with Istio, as an alternative to Jenkins for deploying container workloads to Kubernetes on Google (GKE).

As a leader in OSS, Netflix has documented their deployment process in several articles and presentations, including a post from 2016, ‘How We Build Code at Netflix.’ According to the article, the high-level process for deployment to Amazon EC2 instances involves the following steps:

  • Code is built and tested locally using Nebula
  • Changes are committed to a central git repository
  • Jenkins job executes Nebula, which builds, tests, and packages the application for deployment
  • Builds are “baked” into Amazon Machine Images (using Spinnaker)
  • Spinnaker pipelines are used to deploy and promote the code change

The Nebula plugins and Spinnaker leverage many underlying, open-source technologies, including Pivotal Spring, Java, Groovy, Gradle, Maven, Apache Commons, Redline RPM, HashiCorp Packer, Redis, HashiCorp Consul, Cassandra, and Apache Thrift.

Both the Nebula plugins and Spinnaker have been battle tested in Production by Netflix, as well as by many other industry leaders after Netflix open-sourced the tools in 2014 (Nebula) and 2015 (Spinnaker). Currently, there are approximately 20 Nebula Gradle plugins available on GitHub. Notable core-contributors in the development of Spinnaker include Google, Microsoft, Pivotal, Target, Veritas, and Oracle, to name a few. A sign of its success, Spinnaker currently has over 4,600 Stars on GitHub!

Part Two: Demonstration

In Part Two, we will deploy a production-ready Spring Boot application, the Election microservice, to multiple Amazon EC2 instances, behind an Elastic Load Balancer (ELB). We will use a fully automated DevOps workflow. The build, test, package, bake, deploy process will be handled by the Netflix Nebula Gradle Linux Packaging Plugin, Jenkins, and Spinnaker. The high-level process will involve the following steps:

  • Configure Gradle to build a production-ready fully executable application for Unix systems (executable JAR)
  • Using deb-s3 and GPG Suite, create a secure, signed APT (Debian) repository on Amazon S3
  • Using Jenkins and the Netflix Nebula plugin, build a Debian package, containing the executable JAR and configuration files
  • Using Jenkins and deb-s3, publish the package to the S3-based APT repository
  • Using Spinnaker (HashiCorp Packer under the covers), bake an Ubuntu Amazon Machine Image (AMI), replete with the executable JAR installed from the Debian package
  • Deploy an auto-scaling set of Amazon EC2 instances from the baked AMI, behind an ELB, running the Spring Boot application using both the Red/Black and Highlander deployment strategies
  • Be able to repeat the entire automated build, test, package, bake, deploy process, triggered by a new code push to GitHub

The overall build, test, package, bake, deploy process will look as follows.

DebianPackageWorkflow12


All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.

¹ Recent Surveys: Forrester, Portworx, Cloud Foundry Survey
² Courtesy Wikipedia – rpm
³ XebiaLabs Kicks Off 2017 with Triple-Digit Growth in Enterprise DevOps


Travel-Size Wireless Router for Your Raspberry Pi

Use a low-cost nano-size wireless router to connect to your Raspberry Pi while traveling. Set up your own private wireless network in your vehicle, hotel, or coffee shop.

Introduction

Recently, I purchased a USB-powered wireless router to use with my Raspberry Pi when traveling. In an earlier post, Raspberry Pi-Powered Dashboard Video Camera Using Motion and FFmpeg, I discussed the use of the Raspberry Pi, combined with a webcam, Motion, and FFmpeg, to create a low-cost dashboard video camera. Like many, I find one of the big challenges with the Raspberry Pi is how to connect to and interact with it. Being in my car, and usually out of range of my home’s wireless network, except maybe in the garage, this becomes even more of an issue. That’s where adding an inexpensive travel-size router to my vehicle comes in handy.

I chose the TP-LINK TL-WR702N Wireless N150 Travel Router, sold by Amazon. The TP-LINK router, described as ‘nano size’, measures only 2.2 inches square by 0.7 inches wide. It has several modes of operation, including as a router, access point, client, bridge, or repeater. It operates at wireless speeds up to 150 Mbps and is compatible with IEEE 802.11b/g/n networks. It supports several common network security protocols, including WEP, WPA/WPA2, and WPA-PSK/WPA2-PSK encryption. For $22 USD, what more could you ask for?

TP-LINK Nano Router

My goal with the router was to do the following:

  1. Have the Raspberry Pi auto-connect to the new TP-LINK router’s wireless network when in range, just like my home network.
  2. Since I might still be in range of my home network, have the Raspberry Pi try to connect to the TP-LINK first, before falling back to my home network.
  3. Ensure the network was relatively secure, since I would be exposed to many more potential threats when traveling.

My vehicle has two power outlets. I plug my Raspberry Pi into one outlet and the router into the other. You could daisy-chain the router off the Pi. However, my Pi’s ports are in use by the USB wireless adapter and the USB webcam. Using the TP-LINK router, I can easily connect to the Raspberry Pi with my mobile phone or tablet, using an SSH client.

Using Fing to Locate the Pi on the TP-LINK Wireless Network

When I arrive at my destination, I log into the Pi and do a proper shutdown. This activates my shutdown script (see my last post), which moves the newly created Motion/FFmpeg time-lapse dash-cam videos to a secure folder on my Pi, before powering down.

Using SSH Terminal for iOS to Shutdown the Pi

Of course there are many other uses for the router. For example, I can remove the Pi and router from my car and plug it back in at the hotel while traveling, or power the router from my laptop while at work or the coffee shop. I now have my own private wireless network wherever I am to use the Raspberry Pi, or work with other users. Remember the TP-LINK can act as a router, access point, client, bridge, or a repeater.

The Raspberry Pi and Router both fit in a Small Container for Travel

Network Security

Before configuring your Raspberry Pi, the first thing you should do is change all the default security-related settings for the router. Start with the default SSID and the PSK password. Both of these default values are printed right on the router. That’s motivation enough to change them!

TP-LINK Administration Console 2

Additionally, change the default IP address of the router and the username and password for the browser-based Administration Console.

TP-LINK Administration Console

Lastly, pick the most secure protocol possible. I chose ‘WPA-PSK/WPA2-PSK’. All these changes are done through the TP-LINK’s browser-based Administration Console.

Configuring Multiple Wireless Networks

In an earlier post, Installing a Miniature WiFi Module on the Raspberry Pi (w/ Roaming Enabled), I detailed the installation and configuration of a Miniature WiFi Module, from Adafruit Industries, on a Pi running Soft-float Debian “wheezy”. I normally connect my Pi to my home wireless network. I wanted to continue to do this in the house, but connect the new router when traveling.

Based on the earlier post, I was already using Jouni Malinen’s wpa_supplicant, the WPA Supplicant for Linux, BSD, Mac OS X, and Windows with support for WPA and WPA2. This made network configuration relatively simple. If you use wpa_supplicant, your ‘/etc/network/interfaces’ file should look like the following. If you’re not familiar with configuring the interfaces file for wpa_supplicant, this post on NoWiresSecurity.com is a good starting point.

Interfaces File

Note that in this example, I am using DHCP for all wireless network connections. If you choose to use static IP addresses for any of the networks, you will have to change the interfaces file accordingly. Once you add multiple networks, configuring static IP addresses for each network becomes more complex. That is my next project…
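For reference, a representative /etc/network/interfaces file for this kind of DHCP, wpa_supplicant-based roaming setup might look like the following (a sketch; your adapter names and details may differ):

auto lo
iface lo inet loopback

iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf

# all roamed networks fall back to DHCP
iface default inet dhcp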

First, I generated a new pre-shared key (PSK) for the router’s SSID configuration using the following command. Substitute your own SSID (‘your_ssid’) and passphrase (‘your_passphrase’).

wpa_passphrase your_ssid your_passphrase

Based on your SSID and passphrase, this command will generate a pre-shared key (PSK), similar to the following. Save or copy the PSK to the clipboard. We will need the PSK in the next step.

Creating PSK 2
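The generated output is a ready-to-paste network block similar to this (values are placeholders; the commented #psk line, containing the plain-text passphrase, can be removed):

network={
	ssid="your_ssid"
	#psk="your_passphrase"
	psk=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
}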

Then, I modified my wpa_supplicant configuration file with the following command:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

I added the second network configuration, similar to the existing configuration for my home wireless network, using the newly generated PSK. Below is an example of what mine looks like (of course, not the actual PSKs).

WPA Supplicant Configuration
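A representative wpa_supplicant.conf with both networks defined might look like this (a sketch with placeholder SSIDs and PSKs; note the priority values discussed below):

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

# home network - tried second
network={
	ssid="home_network_ssid"
	psk=<psk-generated-by-wpa_passphrase>
	priority=1
}

# TP-LINK travel router - tried first
network={
	ssid="tplink_router_ssid"
	psk=<psk-generated-by-wpa_passphrase>
	priority=2
}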

Depending on your Raspberry Pi and router configurations, your wpa_supplicant configuration will look slightly different. You may wish to add more settings. Don’t consider my example the absolute right way for your networks.

Wireless Network Priority

Note the priority of the TP-LINK router is set to 2, while my home NETGEAR router is set to 1. This ensures wpa_supplicant will attempt to connect to the TP-LINK network first, before attempting the home network. The higher number gets priority. The best resource I’ve found, which explains all the configuration options in detail, is here. In this example wpa_supplicant configuration file, priority is explained this way: ‘by default, all networks will get same priority group (0). If some of the networks are more desirable, this field can be used to change the order in which wpa_supplicant goes through the networks when selecting a BSS. The priority groups will be iterated in decreasing priority (i.e., the larger the priority value, the sooner the network is matched against the scan results). Within each priority group, networks will be selected based on security policy, signal strength, etc.’

Conclusion

If you want an easy, inexpensive, secure way to connect to your Raspberry Pi, in the vehicle or other location, a travel-size wireless router is a great solution. Best of all, configuring it for your Raspberry Pi is simple if you use wpa_supplicant.


Using a Startup Script to Save Motion/FFmpeg Videos and Images on The Raspberry Pi

Use a start-up script to overcome limitations of Motion/FFmpeg and save multiple Raspberry Pi dashboard camera timelapse videos and images, automatically.

01-20130624190454-00

Introduction

In my last post, Raspberry Pi-Powered Dashboard Video Camera Using Motion and FFmpeg, I demonstrated how the Raspberry Pi can be used as a low-cost dashboard video camera. One of the challenges I faced in that post was how to save the timelapse videos and individual images (frames) created by Motion and FFmpeg when the Raspberry Pi is turned on and off. Each time the car starts, the Raspberry Pi boots up, and Motion begins to run, the previous images and video, stored in the default ‘/tmp/motion/’ directory, are removed, and new images and video are created.

Take the average daily commute: we drive to and from work. Maybe we stop for a morning coffee, or stop at the store on the way home to pick up dinner. Maybe we use our car to go out for lunch. Our car starts, stops, starts, stops, starts, and stops. Our daily commute actually encompasses a series of small trips, and therefore multiple dash-cam timelapse videos. If you are only interested in keeping the latest timelapse video in case of an accident, then this may not be a problem. When the accident occurs, simply pull the SDHC card from the Raspberry Pi and copy the video and images off to your laptop.

However, if you are interested in capturing and preserving series of dash-cam videos, such as in the daily commute example above, then the default behavior of Motion is insufficient. To preserve each video segment or series of images, we need a way to preserve the content created by Motion and FFmpeg, before they are overwritten. In this post, I will present a solution to overcome this limitation.

The process involves the following steps:

  1. Change the default location where Motion stores timelapse videos and images to somewhere other than a temporary directory;
  2. Create a startup script that will move the video and images to a safe location when restarting the Pi;
  3. Configure the Pi’s Debian operating system to run this script at startup (and optionally shutdown), before Motion starts;

Sounds pretty simple. However, understanding how startup scripts work with Debian’s Init program, making sure the new move script runs before Motion starts, and knowing how to move a huge number of files, all required forethought.

Change Motion’s Default Location for Video and Images

To start, change the default location where Motion stores timelapse video and images, from ‘/tmp/motion/’ to a location outside the ‘/tmp’ directory. I chose to create a directory called ‘/motiontmp’. Make sure you set the permissions on the new ‘/motiontmp’ directory, so Motion can write to it:

sudo chmod -R 777 /motiontmp

To have Motion use this location, we need to modify the Motion configuration file:

sudo nano /etc/motion/motion.conf

Change the following setting, shown below. Note when Motion starts for the first time, it will create the ‘motion’ sub-folder inside ‘motiontmp’. You do not have to create it yourself.

# Target base directory for pictures and films
# Recommended to use absolute path. (Default: current working directory)
target_dir /motiontmp/motion

Motion Target Directory

Create the Startup Script to Move Video and Images

Next, create the new shell script that will run at startup to move Motion’s video and images. The script creates a timestamped folder in new ‘motiontmp’ directory for each series of images and video. The script then copies all files from the ‘motion’ directory to the new timestamped directory. Before copying, the script deletes any zero-byte jpegs, which are images that did not fully process prior to the Raspberry Pi being shut off when the car stopped. To create the new script, run the following command.

sudo nano /etc/init.d/motionStartup.sh

Copy the following contents into the script and save it.

#!/bin/sh
# /etc/init.d/motionStartup
#
### BEGIN INIT INFO
# Provides:          motionStartup
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Move motion files at startup.
# Description:       Move motion files at startup.
# X-Start-Before:    motion
### END INIT INFO

# Some things that run always
#touch /var/lock/motionStartup
logger -s "Script motionStartup called"

# Carry out specific functions when asked to by the system
case "$1" in
  start)
    logger -s "Script motionStartup started"
    TIMESTAMP=$(date +%Y%m%d%H%M%S | sed 's/ //g') # No spaces
    logger -s "Script motionStartup $TIMESTAMP"
    sudo mkdir /motiontmp/$TIMESTAMP || logger -s "Error mkdir start"
    find /motiontmp/motion/. -type f -size 0 -print0 -delete
    find /motiontmp/motion/. -maxdepth 1 -type f | \
        xargs -I '{}' sudo mv {} /motiontmp/$TIMESTAMP
    ;;
  stop)
    logger -s "Script motionStartup stopped"
    ;;
  *)
    echo "Usage: /etc/init.d/motionStartup {start|stop}"
    exit 1
    ;;
esac

exit 0

Note the ‘X-Start-Before’ setting near the top of the script. An explanation of this setting is found on the Debian Wiki website. According to the site, “There is no such standard-defined header, but there is a proposed extension implemented in the insserv package (since version 1.09.0-8). Use the X-Start-Before and X-Stop-After headers proposed by SuSe.” To make sure you have a current version of ‘insserv‘, you can run the following command:

dpkg -l insserv

Also, note how the files are moved by the script:

find /motiontmp/motion/. -maxdepth 1 -type f | \
        xargs -I '{}' sudo mv {} /motiontmp/$TIMESTAMP

It’s not as simple as using ‘mv *.*’ when you have a few thousand files. This will likely throw an ‘Argument list too long’ error. According to one Stack Overflow answer, the error occurs because bash expands the asterisk to every matching file, producing a very long command line. Using ‘find’ combined with ‘xargs’ gets around this problem. The ‘xargs’ command splits up the list and issues several commands if necessary. This issue applies to several commands, including rm, cp, and mv.

Lastly, note the use of the ‘logger‘ commands throughout the script. These are optional and may be removed. I like to log the script’s progress for troubleshooting purposes. Using the above ‘logger’ commands, I can easily pinpoint issues by looking at the log with grep, such as:

tail -500 /var/log/messages | grep 'motionStartup' | grep 'logger:'

View of Log with Script Messages

You can test the script by running the following command:

/etc/init.d/./motionStartup.sh start

You should see a series of three messages output to the screen by the script, confirming the script is working. Note the new timestamped folder created by the script, below.

Testing the New Script

Below is an example of how the directory structure should look after a few videos are created by Motion, and the Raspberry Pi is cycled off and on. You need to complete the rest of the steps in this post for this to work automatically.


Shutdown Script?

I know, the name of the post clearly says ‘Startup Script’. Well, a little tip: if you copy the code from the ‘start’ method and paste it in the ‘stop’ method, this also works at shutdown. If you do a proper shutdown (like ‘sudo reboot’), the Raspberry Pi’s OS will call the script’s ‘stop’ method. The ‘start’ method is more useful for us in the car, where we may not be able to do a proper shutdown; we just turn the car off and kill power to the Pi. However, if you are shutting down from a mobile device via SSH, or using a micro keyboard and LCD monitor, the script will do its work on the way down.

Configure Debian OS to Run the New Startup Script

To have our new script run on startup, install it by running the following command:

sudo update-rc.d motionStartup.sh defaults

A full explanation of this command is too complex for this brief post. A good overview of creating startup scripts and installing them in Debian is found on the Debian Administration website. This is the source I used to start to understand runlevels. There are also a few links at the end of the post. To tell which runlevel (state) you are running at, use the following command:

runlevel

To make sure the startup script was installed properly, run the following command. This will display the contents of each ‘rc*.d’ folder. Each folder corresponds to a runlevel – 0, 1, 2, etc. Each folder contains symbolic links to the actual scripts. The links are named in order of execution (S01…, S02…, S03…):

ls /etc/rc*.d

Look for the new script listed under the appropriate runlevel(s). The new script should be listed before ‘motion’, as shown below.

View of Runlevels

View of Runlevels 2

If for any reason you need to uninstall the new script (not delete/remove it), run the following command. This is not a common task, but it is necessary if you need to change the order of execution of the scripts or rename a script.

sudo update-rc.d -f motionStartup.sh remove

Copy and Remove Files from the Raspberry Pi

Once the startup script is working and we are capturing images and timelapse video, the next thing we will probably want to do is copy files off the Raspberry Pi. To do this over your WiFi network, use a ‘scp’ command from a remote machine. The command below copies all directories starting with ‘2013’, and their contents, to a remote machine, preserving the directory structure.

scp -rp user@ip_address_of_pi:/motiontmp/2013* ~/local_directory/

Maybe you just want all the timelapse videos Motion/FFmpeg creates; you don’t care about the images. The following command copies just the MPEG videos from all ‘2013’ folders to a single directory on your remote machine. The directory structure is ignored during the copy. This is the quickest way to just store all the videos.

scp -rp user@ip_address_of_pi:/motiontmp/2013*/*.mpg ~/local_directory/

If you are going to save all the MPEG timelapse videos in one location, I recommend changing the naming convention of the videos in the motion.conf file. I have added the hour, minute, and seconds to mine. This will ensure the names don’t conflict when moved to a common directory:

# File path for timelapse mpegs relative to target_dir
# Default: %Y%m%d-timelapse
# Default value is near equivalent to legacy oldlayout option
# For Motion 3.0 compatible mode choose: %Y/%m/%d-timelapse
# File extension .mpg is automatically added so do not include this
timelapse_filename %Y%m%d%H%M%S-timelapse

To remove all the videos and images once they have been moved off the Pi and are no longer needed, you can run an rm command. Using the ‘-rf’ options ensures the directories and their contents are removed.

sudo rm -rf /motiontmp/2013*

Conclusion

The only issue I have yet to overcome is maintaining the current time on the Raspberry Pi. The Pi lacks a Real Time Clock (RTC). Therefore, turning the Pi on and off in the car causes it to lose the current time. Since the Pi is not always on a WiFi network, it can’t sync to the current time when restarted. The only side effects I’ve seen so far caused by this are that the videos occasionally contain more than one driving event, and the time displayed in the videos is not always correct. Otherwise, the process works pretty well.

Resources

The following are some useful resources on this topic:

Debian Reference: Chapter 3. The system initialization

How-To: Managing services with update-rc.d

Files and scripts that execute on boot

Making scripts run at boot time with Debian

Finding all files and move to new directory from shell prompt

Shell scripting: Write message to a syslog / log file

“Argument list too long”: Beyond Arguments and Limitations

Linux / Unix Command: date (used for TIMESTAMP)

Time / Date Commands


Raspberry Pi-Powered Dashboard Video Camera Using Motion and FFmpeg

Demonstrate the use of the Raspberry Pi and a basic webcam, along with Motion and FFmpeg, to build low-cost dashboard video camera for your daily commute.

01-20130622162908-00_feature

Dashboard Video Cameras

Most of us remember the proliferation of dashboard camera videos of the February 2013 meteor racing across the skies of Russia. This rare astronomical event was captured on many Russian motorists’ dashboard cameras. Due to the dangerous driving conditions in Russia, many drivers rely on dashboard cameras for insurance and legal purposes. In the United States, we are more used to seeing dashboard cameras used by law enforcement. Who hasn’t seen those thrilling police videos of car crashes, drunk drivers, and traffic stops gone wrong?

Although driving in the United States is not as dangerous as in Russia, there is no reason we can’t also use dashboard cameras. In case you are involved in an accident, you will have a video record of the event for your insurance company. If you witness an accident or other dangerous situation, your video may help law enforcement and other emergency responders. Maybe you just want to record a video diary of your next road trip.

A wide variety of dashboard video cameras, available for civilian vehicles, can be seen on Amazon’s website. They range in price and quality from less than $50 USD to well over $300 USD, depending on their features. In a popular earlier post, Remote Motion-Activated Web-Based Surveillance with Raspberry Pi, I demonstrated the use of the Raspberry Pi and a webcam, along with Motion and FFmpeg, to provide low-cost, web-based, remote surveillance. There are many other uses for this combination of hardware and software, including as a dashboard video camera.

Methods for Creating Dashboard Camera Videos

I’ve found two methods for capturing dashboard camera videos. The first and easiest method involves configuring Motion to use FFmpeg to create a video. FFmpeg creates a video from individual images (frames) taken at regular intervals while driving. The upside of the FFmpeg option is that it gives you a quick, ready-made video. The downside of the FFmpeg option is your inability to fully control the high level of video compression and high frame rate (fps). This makes it hard to discern fine details when viewing the video.

Alternately, you can capture individual JPEG images and combine them using FFmpeg from the command line or using third-party movie-editing tools. The advantage of combining the images yourself is that you have more control over the quality and frame rate of the video. Altering the frame rate alters your perception of the speed of the vehicle recording the video. The only disadvantage of combining the images yourself is the extra steps involved to process the images into a video.

At one frame every two seconds (.5 fps), a 30 minute commute to work will generate 30 frames/minute x 30 minutes, or 900 jpeg images. At 640 x 480 pixels, depending on your jpeg compression ratio, that’s a lot of data to move around and crunch into a video. If you just want a basic record of your travels, use FFmpeg. If you want a higher-quality record of trip, maybe for a video-diary, combining the frames yourself is a better way to go.

Configuring Motion for a Dashboard Camera

The installation and setup of FFmpeg and Motion are covered in my earlier post, so I won’t repeat that here. Below are several Motion settings I recommend starting with for use with a dashboard video camera. To configure Motion, open its configuration file by entering the following command on your Raspberry Pi:

sudo nano /etc/motion/motion.conf

To use FFmpeg, the first method, find the ‘FFMPEG related options’ section of the configuration and locate ‘Use ffmpeg to encode a timelapse movie’. Enter a number for the ‘ffmpeg_timelapse’ setting. This is the rate at which images are captured and combined into a video. I suggest starting with 2 seconds. With a dashboard camera, you are trying to record important events as you drive. In as little as 2-3 seconds at 55 mph, you can miss a lot of action. Moving the setting down to 1 second will give more detail, but you will chew up a lot of disk space, if that is an issue for you. I would experiment with different values:

# Use ffmpeg to encode a timelapse movie
# Default value 0 = off - else save frame every Nth second
ffmpeg_timelapse 2

To use the ‘do-it-yourself’ FFmpeg method, locate the ‘Snapshots’ section. Find ‘Make automated snapshot every N seconds (default: 0 = disabled)’. Change the ‘snapshot_interval’ setting, using the same logic as the ‘ffmpeg_timelapse’ setting, above:

# Make automated snapshot every N seconds (default: 0 = disabled)
snapshot_interval 2

Regardless of which method you choose (or use them both), you will want to tweak some more settings. In the ‘Text Display’ section, locate ‘Set to ‘preview’ will only draw a box in preview_shot pictures.’ Change the ‘locate’ setting to ‘off’. As shown in the video frame below, since you are moving in your vehicle most of the time, there is no sense turning on this option. Motion cannot differentiate between the highway zipping by the camera and the approaching vehicles. Everything is in motion to the camera; the box just gets in the way:

# Set to 'preview' will only draw a box in preview_shot pictures.
locate off

01-20130625193032-01

Optionally, I recommend turning on the time-stamp option. This is found right below the ‘locate’ setting. Especially in the event of an accident, you want an accurate time-stamp on the video or still images (make sure your Raspberry Pi’s time is correct):

# Draws the timestamp using same options as C function strftime(3)
# Default: %Y-%m-%d\n%T = date in ISO format and time in 24 hour clock
# Text is placed in lower right corner
text_right %Y-%m-%d\n%T-%q

01-20130622162908-00

Starting with the largest, best-quality images will ensure the video quality is optimal. Start with a large capture size and reduce it only if you are having trouble capturing the video quickly enough. These settings are found in the ‘Capture device options’ section:

# Image width (pixels). Valid range: Camera dependent, default: 352
width 640

# Image height (pixels). Valid range: Camera dependent, default: 288
height 480

Similarly, I suggest starting with a low amount of jpeg compression to maximize quality and only lower if necessary. This setting is found in the ‘Image File Output’ section:

# The quality (in percent) to be used by the jpeg compression (default: 75)
quality 90

Once you have completed the configuration of Motion, restart Motion for the changes to take effect:

sudo /etc/init.d/motion restart

Since you will be powering on your Raspberry Pi in your vehicle, and may have no way to reach Motion from a command line, you will want Motion to start capturing video and images for you automatically at startup. To enable Motion (the motion daemon) on start-up, edit the /etc/default/motion file.

sudo nano /etc/default/motion

Change the ‘start_motion_daemon‘ setting to ‘yes’. If you decide to stop using the Raspberry Pi for capturing video, remember to disable this option. Motion will keep generating video and images, even without a camera connected, if the daemon process is running.
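The relevant line in /etc/default/motion, once edited, is simply:

start_motion_daemon=yes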

Capturing Dashboard Video

Although taking dashboard camera videos with your Raspberry Pi sounds easy, it presents several challenges. How will you mount your camera? How will you adjust your camera’s view? How will you power your Raspberry Pi in the vehicle? How will you power down your Raspberry Pi from the vehicle? How will you make sure Motion is running? How will you get the video and images off the Raspberry Pi? Do you have a mini keyboard and LCD monitor to use in your vehicle? Or, is your Raspberry Pi on your wireless network? If so, do you know how to bring up the camera’s view and Motion’s admin site on your smartphone’s web browser?

My start-up process is as follows:

  1. Start my car.
  2. Plug the webcam and the power cable into the Raspberry Pi.
  3. Let the Raspberry Pi boot up fully and allow Motion to start. This takes less than one minute.
  4. Open the http address Motion serves up using my mobile browser.
    (My Raspberry Pi has a wireless USB adapter installed, so I’m still able to connect from my garage.)
  5. Adjust the camera using the mobile browser view from the camera.
  6. Optionally, use Motion’s ‘HTTP Based Control’ feature to adjust any Motion configurations, on-the-fly (great option).

Logitech Webcam C210 Webcam Mounted on Car Sun Visor


Raspberry Pi in Vehicle with iPhone Preview of Dashboard Camera


Adjusting Camera using iPhone WiFi Connection to Raspberry Pi


Using Motion’s HTTP Based Control on iPhone Mobile Web Browser

Once I reach my destination, I copy the video and/or still image frames off the Raspberry Pi:

  1. Let the car run for at least 1-2 minutes after you stop. The Raspberry Pi is still processing the images and video.
  2. Copy the files off the Raspberry Pi over the local network, right from the car, if in range of my LAN (see the copy command sketch after this list).
  3. Alternately, shut down the Raspberry Pi using an SSH mobile app on your smartphone, or just shut the car off (this is not the safest method!).
  4. Place the Pi’s SDHC card into my laptop and copy the video and/or still image frames.
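If you copy files over the LAN (step 2), a single rsync command from your laptop will pull everything off the Pi. The hostname and path below are assumptions; substitute your Pi’s address and whatever target_dir you configured in motion.conf (often /tmp/motion on Debian-based installs):

# Pull Motion's output directory from the Pi to the laptop over the LAN
# (raspberrypi.local and /tmp/motion are assumptions -- adjust for your setup)
rsync -avz pi@raspberrypi.local:/tmp/motion/ ~/dashcam-videos/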

Shutting Down Raspberry Pi Using SSH Terminal iPhone App

Here are some tips I’ve found to make creating dashboard camera videos easier and of better quality:

  • Leave your camera in your vehicle once you mount and position it.
  • Make sure your camera is mounted securely so the vehicle’s vibrations while driving don’t create shaky images or shift the camera’s field of view.
  • Clean your vehicle’s front window, inside and out. Bugs or other dirt are picked up by the camera and may affect the webcam’s focus.
  • Likewise, film on the window from smoking or dirt will soften the details of the video and create harsh glare when driving on sunny days.
  • Similarly, make sure your camera’s lens is clean.
  • Keep your dashboard clear of objects such as paper, as they reflect on the window and will obscure the dashboard camera’s video.
  • Constantly stopping your Raspberry Pi by shutting the vehicle off can potentially damage the Raspberry Pi and/or corrupt the operating system.
  • Make sure to keep your Raspberry Pi out of sight of potential thieves and the direct sun when you are not driving.
  • Back up your Raspberry Pi’s SDHC card before using it as a dashboard camera; see Duplicating Your Raspberry Pi’s SDHC Card and the example command after this list.
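For a quick backup from a Linux laptop, you can image the entire card with dd. The device name below is an assumption; confirm it first with lsblk so you do not read from (or later overwrite) the wrong disk:

# Image the whole SDHC card to a file on the laptop
# (/dev/sdb is an assumption -- verify the device name with 'lsblk' first)
sudo dd if=/dev/sdb of=~/raspi-dashcam-backup.img bs=4M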

Creating Video from Individual Dashboard Camera Images

FFmpeg

If you choose the second method for capturing dashboard camera videos, the easiest way to combine the individual dashboard camera images is to call FFmpeg from the command line. To create the example #3 video, shown below, I ran two commands from a Linux terminal prompt. The first is a bash command that renames all the images to four-digit, incremented numbers (‘0001.jpg’, ‘0002.jpg’, ‘0003.jpg’, etc.), which makes it easier to execute the second command. I found this script on Stack Overflow. It requires Gawk (‘sudo apt-get install gawk’). If you are unsure about running this command, make a copy of the original images in case something goes wrong.

The second command is a basic FFmpeg command that combines the images into a 20 fps MPEG-4 video file. More information on running FFmpeg can be found on its website; there is a huge number of command-line options available. Running this command, FFmpeg processed 4,666 frames at 640 x 480 pixels in 233.30 seconds, outputting a 147.5 MB MPEG-4 video file.

find -name '*.jpg' | sort | gawk '{ printf "mv %s %04d.jpg\n", $0, NR }' | bash 
ffmpeg -r 20 -qscale 2  -i %04d.jpg output.mp4

FFmpeg Command Line Video Creation Output


Example #3 – FFmpeg Video from Command Line

If you want to compress the video, you can chain a second FFmpeg command to the first one, similar to the one below. In my tests, this reduced the video size to 20-25% of the original uncompressed version.

ffmpeg -r 20 -qscale 2 -i %04d.jpg output.mp4 && ffmpeg -i output.mp4 -vcodec mpeg2video output_compressed.mp4

If your images are too dark (early morning or overcast) or have a color cast (poor webcam or tinted windows), you can use a program like ImageMagick to adjust all the images as a single batch. In example #5 below, I pre-processed all the images prior to making the video. With one ImageMagick command, I adjusted their levels to make them lighter and less flat.

mogrify -level 12%,98%,1.79 *.jpg


Example #5 – FFmpeg Uncompressed Video from Command Line

Windows MovieMaker

Using Windows MovieMaker was not my first choice, but I’ve had a tough time finding an equivalent GUI-based application for Linux. If you are going to create your own video from the still images, you need to be able to import and adjust thousands of images quickly and easily. With MovieMaker, I can import, create, and export a typical video of a 30-minute trip in about 10 minutes. MovieMaker also lets you add titles, special effects, and so forth.


Single Images Combined in Windows MovieMaker

Sample Videos

Below are a few dashboard video examples using a variety of methods. In the first two examples, I captured still images and created the FFmpeg video at the same time. You can compare the quality of Method #1 to Method #2.


Example #2a – Motion/FFmpeg Video


Example #2b – Windows MovieMaker


Example #5 – FFmpeg Compressed Video from Command Line


Example #6 – FFmpeg Compressed Video from Command Line

Useful Links

Renaming files in a folder to sequential numbers

Useful FFmpeg Syntax Examples

ImageMagick: Command-line Options

ImageMagick: Mogrify — in-place batch processing




Installing TightVNC on the Raspberry Pi

Sometimes connecting a keyboard, mouse, and monitor to a Raspberry Pi is really inconvenient. But what’s the alternative if you want to interact directly with your Raspberry Pi’s GUI? PuTTY is an excellent SSH client, but the command shell is no substitute. WinSCP is an excellent SFTP client, but again, no substitute for a fully-functional GUI. The answer to this predicament? TightVNC, by GlavSoft LLC.

Background

According to TightVNC Software’s website, ‘TightVNC is a free remote control software package. With TightVNC, you can see the desktop of a remote machine and control it with your local mouse and keyboard, just like you would do it sitting in the front of that computer.’

What is VNC? According to Wikipedia, ‘Virtual Network Computing (VNC) is a graphical desktop sharing system that uses the RFB protocol (remote framebuffer) to remotely control another computer. It transmits the keyboard and mouse events from one computer to another, relaying the graphical screen updates back in the other direction, over a network.’

If you are a Windows user, you are no doubt familiar with Microsoft’s Remote Desktop Connection (RDC). GlavSoft’s TightVNC and Microsoft’s RDC are almost identical in terms of functionality.

Installation

TightVNC has two parts, the client and the server. The TightVNC Server software is installed on the Raspberry Pi (RaspPi). The RaspPi acts as the TightVNC Server. The client software, the TightVNC Java Viewer, is installed on a client laptop or desktop computer.

I used PuTTY from my Windows 8 laptop to perform the following installation and configuration. I successfully performed this process on a RaspPi Model B, with copies of both Raspbian “wheezy” and Soft-float Debian “wheezy”.

TightVNC Server
To install the TightVNC Server software, run the following commands from the RaspPi. The first command is optional, but usually recommended before installing new software.

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install tightvncserver

To test the success of the TightVNC Server installation, enter ‘vncserver‘ in the command shell. The first time you run this command, you will be asked to set a VNC password for the current user (‘pi’). The password can be different from the system password used by this user. After entering a password, you should see output similar to the screen grab below, which indicates that TightVNC is running.

First Time vncserver Command 1

By default, TightVNC runs on port 5901. To verify TightVNC is running on 5901, enter the command ‘sudo netstat -tulpn‘. You should see output similar to the screen grab below. Note the entry for TightVNC on port 5901. Stop TightVNC by entering the ‘vncserver -kill :1‘ command.

First Time vncserver Command 2

You may have noticed TightVNC was also running on port 6001. This is actually used by the X Window System, aka ‘X11’. A discussion of X11 is out of scope for this post, but more info can be found here.

Automatic Startup
For TightVNC Server to start automatically when we boot up our RaspPi, we need to create an init script and add it to the default runlevels. I had a lot of problems with this part until I found this post, with detailed instructions on how to perform these steps.

Start by entering the following command to create the init script:

sudo nano /etc/init.d/tightvncserver

Copy and paste the init script from the above post into this file. Change the user from ‘pi’ to your user if it is different. Save and close the file.
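The linked post contains the exact script. Purely for reference, a minimal sketch of that kind of init script looks roughly like the following; the display number (:1), geometry, and color depth are assumptions you should adjust for your setup:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          tightvncserver
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start TightVNC Server at boot time
### END INIT INFO

# User that will own the VNC session (change if not 'pi')
USER=pi

case "$1" in
  start)
    # Start a VNC session on display :1 (port 5901); geometry and depth are assumptions
    su - $USER -c '/usr/bin/vncserver :1 -geometry 1024x768 -depth 16'
    echo "Starting TightVNC Server for $USER"
    ;;
  stop)
    su - $USER -c '/usr/bin/vncserver -kill :1'
    echo "Stopping TightVNC Server for $USER"
    ;;
  *)
    echo "Usage: /etc/init.d/tightvncserver {start|stop}"
    exit 1
    ;;
esac

exit 0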

Next, execute these two commands to add the script to the default runlevels:

sudo chmod 755 /etc/init.d/tightvncserver
sudo update-rc.d tightvncserver defaults
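Before restarting, you can optionally confirm the init script works by starting and stopping it manually:

sudo /etc/init.d/tightvncserver start
sudo /etc/init.d/tightvncserver stop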

To complete the TightVNC Server installation, restart the RaspPi.

TightVNC Java Viewer
According to the website, ‘TightVNC Java Viewer is a fully functional remote control client written entirely in Java. It can work on any computer where Java is installed. It requires Java SE version 1.6 or any later version. That can be Windows or Mac OS, Linux or Solaris — it does not make any difference. And it can work in your browser as well.’ On the client computer, download and unzip the TightVNC Java Viewer. At the time of this post, the current TightVNC Java Viewer version was 2.6.2.

Once the installation is complete, double-click on the ‘tightvnc-jviewer.jar’ file. Running the Java jar file will bring up the ‘New TightVNC Connection’ window, as seen in the example below. Input the RaspPi’s IP address or hostname, and the default TightVNC port of 5901. The use of SSH tunneling is optional with the TightVNC Viewer. If you are concerned about security, use SSH.
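If double-clicking the jar does nothing on your system, the viewer can also be launched from a terminal. This assumes Java is on your PATH and you are in the directory where the viewer was unzipped:

java -jar tightvnc-jviewer.jar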

TightVNC Connection Window

Clicking the ‘Connect’ button, you are presented with a window to input the user’s VNC password.

TightVNC VNC Authentication Window

Optionally, if using SSH, the user’s SSH password is required. Again, the same user can have different SSH and VNC passwords, as mine does.

TightVNC SSH Authentication Window

If everything was installed and configured correctly, you should be presented with a TightVNC window displaying the RaspPi’s desktop. Note the TightVNC toolbar along the top edge of the window. The ‘Ctrl’ and ‘Alt’ buttons are especially useful for sending either of these two key inputs to the RaspPi from a Windows client. Using the ‘Set Options’ button, you can change the quality of TightVNC’s remote display. Note that these changes can affect performance.

Raspberry Pi's X Desktop

Congratulations, no more connecting a keyboard, mouse, and monitor to your RaspPi to access the GUI. I suggest reading the documentation on the TightVNC website, as well as the ‘README.txt’ file included with the TightVNC Java Viewer. There is a lot more to TightVNC than I have covered in this brief introductory post, especially in the README.txt file. -gs

