Posts Tagged RaspberryPi
There are a few reasons you might want to duplicate (clone/copy) your Raspberry Pi’s Secure Digital High-Capacity (SDHC) card. I had two: backup, and a second Raspberry Pi. I had spent untold hours installing and configuring software on my Raspberry Pi, with Java, OpenCV, Motion, etc. Having a backup of all that work seemed like a good idea.
The second reason was a second Raspberry Pi. I wanted to set up a second Raspberry Pi, but didn’t want to spend the time to duplicate my previous efforts. Nor could I probably ever duplicate the first Pi’s configuration exactly. To ensure consistency across multiple Raspberry Pis, duplicating my first Raspberry Pi’s SDHC card made a lot of sense.
I found several posts on the web about duplicating an SDHC card. One of the best articles was on the PIXHAWK website. It only took me a few simple steps to back up my original 8 GB SDHC card, and then create a clone by copying the backup to a new 8 GB SDHC card, as follows:
1) Remove the original SDHC card from Raspberry Pi and insert it into a card reader on your computer. I strongly suggest locking the card to protect it against any mistakes while backing up.
2) Locate where the SDHC card is mounted on your computer. This can be done using GParted, or in a terminal window, using the ‘blkid’ (block device attributes) command. My Raspberry Pi’s SDHC card, with its three separate partitions, was found at ‘/dev/sdb’.
3) Use the ‘dd’ (convert and copy a file) command to duplicate the contents of the SDHC card to your computer. This can take a while and there is no progress bar. The command I used to back up the card to my computer’s $HOME directory was:
sudo dd if=/dev/sdb of=~/sdhc-card-bu.bin
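Since ‘dd’ prints nothing while it copies, it helps to know it will report progress on request. Below is a sketch, demonstrated on a small scratch file so it is safe to run anywhere; for the real card, substitute ‘if=/dev/sdb’ (and sudo) as in the command above. Sending a running ‘dd’ process the USR1 signal makes it print the records and bytes copied so far to stderr; newer versions of GNU dd also accept a ‘status=progress’ option.

```shell
# Demonstration on a scratch file; use if=/dev/sdb (with sudo) for the real card.
src=$(mktemp)
head -c 8388608 /dev/zero > "$src"    # 8 MB stand-in for the card

dd if="$src" of=/dev/null bs=64K &    # start the copy in the background
ddpid=$!
kill -USR1 "$ddpid" 2>/dev/null       # dd prints records/bytes copied so far
wait "$ddpid"                         # block until the copy completes
rm -f "$src"
```

On a multi-gigabyte card the copy runs for many minutes, so you can repeat the ‘kill -USR1’ whenever you want another status line.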
4) Unmount and unlock the original SDHC card. Mount the new SDHC card. It should mount in the same place.
5) Reverse the process by copying the backup file, ‘sdhc-card-bu.bin’, to the new SDHC card. Again, this can take a while and there is no progress bar. The command I used was:
sudo dd if=~/sdhc-card-bu.bin of=/dev/sdb
Using ‘dd’ backs up and restores the entire SDHC card, partitions and all. I was able to insert the cloned card into a brand new Raspberry Pi and boot it up without any problems.
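Before trusting a clone, it is worth verifying the copy byte-for-byte with checksums. A quick sketch, demonstrated here on scratch files so it runs anywhere; the same two commands work against ‘/dev/sdb’ and ‘~/sdhc-card-bu.bin’ (reading the device requires sudo).

```shell
# Scratch stand-ins for the card and the backup image
card=$(mktemp)
backup=$(mktemp)
head -c 1048576 /dev/urandom > "$card"

dd if="$card" of="$backup" bs=64K 2>/dev/null   # same copy as the backup step

# Matching checksums mean the backup is bit-for-bit identical to the card
sha256sum "$card" "$backup"
cmp -s "$card" "$backup" && echo "clone verified"

rm -f "$card" "$backup"
```

If the new card is slightly larger than the image, compare only the image-sized prefix, for example with ‘cmp -n’.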
Obviously, there are some things you may want to change on a cloned Raspberry Pi. For example, you should change the cloned Raspberry Pi’s host name, so it doesn’t conflict with the original Raspberry Pi on the network. This is easily done:
sudo nano /etc/hostname
sudo /etc/init.d/hostname.sh start
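On Raspbian, the hostname also appears in ‘/etc/hosts’, which should be kept in sync with ‘/etc/hostname’. Below is a sketch of the rename using ‘sed’ instead of Nano; it runs against scratch copies of the two files so it is safe to try anywhere. On a real Pi, point the commands at the actual files and run them with sudo. The hostnames are, of course, just examples.

```shell
old=raspberrypi
new=raspberrypi2

# Scratch copies standing in for /etc/hostname and /etc/hosts
tmp=$(mktemp -d)
echo "$old" > "$tmp/hostname"
printf '127.0.0.1\tlocalhost\n127.0.1.1\t%s\n' "$old" > "$tmp/hosts"

# Rename the host in both files so they stay in sync
sed -i "s/\b$old\b/$new/g" "$tmp/hostname" "$tmp/hosts"

cat "$tmp/hostname" "$tmp/hosts"
rm -rf "$tmp"
```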
Also, changing the cloned Raspberry Pi’s user password is a wise idea for both security and sanity, especially if you have more than one Pi on your network. This guarantees you know which one you are logging into. This is easily done using the ‘passwd’ command:
passwd
Use C++ with OpenCV and cvBlob to perform image processing and object tracking on the Raspberry Pi, using a webcam.
Source code and compiled samples are now available on GitHub. The below post describes the original code on the ‘Master’ branch. As of May 2014, there is a revised and improved version of the project on the ‘rev05_2014’ branch, on GitHub. The README.md details the changes and also describes how to install OpenCV, cvBlob, and all dependencies!
As part of a project with a local FIRST Robotics Competition (FRC) team, I’ve been involved in developing a Computer Vision application for use on the Raspberry Pi. Our FRC team’s goal is to develop an object tracking and target acquisition application that could be run on the Raspberry Pi, as opposed to the robot’s primary embedded processor, a National Instruments NI cRIO-FRC II. We chose to work in C++ for its speed. We also decided to test two popular open-source Computer Vision (CV) libraries, OpenCV and cvBlob.
Due to its single ARM1176JZF-S 700 MHz processor, a significant limitation of the Raspberry Pi is its ability to perform complex operations in real-time, such as image processing. In an earlier post, I discussed using Motion to detect motion with a webcam on the Raspberry Pi. Although the Raspberry Pi was capable of running Motion, it required a greatly reduced capture size and frame-rate. And even then, the Raspberry Pi’s ability to process the webcam’s feed was very slow. I had doubts it would be able to meet the processor-intense requirements of this project.
Development for the Raspberry Pi
Using C++ in NetBeans 7.2.1 on Ubuntu 12.04.1 LTS and 12.10, I wrote several small pieces of code to demonstrate the Raspberry Pi’s ability to perform basic image processing and object tracking. Parts of the following code are based on several OpenCV and cvBlob code examples found in my research. Many of those examples are linked at the end of this article. cvBlob examples are especially hard to find.
There are five files: ‘main.cpp’, ‘testfps.cpp (testfps.hpp)’, and ‘testcvblob.cpp (testcvblob.hpp)’. The main.cpp file’s main method calls the test methods in the other two files. The cvBlob library only works with the pre-2.0 OpenCV API. Therefore, I wrote all the code using the older objects and methods; the code is not written using the newer OpenCV 2.0 conventions. For example, cvBlob uses 1.0’s ‘IplImage’ image type instead of 2.0’s newer ‘CvMat’ image type. My next project is to re-write the cvBlob code to use OpenCV 2.0 conventions and/or find a newer library. The cvBlob library offered so many advantages that I felt forgoing the newer OpenCV 2.0 features was still worthwhile.
Main Program Method (main.cpp)
Tests 1-2 (testcvblob.hpp)
Tests 1-2 (testcvblob.cpp)
Tests 2-6 (testfps.hpp)
Tests 2-6 (testfps.cpp)
Compiling Locally on the Raspberry Pi
After writing the code, the first big challenge was cross-compiling the native C++ code, written on Intel IA-32 and 64-bit x86-64 processor-based laptops, to run on the Raspberry Pi’s ARM architecture. After failing to successfully cross-compile the C++ source code using crosstools-ng, mostly due to my lack of cross-compiling experience, I resorted to using g++ to compile the C++ source code directly on the Raspberry Pi.
First, I had to properly install the various CV libraries and the compiler on the Raspberry Pi, which itself is a bit daunting.
Compiling OpenCV 2.4.3 from the source code on the Raspberry Pi took an astounding 8 hours. Even though compiling the C++ source code takes longer on the Raspberry Pi, I could be assured the compiled code would run locally. Below are the commands that I used to transfer and compile the C++ source code on my Raspberry Pi.
Copy and Compile Commands
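The original copy-and-compile commands were embedded in a gist that has not survived in this copy of the post. As a rough, hypothetical sketch of what the transfer and local build look like (the host name, paths, and the ‘-lcvblob’ linker flag are my assumptions, not the original commands):

```shell
# From the laptop: copy the five source files to the Pi (host/paths illustrative)
scp main.cpp testfps.cpp testfps.hpp testcvblob.cpp testcvblob.hpp pi@raspberrypi:~/

# On the Pi: compile with g++, linking OpenCV (via pkg-config) and cvBlob
g++ main.cpp testfps.cpp testcvblob.cpp -o TestFps \
    $(pkg-config --cflags --libs opencv) -lcvblob
```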
Special Note About cvBlob on ARM
At first I had given up on cvBlob working on the Raspberry Pi. All the cvBlob tests I ran, no matter how simple, continued to hang on the Raspberry Pi after working perfectly on my laptop. I had narrowed the problem down to the ‘cvLabel’ method, but was unable to resolve it. However, I recently discovered a documented bug on the cvBlob website. It concerned cvBlob and the very same ‘cvLabel’ method on ARM-based devices (ARM = Raspberry Pi!). After making a minor modification to cvBlob’s ‘cvlabel.cpp’ source code, as directed in the bug post, and re-compiling on the Raspberry Pi, the tests worked perfectly.
Testing OpenCV and cvBlob
The code contains three pairs of tests (six total), as follows:
- OpenCV (w/ live webcam feed)
Determine if OpenCV is installed and functioning properly with the compiled C++ code. Capture a webcam feed using OpenCV, and display the feed and frame rate (fps).
- OpenCV (w/o live webcam feed)
Same as Test #1, but only print the frame rate (fps). The computer doesn’t need to display the video feed to process the data. More importantly, displaying the webcam’s feed might unnecessarily tax the computer’s processor and GPU.
- OpenCV and cvBlob (w/ live webcam feed)
Determine if OpenCV and cvBlob are installed and functioning properly with the compiled C++ code. Detect and display all objects (blobs) in a specific red color range, contained in a static jpeg image.
- OpenCV and cvBlob (w/o live webcam feed)
Same as Test #3, but only print some basic information about the static image and the number of blobs detected. Again, the computer doesn’t need to display the video feed to process the data.
- Blob Tracking (w/ live webcam feed)
Detect, track, and display all objects (blobs) in a specific blue color range, along with the largest blob’s positional data. Captured with a webcam, using OpenCV and cvBlob.
- Blob Tracking (w/o live webcam feed)
Same as Test #5, but only display the largest blob’s positional data. Again, the computer doesn’t need to display the webcam feed to process the data. The feed unnecessarily taxes the computer’s processor, which is already consumed with detecting and tracking the blobs. The blob’s positional data is sent to the robot and used by its targeting system to position its shooting platform.
There are two ways to run this program. First, from the command line you can call the application and pass in three parameters. The parameters include:
- Test method you want to run (1-6)
- Width of the webcam capture window in pixels
- Height of the webcam capture window in pixels.
An example would be ‘./TestFps 2 640 480’ or ‘./TestFps 5 320 240’.
The second method is to run the program without passing in any parameters. In that case, the program will prompt you to input the test number and other parameters on-screen.
Test 1: Laptop versus Raspberry Pi
Test 3: Laptop versus Raspberry Pi
Test 5: Laptop versus Raspberry Pi
Each test was first run on two Linux-based laptops, with Intel 32-bit and 64-bit architectures, and with two different USB webcams. The laptops were used to develop and test the code, as well as provide a baseline for application performance. Many factors can dramatically affect the application’s ability to do image processing. They include the computer’s processor(s), RAM, HDD, GPU, USB, Operating System, and the webcam’s video capture size, compression ratio, and frame-rate. There are significant differences in all these elements when comparing an average laptop to the Raspberry Pi.
Frame-rates on the Intel processor-based Ubuntu laptops easily performed at or beyond the maximum 30 fps rate of the webcams, at 640 x 480 pixels. On a positive note, the Raspberry Pi was able to compile and execute the tests of OpenCV and cvBlob (see bug noted at end of article). Unfortunately, at least in my tests, the Raspberry Pi could not achieve more than 1.5 – 2 fps at most, even in the most basic tests, and at a reduced capture size of 320 x 240 pixels. This can be seen in the first and second screen-grabs of Test #1, above. Although I’m sure there are ways to improve the code and optimize the image capture, the results were much too slow to provide accurate, real-time data to the robot’s targeting system.
Links of Interest
Static Test Images Free from: http://www.rgbstock.com/
Great Website for OpenCV Samples: http://opencv-code.com/
Another Good Website for OpenCV Samples: http://opencv-srf.blogspot.com/2010/09/filtering-images.html
cvBlob Code Sample: https://code.google.com/p/cvblob/source/browse/samples/red_object_tracking.cpp
Detecting Blobs with cvBlob: http://8a52labs.wordpress.com/2011/05/24/detecting-blobs-using-cvblobs-library/
Best Post/Script to Install OpenCV on Ubuntu and Raspberry Pi: http://jayrambhia.wordpress.com/2012/05/02/install-opencv-2-3-1-and-simplecv-in-ubuntu-12-04-precise-pangolin-arch-linux/
Measuring Frame-rate with OpenCV: http://8a52labs.wordpress.com/2011/05/19/frames-per-second-in-opencv/
OpenCV and Raspberry Pi: http://mitchtech.net/raspberry-pi-opencv/
Want to keep an eye on your home or business while you’re away? Maybe observe wildlife close-up without disturbing them? Or, keep an eye on your kids playing in the backyard? Low-end wireless IP cameras start at $50-$75 USD. Higher-end units can run into the hundreds of dollars. Add motion detection and the price rises even further. How about a lower-cost solution? Using a Raspberry Pi with an inexpensive webcam, a wireless WiFi module, and an optional battery pack, you can have a remote, motion-activated camera solution at a fraction of the cost. Best of all, you won’t need to write a single line of code or hack any electronics to get started.
There are many posts on the Internet, demonstrating how to build a Raspberry Pi-powered motion-activated camera system. One of the more frequently used off-the-shelf applications for these projects is Motion. According to their website, ‘Motion is a program that monitors the video signal from one or more cameras and is able to detect if a significant part of the picture has changed; in other words, it can detect motion‘. Motion uses a technique known as visual motion detection (VMD) to compare a series of sequential camera frames for differences at a pixel level. A change between a series of sequential frames is an indication of movement.
Motion has the ability to stream images from a webcam and serve them from its built-in web server, with little or no configuration. In addition, Motion is easily configured to work with streaming video applications like the very popular FFmpeg, and to save images to databases like MySQL or PostgreSQL. Motion can also execute external scripts such as Python or shell. In this post, we are going to use Motion’s most basic features, motion detection and web-streaming.
Before installing Motion, I recommend ensuring your Raspberry Pi is up-to-date with the latest software and firmware. Updating firmware is not strictly necessary. However, I was recently helping someone with a camera issue on their Raspberry Pi. Finding a few suggestions online for similar problems, we updated the firmware on the Raspberry Pi, and it fixed the problem. Updating firmware can sound a bit intimidating. However, Liam McLoughlin (hexxeh) has made the process easy with rpi-update. I have used it successfully on multiple Raspberry Pis. Three commands is all it takes to update your Raspberry Pi to the latest firmware.
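For reference, the three commands looked roughly like the following. This is a sketch based on hexxeh’s instructions at the time; the download URL may well have changed, so check the rpi-update project page on GitHub for the current procedure before running it.

```shell
# Fetch the rpi-update script and make it executable
sudo wget https://raw.github.com/Hexxeh/rpi-update/master/rpi-update -O /usr/bin/rpi-update && sudo chmod +x /usr/bin/rpi-update
# Download and install the latest firmware
sudo rpi-update
# Reboot to load the new firmware
sudo reboot
```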
You should also update your Raspberry Pi’s existing software. To update your Raspberry Pi’s software, execute the following apt-get commands:
sudo apt-get update && sudo apt-get upgrade
If you don’t do this on a regular basis, as recommended, these commands could take up to several minutes. Watch for errors. If there are any errors, try running the commands again. Sometimes the Raspberry Pi cannot connect to all the code repositories for updates.
Once the updates are complete, install Motion by issuing the following command:
sudo apt-get install motion
As the installation completes, you should see a warning in the command shell about Motion being disabled by default.
...
Adding user `motion' to group `video' ...
Adding user motion to group video
Done.
[warn] Not starting motion daemon, disabled via /etc/default/motion ... (warning).
Setting up ffmpeg (6:0.8.4-1) ...
pi@garyrasppi ~ $
To enable Motion (the motion daemon), we need to edit the ‘/etc/default/motion’ file:
sudo nano /etc/default/motion
Change the ‘start_motion_daemon‘ parameter to ‘yes’.
Motion is easy to customize with loads of parameters you can tweak based on your needs. Motion has no GUI; all configuration is done through Motion’s configuration file (/etc/motion/motion.conf). Before editing the configuration file, we need to change the permissions on it, so Motion can get access to it. While we are at it, we will also change permissions on the folder where Motion stores captured images.
sudo chmod -R 777 /etc/motion/motion.conf
sudo chmod -R 777 /tmp/motion
After changing the permissions, to configure Motion, open Motion’s configuration file in a text editor, as root (sudo). I like using Nano. The configuration file can be opened in Nano with the following command:
sudo nano /etc/motion/motion.conf
Motion’s configuration file is lengthy. However, it is broken down into logical sections, making it easy to find the setting you are looking for. First, we need to change the ‘Live Webcam Server’ section of the configuration. Below are the default settings:
############################################################
# Live Webcam Server
############################################################

# The mini-http server listens to this port for requests (default: 0 = disabled)
webcam_port 8081

# Quality of the jpeg (in percent) images produced (default: 50)
webcam_quality 50

# Output frames at 1 fps when no motion is detected and increase to the
# rate given by webcam_maxrate when motion is detected (default: off)
webcam_motion off

# Maximum framerate for webcam streams (default: 1)
webcam_maxrate 1

# Restrict webcam connections to localhost only (default: on)
webcam_localhost on

# Limits the number of images per connection (default: 0 = unlimited)
# Number can be defined by multiplying actual webcam rate by desired number of seconds
# Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
webcam_limit 0
The first thing you will want to change is Motion’s default setting that restricts image streaming to ‘localhost‘ only (‘webcam_localhost on‘). This means you can only view images in a web browser on the Raspberry Pi, not remotely over your network. Change that line to read ‘webcam_localhost off‘.
The next setting I recommend changing for security purposes is the default port Motion’s web server uses to stream images, 8081. Security through obscurity is better than no security at all. Change port 8081 to a different arbitrary port, for example, 6789 (‘webcam_port 6789‘). Just make sure you don’t pick a port already in use by another service or application. Having made this change, if your Raspberry Pi’s local IP address is 192.168.1.9, images from the webcam should be accessible at 192.168.1.9:6789.
The other two settings in this section you can play with are the webcam quality and maximum frame-rate. You will have to adjust these based on your network speed and the processing power of your Raspberry Pi. The default settings are a good place to start. I changed my quality from the default of 50 to 80 (‘webcam_quality 80‘), and changed my max frame-rate to 2 (‘webcam_maxrate 2‘).
Speaking of quality, the other two settings you may want to change are the width and height of the image being captured by Motion. The ‘Capture device options’ section is where we change these settings. As the configuration’s comments suggest, these settings are camera dependent. Check the camera’s available image sizes; you will need to use one of those size combinations. I have mine set to an average size of 352 x 288. This is a good size for those of us with a slower network, or when streaming video over the Internet to a mobile web browser. Conversely, a larger image is better for viewing over your local network.
Image size, like compression quality and frame-rate, is dependent on the processing power of your Raspberry Pi and its OS (Raspbian, Debian, Arch, etc.). You may need to play with these settings to get the desired results. I couldn’t stream images larger than 352 x 288 over the Internet with my Raspberry Pi, even though my webcam could capture up to 640 x 480 pixels.
# Image width (pixels). Valid range: Camera dependent, default: 352
width 352

# Image height (pixels). Valid range: Camera dependent, default: 288
height 288
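All of the edits above can also be applied with ‘sed’ rather than Nano. A sketch, run here against a scratch copy seeded with the default values, so it is safe to try anywhere; on the Pi, point ‘conf’ at ‘/etc/motion/motion.conf’ and run the ‘sed’ commands with sudo. The values match the ones discussed above; pick your own port.

```shell
conf=$(mktemp)   # scratch copy; use /etc/motion/motion.conf on the Pi
cat > "$conf" <<'EOF'
webcam_port 8081
webcam_quality 50
webcam_maxrate 1
webcam_localhost on
width 352
height 288
EOF

sed -i 's/^webcam_localhost .*/webcam_localhost off/' "$conf"   # allow remote viewing
sed -i 's/^webcam_port .*/webcam_port 6789/'          "$conf"   # arbitrary, non-default port
sed -i 's/^webcam_quality .*/webcam_quality 80/'      "$conf"   # jpeg quality in percent
sed -i 's/^webcam_maxrate .*/webcam_maxrate 2/'       "$conf"   # max streaming frame-rate

cat "$conf"
rm -f "$conf"
```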
It’s important to remember, each time you make changes to Motion’s configuration file, you must restart Motion, using the following command.
sudo /etc/init.d/motion restart
Viewing Your Webcam Remotely
To view your webcam’s output from another device on your local network, point your web browser to the IP address of your Raspberry Pi, and add the port you assigned in Motion’s configuration file. Motion may take up to 15-20 seconds to start responding in the browser. If it takes longer, you probably have your image size, frame-rate, and compression settings set too high for your Raspberry Pi.
Over the Internet
Enabling your webcam’s output over the Internet is relatively easy with the average home router and Internet service provider. Suppose the IP address of my Raspberry Pi, on my local network, is 192.168.1.9. Suppose I assigned port 6789 to Motion’s web server. Lastly, suppose my router’s external Internet IP address is 220.127.116.11. With this information, I can create a port-forwarding rule in my router, allowing all external HTTP traffic over TCP to 220.127.116.11:3456 to be automatically forwarded internally to 192.168.1.9:6789. The external port, 3456, is totally arbitrary, just make sure you don’t pick a port already in use.
IMPORTANT SECURITY NOTE: There are no passwords or other network protection used with this method. Make sure to keep the external IP address and port combination private, and always stop Motion, or better yet your Raspberry Pi, when not in use. Otherwise, someone could potentially be watching you!
Down at the local coffee shop, I decide to check if the mailman has delivered my new Raspberry Pi to the front porch. Having set up port-forwarding, I enter 220.127.116.11:3456 in my smartphone’s web browser. My Internet provider routes the HTTP request to my Internet router. My router receives the request and forwards it over my local network to 192.168.1.9:6789, where Motion’s built-in web server on my Raspberry Pi is running. Motion’s web server responds by streaming still images back to my phone at the coffee shop when it detects motion. Still no sign of the mailman or my Raspberry Pi…
Static IP Addresses
I recommend using a static IP address for your Raspberry Pi, versus DHCP, if possible. Otherwise, you will have to change your router’s port-forwarding rules each time your Raspberry Pi’s DHCP lease is renewed and its local IP address changes. There are some ways to prevent addresses from changing frequently with DHCP, if your router supports them. Look for configurable lease times or reservation options in your router’s configuration; extending the lease, or reserving an address for the Pi, keeps its address stable.
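On the Debian/Raspbian releases of this era, a static address is configured in ‘/etc/network/interfaces’. A sketch, using the example addresses from this post; your interface name, netmask, and gateway may differ, and newer Raspbian releases configure this in ‘/etc/dhcpcd.conf’ instead.

```
# /etc/network/interfaces -- replace the 'iface eth0 inet dhcp' line with:
auto eth0
iface eth0 inet static
    address 192.168.1.9
    netmask 255.255.255.0
    gateway 192.168.1.1
```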
Locating Your External Internet IP Address
What is your router’s external Internet IP address? To find mine, I looked in Netgear’s Router Status window. You can also use a ‘tracert’ from the command line, if you know what to look for in the output.
Since I do not pay my Internet provider for a static external Internet IP address, the address my provider assigns to my router is dynamic. It can and will change; the frequency of change depends on your provider, from almost never to daily. To view your webcam’s images, you will need to know your router’s current external Internet IP address.
Here are some examples from a Microsoft LifeCam VX-500 and a Logitech Webcam C210 webcam. The highest quality I could consistently stream over the Internet, from my Raspberry Pi 512Mb Model B, with both Soft-float Debian “wheezy” and Raspbian “wheezy”, was 352 x 288 at 80% compression and 2 fps max. Locally on my LAN, I could reach a frame size of 640 x 480 pixels.
In the first example, I’ve placed the Raspberry Pi in a plastic container to protect it, and mounted the webcam in a flower box. Viewing the feed over my local network, we are able to watch the hummingbirds without scaring them.
In the next two images, I’ve turned on Motion’s ‘locate box’ option, which outlines the exact area within the image that is moving. As the person comes into view of the camera mounted near the front door, Motion detects and outlines the area of the image where it detects movement.
In the next video, you see the view from a Google Nexus 7 tablet. My wife and I use the Raspberry Pi surveillance system to watch our backyard when our kids are outside (the camera is no substitute for adult supervision when the kids are in the pool).
This last image is from my iPhone, while shopping at the local grocery store. My wife was impressed with my port-forwarding knowledge. OK, not really, but she did enjoy showing off the Christmas tree to friends, remotely, even if it wasn’t in motion.
Here are a few links to other useful articles on the use of Motion with the Raspberry Pi:
motion(1) – Linux man page (good source for understand Motion config)
Linux UVC Supported Devices (a good starting point for buying a webcam)
Sometimes connecting a keyboard, mouse, and monitor to a Raspberry Pi is really inconvenient. But what’s the alternative if you want to interact directly with your Raspberry Pi’s GUI? PuTTY is an excellent SSH client, but the command shell is no substitute. WinSCP is an excellent SFTP client, but again, no substitute for a fully-functional GUI. The answer to this predicament? TightVNC, by GlavSoft LLC.
According to TightVNC Software’s website, ‘TightVNC is a free remote control software package. With TightVNC, you can see the desktop of a remote machine and control it with your local mouse and keyboard, just like you would do it sitting in the front of that computer.‘
What is VNC? According to Wikipedia, ‘Virtual Network Computing (VNC) is a graphical desktop sharing system that uses the RFB protocol (remote framebuffer) to remotely control another computer. It transmits the keyboard and mouse events from one computer to another, relaying the graphical screen updates back in the other direction, over a network.‘
If you are a Windows user, you are no doubt familiar with Microsoft’s Remote Desktop Connection (RDC). GlavSoft’s TightVNC and Microsoft’s RDC are almost identical in terms of functionality.
TightVNC has two parts, the client and the server. The TightVNC Server software is installed on the Raspberry Pi (RaspPi). The RaspPi acts as the TightVNC Server. The client software, the TightVNC Java Viewer, is installed on a client laptop or desktop computer.
I used PuTTY from my Windows 8 laptop to perform the following installation and configuration. I successfully performed this process on a RaspPi Model B, with copies of both Raspbian “wheezy” and Soft-float Debian “wheezy”.
To install the TightVNC Server software, run the following commands from the RaspPi. The first command is optional, but usually recommended before installing new software.
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install tightvncserver
To test the success of the TightVNC Server installation, enter ‘vncserver‘ in the command shell. The first time you run this command, you will be asked to set a VNC password for the current user (‘pi’). The password can be different than the system password used by this user. After inputting a password, you should see output similar to the below screen grab. This indicates that TightVNC is running.
By default, TightVNC runs on port 5901. To verify TightVNC is running on 5901, enter the command ‘sudo netstat -tulpn‘. You should see output similar to the screen grab below. Note the entry for TightVNC on port 5901. Stop TightVNC by entering the ‘vncserver -kill :1‘ command.
You may have noticed TightVNC was also running on port 6001. This is actually used by the X Window System, aka ‘X11’. A discussion of X11 is out of scope for this post, but more info can be found here.
For TightVNC Server to start automatically when we boot up our RaspPi, we need to create an init script and add it to the default runlevels. I had a lot of problems with this part until I found this post, with detailed instructions on how to perform these steps.
Start by entering the following command to create the init script:
sudo nano /etc/init.d/tightvncserver
Copy and paste the init script from the above post into this file. Change the user from ‘pi’ if your user is different. Save and close the file.
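For reference, here is a minimal sketch of what such an init script looks like. The script in the linked post is more complete; the display number, geometry, and color depth below are assumptions you can adjust.

```
#!/bin/sh
### BEGIN INIT INFO
# Provides:          tightvncserver
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start TightVNC server at boot
### END INIT INFO

# Run the VNC server as the desktop user, not as root
USER=pi

case "$1" in
  start)
    su - "$USER" -c '/usr/bin/vncserver :1 -geometry 1024x768 -depth 24'
    echo "Started TightVNC server for $USER"
    ;;
  stop)
    su - "$USER" -c '/usr/bin/vncserver -kill :1'
    echo "Stopped TightVNC server"
    ;;
  *)
    echo "Usage: /etc/init.d/tightvncserver {start|stop}"
    ;;
esac

exit 0
```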
Next, execute these two commands to add the script to the default runlevels:
sudo chmod 755 /etc/init.d/tightvncserver sudo update-rc.d tightvncserver defaults
To complete the TightVNC Server installation, restart the RaspPi.
TightVNC Java Viewer
According to the website, TightVNC Java Viewer is a fully functional remote control client written entirely in Java. It can work on any computer where Java is installed. It requires Java SE version 1.6 or any later version. That can be Windows or Mac OS, Linux or Solaris — it does not make any difference. And it can work in your browser as well. On the client computer, download and unzip the TightVNC Java Viewer. At the time of this post, the current TightVNC Java Viewer version was 2.6.2.
Once the installation is complete, double-click on the ‘tightvnc-jviewer.jar’ file. Running the Java jar file will bring up the ‘New TightVNC Connection’ window, as seen in the example below. Input the RaspPi’s IP address or hostname, and the default TightVNC port of 5901. The use of SSH tunneling is optional with the TightVNC Viewer. If you are concerned about security, use SSH.
Clicking the ‘Connect’ button, you are presented with a window to input the user’s VNC password.
Optionally, if using SSH, the user’s SSH password is required. Again, the same user can have different SSH and VNC passwords, as mine does.
If everything was installed and configured correctly, you should be presented with a TightVNC window displaying the RaspPi’s desktop. Note the TightVNC toolbar along the top edge of the window. The ‘Ctrl’ and ‘Alt’ buttons are especially useful for sending either of these two key inputs to the RaspPi from a Windows client. Using the ‘Set Options’ button, you can change the quality of TightVNC’s remote display. Note that these changes can affect performance.
Congratulations, no more connecting a keyboard, mouse, and monitor to your RaspPi to access the GUI. I suggest reading the documentation on the TightVNC website, as well as the ‘README.txt’ file included with the TightVNC Java Viewer. There is a lot more to TightVNC than I have covered in this brief introductory post, especially in the README.txt file. -gs
So, you finally got your Raspberry Pi? Congratulations. It might only cost $35, but if you’re like many of us, you likely waited a long time for your RaspPi to show up in the mail. The first thing you ought to do is protect your new RaspPi with an enclosure. Although the Raspberry Pi is still fairly new, there is no shortage of accessories, including enclosures.
Priced anywhere on average from $10 – $40 USD, RaspPi enclosures (case, container, box) come in all shapes, sizes, colors, and materials. Most entry-level enclosures are made of clear acrylic or colorized plastic. They may require some basic assembly. Slightly better, mid-range enclosures are often made of the same materials, but are manufactured with a better fit and finish than the entry-level enclosures. Many mid-level enclosures come with screws to secure the case and your RaspPi. Others have special treatments, such as a Raspberry Pi logo die-cut from acrylic. The highest-end enclosures include exotic materials such as wood or metal. These can run as much as $75 or more. This kind of defeats the purpose of the ‘low-cost’ computer for the masses, but I admit, the milled aluminum case is pretty cool.
In this post, I will review three entry-level RaspPi enclosures. These are often the first cases many of us buy, or receive as part of a RaspPi starter kit. One of the enclosures is acrylic, the other two are plastic; all three cost less than $20. The enclosures include:
- Pi Sandwich Enclosure by Bud Industries
- Pi Box by Adafruit Industries
- A1Clear Pi Container by HungryPi.com
Pi Sandwich Enclosure by Bud Industries
The lowest cost enclosure of the three is the Bud Industries’ Pi Sandwich Enclosure. This item sells for an unbelievably low $6.39 at Newark. I purchased it as part of a special bundle, offered to attendees of Rob Bishop’s Raspberry Pi workshop at the recent ARM Processor Symposium at RIT.
Pros:
- Very inexpensive;
- Large, lots of space around RaspPi to connect cables and attachments;
- Cool mathematical Pi symbol on top of red, translucent enclosure.
Cons:
- Does not secure RaspPi well with its small plastic grippers;
- Difficult to remove certain cables and attachments from RaspPi while in case;
Pi Box by Adafruit Industries
The Pi Box by Adafruit Industries retails for $14.95. It’s also included as part of several RaspPi starter kits, including the Adafruit Starter Pack, sold by Adafruit and Newark. I liked everything about this case, except one thing. The six pieces of acrylic that make up the enclosure have a very loose fit. Although it’s intentional, I feel it makes an otherwise handsome case feel cheap. Aside from the ‘fit’, it’s a really nice, inexpensive, clear acrylic enclosure. I especially like the etched Adafruit logo and port names on the case.
Pros:
- Small footprint;
- Good fit around ports, easy to plug and unplug cables;
- Clear acrylic design with etching;
- GPIO ribbon-cable opening on side vs. top;
Cons:
- Loose fitting parts;
- Peeling off all the paper protecting the clear acrylic;
- Must disassemble into six separate pieces to remove RaspPi.
A1Clear Pi Container by HungryPi.com
The A1Clear Pi Container by HungryPi.com retails for $19.95 from Amazon, where I purchased mine. It’s a two-piece, clear plastic enclosure. Similar to the Adafruit case, above, I loved everything about this case except two small things: the ridiculously small opening for the power cable, and the narrow top opening for the GPIO ribbon-cable. My stock power cable was not able to connect to the RaspPi at all before I ‘modded’ the case (see below). Otherwise it’s a really nice, relatively inexpensive, plastic enclosure. I prefer the tighter fit and smaller profile of this case, along with the clear plastic that shows off the RaspPi.
Pros:
- Small footprint;
- Holds the RaspPi securely;
- Clear plastic design;
Cons:
- Impossibly small opening for power cable (easy to fix);
- Narrow opening for GPIO ribbon-cable;
- Difficult to remove RaspPi from enclosure once secured.
Increasing the size of the power cable and GPIO ribbon-cable openings was very simple. Being a hard-plastic case, a small cut with a hack saw and a quick couple of passes with a good file did the job. I enlarged the power opening considerably and only slightly expanded the width of the GPIO cable opening. I am much happier after taking less than five minutes to modify the case to suit my needs.
Although I would like to upgrade to a better-quality Pibow layered case at some point, for now I’m happy with my modified A1Clear Pi Container. For the price, any of these three enclosures will protect your RaspPi. Some just look better than others, and some make it easier to interact with the RaspPi itself. I suggest reading the reviews on Amazon and other manufacturers’ sites, as well as blogs like this, before making a decision on an enclosure for your RaspPi.
All photos copyright Gary A. Stafford, 2012.