Time-lapse photography with a Raspberry Pi

My first Raspberry Pi project was to use it for time-lapse photography during a snowstorm this week.  The storm dumped over a foot of snow on much of the northeast, and about 9 inches at my house.  The video below was my second attempt (the first is here).

On the Raspberry Pi side, this was pretty simple.  I was just using the Pi as a timed remote shutter trigger, nothing very fancy. I would have preferred to use gphoto2, because it would allow a lot more interesting things, like camera control, on-board processing, uploading photos in real time, and so on.  It was not working for me, however. I got errors like this:

*** Error ***
PTP I/O error

*** Error ***
An error occurred in the io-library ('Unspecified error'): No error description available

This is apparently a known issue, and may have to do with Raspberry Pi device support, though it also looks like this gphoto issue. This blog gets around it by resetting the serial connection, but I couldn’t get that to work consistently.  I tried compiling the latest version of gphoto2 on the Raspberry Pi as well, but that didn’t fix my problem.

Some people have triggered the shutter release directly with the GPIO pins (or even embedded a Raspberry Pi in their camera), but I already had a serial port cable I had built, so I hooked it up to a USB-serial adapter and used that.

Triggering the pin is just a matter of setting RTS high. In python:

import serial
import time

s = serial.Serial('/dev/ttyUSB0')
s.setRTS(True)
time.sleep(0.2)   # hold RTS high long enough to register as a shutter press
s.setRTS(False)

Timing seems to matter; sleeping 0.1 s does not trigger.  For longer exposures, I had to sleep a bit longer to trigger consistently.  (Here’s the simple script I actually used.)
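
A minimal version of the loop might look like this (the helper names and the frame count are mine, not from the script I linked; shoot() takes any object with a setRTS() method, so it can be tested without the hardware attached):

```python
import time

PULSE = 0.2  # seconds to hold RTS high; 0.1 s was too short to trigger

def shoot(port, pulse=PULSE):
    """Fire the shutter by pulsing RTS on a serial-port-like object."""
    port.setRTS(True)
    time.sleep(pulse)
    port.setRTS(False)

def timelapse(port, interval, n_frames):
    """Take n_frames shots, one every `interval` seconds."""
    for _ in range(n_frames):
        shoot(port)
        time.sleep(interval - PULSE)

# Usage with pyserial, shooting every 5 minutes:
#   import serial
#   timelapse(serial.Serial('/dev/ttyUSB0'), interval=300, n_frames=288)
```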

Stitching the resulting images together into a movie is straightforward with ffmpeg or avconv.  The only trick I found was that you need to renumber the images to start at 1, at least with the version of avconv I have on my Ubuntu machine.  I used 10 frames per second, with the frames taken 5 minutes apart.  A slower framerate than 10 fps seems too jumpy to my eyes.  I would probably increase the shooting rate by a factor of 2 or more next time for more flexibility.
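
The renumbering is easy to script.  A sketch in Python (the img-%04d.jpg naming and the avconv flags in the comment are illustrative, not necessarily what your version expects):

```python
import glob
import os

def renumber(src_dir, dst_dir, pattern='*.jpg'):
    """Symlink frames as img-0001.jpg, img-0002.jpg, ... starting at 1,
    since some avconv/ffmpeg versions require image sequences to start
    at frame number 1."""
    os.makedirs(dst_dir, exist_ok=True)
    frames = sorted(glob.glob(os.path.join(src_dir, pattern)))
    for i, path in enumerate(frames, start=1):
        os.symlink(os.path.abspath(path),
                   os.path.join(dst_dir, 'img-%04d.jpg' % i))
    return len(frames)

# Then something like:
#   avconv -r 10 -i img-%04d.jpg timelapse.mp4
```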

Another relevant site for shooting with Linux is tethered shooting in Ubuntu [ed.: site down, mirror here].  This has a bit about gphoto2 and uses Darktable as an interactive application.  If I actually get gphoto2 working, I may try something like that.

Eventually, I am interested in using the Raspberry Pi for astrophotography.  Triggering exposures is a start, but even more interesting would be using it to drive autoguiding and telescope control.  It’s a great little platform, even if device support is sometimes frustrating.

Reprocessed Milky Way image

I’ve made my first attempt to process astronomical images from raw camera images in python.  I made this composite image from the same 15 raw exposures that went into this image on Flickr, which was stacked with IRIS.  There is a lot more detail visible in the image below than in the original.  I’m not claiming that my methods are inherently better at this point, or that they can do everything IRIS can, but I have full control over every step of the process.  The biggest difference between the two images is that I’ve done a better job of removing the uneven background in the image below.

Cygnus Milky Way

Stack of 15 90-second exposures in the light-polluted sky of northern New Jersey, tracked with a motorized barn-door tracking mount.

The processing steps so far are:

  1. Convert from raw to FITS with cr2fits
  2. Align images with alipy
  3. Stack images as numpy arrays
  4. Fit 2nd-order polynomial to sky background
  5. Export to 8-bit tiff image
  6. Adjust color levels in GIMP
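
Steps 3–5 above are only a few lines of numpy.  A minimal sketch (the function names are mine, not from my script; the background fit is an ordinary least-squares fit of the six quadratic terms):

```python
import numpy as np

def stack_mean(frames):
    """Step 3: average a list of aligned 2-D image arrays."""
    return np.mean(np.stack(frames), axis=0)

def fit_sky_background(img):
    """Step 4: least-squares fit of a 2nd-order polynomial surface
    a + b*x + c*y + d*x^2 + e*x*y + f*y^2, returned evaluated over
    the full frame so it can be subtracted off."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x = xx.ravel() / nx          # normalize coordinates for conditioning
    y = yy.ravel() / ny
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, _, _, _ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ coeffs).reshape(ny, nx)

def to_8bit(img):
    """Step 5: clip and rescale to 0-255 for export as 8-bit data."""
    lo, hi = np.percentile(img, (0.1, 99.9))
    scaled = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return (255 * scaled).astype(np.uint8)
```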

A few of the items on my to do list are:

  1. Calibration of raw images
  2. Stack images with drizzle
  3. Export to 16-bit tiff (which can be used by the GIMP 2.10 development branch)

There is still quite a bit of work to do to clean up the code and turn my rough script into something more robust, flexible, and maintainable.  I will certainly post my code at some point as well.

Aligning astronomical images with alipy

I was out taking some pictures of the night sky last week trying to capture the Geminid meteor shower. I shot a lot of frames from a fixed tripod, and caught at least 6 meteors. My best frame is on Flickr, with minimal processing so far. I haven’t processed and stacked the whole set yet (and I’m not sure it would be worth doing), but I did make a quick movie (about 90 minutes compressed down to a few seconds, so the meteors go by fast!).

Working with these images got me thinking about my image processing tools again, and I revisited the next part I wanted to get working: alignment of multiple images. It looks like alipy has progressed quite a bit since I last looked at it, with a version 2.0 release and a much neater architecture. I was able to get it running on some raw files converted to FITS pretty quickly by following the tutorial, and it does a nice job of picking out stars and correlating them between images. That’s true even for the wide-angle fixed tripod shots, with a fair bit of lens distortion and some trailing stars to make it a bit harder. I haven’t quite gotten to the point where I can directly stack the images, since there is a significant amount of distortion left in the frames, but the center of the frame looks pretty good.  The default settings of geomap that alipy uses assume a fairly simple transformation, but more complicated ones are possible.
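
The simplest of those transformations is conceptually easy: given matched star positions in two frames, solve for the transform in a least-squares sense.  A numpy sketch of the affine case (this is the idea, not alipy’s or geomap’s actual code):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst,
    where src and dst are (N, 2) arrays of matched star positions."""
    src = np.asarray(src, float)
    A = np.column_stack([src, np.ones(len(src))])        # (N, 3)
    M, _, _, _ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return M.T                                           # (2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine transform to an (N, 2) array of points."""
    pts = np.asarray(pts, float)
    return np.column_stack([pts, np.ones(len(pts))]) @ M.T
```

With lens distortion in the frames, a plain affine fit like this leaves exactly the kind of residuals described below, which is why a higher-order transformation (or undistorting first) is needed.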

There are a couple of ways I can think of to tackle the distortion issue. Using a reference image that is in a flat coordinate system instead of one of the distorted images would help. Transforming all of the images to try to undo the barrel distortion of the lens would probably be useful as well. I would expect the residual distortion after these two steps to be a lot more manageable, and iraf.geomap should have an easier time with that.  IRAF does have a function that can correct for barrel distortion with a simplified 3rd-order polynomial function, though I don’t see it in my current IRAF installation.

After working out the distortion issue, the next step will be stacking. Here the complication will be working with the different color pixels in the raw image. I would love to get a drizzle algorithm working, but I don’t know yet how possible it will be to use DrizzlePac directly (it is intended for Hubble Space Telescope images, not wide angle terrestrial cameras, and the challenges are rather different even if the algorithm is the same).

I also found a new open source python tool that targets amateur astronomers with DSLRs: arcsecond. It is PyGTK-based, and I haven’t had a chance to try it out yet, but it looks interesting.  Finally, I also came across a sample script that uses SExtractor directly, which may be useful for comparison with alipy’s methodology.

More to come as I have time.  I’d be happy to hear from anyone else exploring the possibilities of python tools for DSLR astrophotography.

Thinking about building a subwoofer

A recent visit to a friend who builds and sells high-fidelity speakers
has got me thinking of starting a project to build a subwoofer. My
home theater/stereo system has needed a subwoofer for a long time, and
building a subwoofer would be a (relatively) simple project.

The thing is, there are a number of apparently decent subwoofers on the
market for under $500, including models from BIC America.

My guess is that I could do a bit better than the BIC America for
about the same cost in parts if I build it myself. But I don’t know
if I could do that much better, and building a decent enclosure would
take a fair amount of time and effort.

My first step in this direction was to write a simple modeling program
to model the frequency response of a given driver as a function of the
volume of its enclosure. There are free tools and online calculators
that do this for you, but I wanted to understand the tradeoffs, and
see the response change interactively. I used the equations found at
The Subwoofer DIY Page, and my results seem to be in line with other
tools, although I haven’t systematically verified them.
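
The sealed-box case is compact enough to show here.  This is a sketch of the kind of calculation my program does, not the program itself: the standard closed-box model treats the system as a 2nd-order high-pass filter whose corner frequency and Q follow from the driver’s Thiele-Small parameters and the box volume.

```python
import math

def sealed_box_response(f, fs, qts, vas, vb):
    """Response in dB at frequency f [Hz] for a driver with free-air
    resonance fs [Hz], total Q qts, and compliance volume vas [liters]
    in a sealed enclosure of vb liters (standard closed-box model)."""
    alpha = vas / vb                    # compliance ratio
    fc = fs * math.sqrt(1 + alpha)     # closed-box resonance frequency
    qtc = qts * math.sqrt(1 + alpha)   # closed-box system Q
    r = (f / fc) ** 2
    # magnitude of a 2nd-order high-pass filter
    return 20 * math.log10(r / math.sqrt((1 - r) ** 2 + r / qtc ** 2))
```

Since both fc and Qtc scale with sqrt(1 + Vas/Vb), shrinking the box raises the corner frequency: that is the basic tradeoff in which a smaller sealed box gives tighter response but gives up low-end extension.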

sample view of my subwoofer modeling program

My modeling shows that I could get very good low frequency response
from a 12″ Infinity 1260W that costs under $60 in a ported enclosure
with a volume of about 80 liters (e.g., plans for a similar-sized enclosure). A sealed enclosure would have about half the volume, and
probably more precise sound reproduction, but would not reach such low
frequencies. Then I would need an amplifier, and to design and build
the enclosure itself.

I haven’t decided yet whether I will take the next step and follow
through on designing and building a subwoofer. On one hand, I think
it would be a fun and rewarding project. On the other, I’ve got
plenty of things to spend my time on, and I could just buy a
subwoofer. I’m still torn.

Astrophotography and python

I’ve recently gotten out to take some wide-field astrophotography images for the first time in a long while. In processing the images (a series of 27 1-minute exposures for the most recent set), I’ve gotten to thinking again about how best to process my image data. There are a lot of tools available, but there seems to be a fairly large division between the amateur and professional options. There are some very good amateur tools out there, including the free IRIS (which I use) and Deep Sky Stacker (DSS), and the commercial PixInsight and Nebulosity. They all have some drawbacks for me: my main home computer runs Linux, and the first two are Windows-only.  Although I am able to run IRIS under Wine, it is not ideal, and I haven’t gotten DSS to work completely on my Linux machine, either in Wine or in a VirtualBox Windows XP install. Nebulosity looks good and runs on Mac, but not Linux; PixInsight is truly cross-platform, but expensive.

The other option is to explore professional tools for astronomical processing. The astronomical community largely uses Python nowadays, with tools including PyRAF and PyFITS for processing image data. Since I am a scientist by training (though not an astronomer), and now a full-time Python developer, this route is appealing to me. I’ve spent a little time investigating possibilities, and it seems that surprisingly few amateurs are using these Python tools. Most of the processing steps will be very similar. The first major difference I could see is in the sensor data. Professional sensors are generally monochrome CCDs, with color filters that may be applied. Amateur imaging (including mine) is usually done with a digital SLR, which has a color CMOS sensor. The quality and resolution of these sensors are very good, but they have the color filter built into the sensor: your 10-megapixel camera is really taking 2.5 million red, 2.5 million blue, and 5 million green samples in a Bayer array.  That’s fine, and there are lots of tools for processing raw files from DSLRs, but they almost always interpolate and scale the pixels while converting them. To use the image as raw sensor data, you want to get it from the camera directly and process it before it is altered.
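
In numpy terms the Bayer mosaic is just four interleaved sub-grids, so pulling the color planes out of the raw sensor array is plain slicing.  A sketch assuming an RGGB layout (the actual pattern varies by camera model):

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a raw sensor array into its four Bayer sub-images,
    assuming an RGGB layout: R G / G B repeating in 2x2 blocks."""
    return {
        'R':  raw[0::2, 0::2],
        'G1': raw[0::2, 1::2],
        'G2': raw[1::2, 0::2],
        'B':  raw[1::2, 1::2],
    }
```

For a 10-megapixel sensor, this yields exactly the 2.5 million red, 5 million green, and 2.5 million blue samples mentioned above.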

Calibrated sensor image

The only attempt to convert Canon raw (.CR2) files to FITS files for astronomy that I found was cr2fits, which uses dcraw to do the conversion.  That worked, but the FITS files were interpolated and scaled.  Luckily, dcraw has options to output the raw 12-bit sensor data, unscaled and uninterpolated, and I added that ability in a fork of cr2fits.  Now I am able to load the sensor values into numpy arrays and manipulate them in python.  I’ve converted them to floating point arrays and done a simple calibration with the “dark” images I took the same night.  A portion of a calibrated image is shown here, as raw grayscale values and as a colorized version showing the Bayer array.
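
With the sensor values in numpy arrays, the dark calibration itself is a few lines.  A minimal sketch (function names are mine; the frames are assumed to be already loaded as arrays):

```python
import numpy as np

def master_dark(darks):
    """Median-combine dark frames (taken with the lens cap on, same
    night and exposure) to build a master dark."""
    return np.median(np.stack([d.astype(np.float64) for d in darks]), axis=0)

def calibrate(light, dark):
    """Subtract the master dark from a light frame, in floating point
    so negative noise excursions are not clipped."""
    return light.astype(np.float64) - dark
```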

Colorized sensor image


The next step will be to figure out how to do alignment and stacking. Some tools that may help include alipy (which uses PyRAF) and astrometry.net, which has downloadable software in addition to their blind astronomy solver on Flickr.

Finding Comet Garradd

The past couple weekends I’ve been out in the early morning to capture some photographs of Comet Garradd, now high in the northern sky.  Something I’ve wanted for a long time is a simple tool to tell me where an object will be when it gets dark (or before dawn twilight).  There are plenty of online calculators and astronomy software packages that can do this, but I just wanted something simple.  Recently I started playing with PyEphem, a python interface to the excellent XEphem astronomical routines, and wrote a simple script to calculate the times of sunrise/sunset, the various stages of twilight, and the positions of the moon and the comet I was interested in.  The output looks like this:

Sun Feb 26 14:42:52 2012
Moon phase: 24% full

Comet C/2009 P1 (Garradd) mag 7.1 in Draco: RA 15:47:07.94 dec 63:46:48.6
 Sunset:                       Sun Feb 26 17:44:38 2012
 Civil twilight ends:          Sun Feb 26 18:12:44 2012
 Nautical twilight ends:       Sun Feb 26 18:43:46 2012
 Astronomical twilight ends:   Sun Feb 26 19:15:31 2012
 Moonset:                      Sun Feb 26 22:42:50 2012
 Astronomical twilight starts: Mon Feb 27 05:04:02 2012
 Comet Garradd transit:        Mon Feb 27 05:17:39 2012
 Nautical twilight starts:     Mon Feb 27 05:35:44 2012
 Civil twilight starts:        Mon Feb 27 06:06:42 2012
 Sunrise:                      Mon Feb 27 06:34:44 2012
 Moonrise:                     Mon Feb 27 09:00:17 2012

alt 17°, az 13° at end of astronomical twilight
alt 67°, az 4° at start of astronomical twilight
altitude at transit: 67°

It does just what I need, no more, no less.  The script can easily be adjusted for other locations and objects.  Eventually it would be nice to build some more flexible tools to tell you what interesting objects are visible at the end or start of twilight.
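
PyEphem does all the heavy lifting, but the altitude numbers at the bottom of the output are just standard spherical astronomy.  As a cross-check (this is not part of the script), altitude follows from the observer’s latitude and the object’s declination and hour angle:

```python
import math

def altitude(lat_deg, dec_deg, ha_deg):
    """Altitude in degrees of an object at declination dec_deg, seen
    from latitude lat_deg, when its hour angle is ha_deg (0 at transit).
    Standard spherical astronomy formula."""
    lat, dec, ha = (math.radians(v) for v in (lat_deg, dec_deg, ha_deg))
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    sin_alt = max(-1.0, min(1.0, sin_alt))  # guard against rounding
    return math.degrees(math.asin(sin_alt))

# At transit (hour angle 0) from about 41 N, the comet at dec +63.8
# sits at roughly 90 - (63.8 - 41) = 67 degrees, matching the output.
```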

The script itself is here:


Preliminary results from this morning are below.  The stacked image follows the stars, which shows the motion of the comet during the 30 minute interval in which the exposures were taken.

cropped view of Comet Garradd

What Rocks?

Screenshot of geologic map from an iPhone on the road south of Chicago

I’ve updated my interactive geologic maps to use on a mobile device with geolocation to map the bedrock around you.  The result is a mobile app that is a useful accompaniment to the Roadside Geology series, or any geological explorations. I’ve done some testing on my holiday road trip through the Midwest, and it seems to work well enough.

So far, it only includes the northeastern states from Maine to Minnesota, and along the East Coast as far south as Virginia.  I have the data for the lower 48 states, and will get it updated as soon as I get the bugs worked out and figure out how to get it within the Fusion Tables size limits. The most obvious remaining glitch is that some very large polygons (notably in Ohio) don’t show up at all, I think because they contain too many points.

I’ve also added Texas, disconnected from the other available states, since I’ll spend the next week there, mostly in west Texas and the Big Bend area. Lots of interesting geology there!

Feel free to test it on a mobile browser, and let me know how it works for you.

Link: What rocks?

Geologic map of the Virginia earthquake

I made a quick interactive geologic map of Virginia and the surrounding states to show the context of the M5.8 earthquake that occurred yesterday. The map uses Google Fusion Tables and USGS GIS data, as in my previous map of geology of the northeastern states.

Screenshot of geologic map of area surrounding the earthquake. Click the image for the interactive version.

You can toggle bedrock and fault layers with the controls at upper right, and click on rock units for more information.

A very nice summary of the geologic context of the earthquake is at Callan Bentley’s Mountain Beltway blog.

Virginia Earthquake Intensity Map

The magnitude 5.8 earthquake felt up and down the East coast today generated a lot of interest.  The best source of information, as always, is the USGS, who had updated information about the quake all day.  One of the interesting projects they have been doing for a while is crowdsourcing observations of intensity of earthquakes with their Did You Feel It? page.  The map for this event seemed a little sparse, so I wrote a simple GMT script to download, smooth, and replot the data from the USGS site.

A couple comments on this figure: over 12,000 people reported that they had felt it within a few hours, and the numbers look higher than the instrument-measured intensities you can find in the official Shakemap (visualized nicely in TileMill).  I don’t remember noticing that for California earthquakes.  If true, it might be a result of excitable Easterners overestimating how strong the shaking actually was. (Note: I haven’t actually compared these values with California examples.)

The script that generated the plot is here:

Git clone of GMT

Now that GMT has moved to SVN for source management, it is much easier to clone in git. I had tried with the previous CVS and git cvsimport, but it was awkward and unreliable. Now a full git clone of the GMT5 SVN repository is as simple as:

% git svn clone svn://pohaku.soest.hawaii.edu/gmt5/trunk gmt5

This takes a while (over an hour?) to run for the first time. Thereafter, updates are faster and a matter of running:

% git svn fetch ; git svn rebase

to get up to date. I like to keep a clean GMT5 clone in /usr/local/GMTsvn/gmt5 on a workstation and clone that (which works fairly quickly) to my laptop for a local build. From a cloned build copy, I just run

% git pull
% make
% make install-gmt

So far, this has worked well for me, and I am building GMT5 on both OS X and 64-bit Ubuntu. On Ubuntu there was a minor issue with FFT libraries: by default, the build assumed it should use the Accelerate framework from OS X. I took that out by editing configure.in and it was fine. It looks like the FFT libraries happen to be in transition in the GMT5 source at the moment (configure --enable-fftw doesn’t work, for example), so I expect this will be resolved soon.