Time lapse photography with a Raspberry Pi

My first Raspberry Pi project was to use it for time lapse photography during a snowstorm this week.  The storm dumped over a foot of snow on much of the northeast, and about 9 inches at my house.  The video below was my second attempt (the first is here).

On the Raspberry Pi side, this was pretty simple.  I was just using the Pi as a timed remote shutter trigger, nothing very fancy.  I would have preferred to use gphoto2, because it would allow a lot more interesting things, like camera control, on-board processing, and uploading photos in real time.  I couldn't get it working, however; it kept failing with errors like this:

*** Error ***
PTP I/O error

*** Error ***
An error occurred in the io-library ('Unspecified error'): No error description available

This is apparently a known issue, and may have to do with Raspberry Pi device support, though it also looks a lot like this gphoto issue.  This blog gets around it by resetting the serial connection, but I couldn't get that to work consistently.  I also tried compiling the latest version of gphoto2 on the Raspberry Pi, but that didn't fix the problem.

Some people have triggered the shutter release directly with the GPIO pins (or even embedded a Raspberry Pi in their camera), but I already had a serial port cable I had built, so I hooked it up to a USB-serial adapter and used that.

Triggering the shutter is just a matter of setting RTS high briefly. In Python:

import serial
import time

s = serial.Serial('/dev/ttyUSB0')   # the USB-serial adapter
s.setRTS(True); time.sleep(0.2); s.setRTS(False)   # pulse RTS to fire the shutter

Timing seems to matter: sleeping for 0.1 s does not trigger the shutter, and for longer exposures I had to sleep a bit longer to trigger consistently.  (Here’s the simple script I actually used.)
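
In essence the script is just that pulse inside a loop; here is a minimal sketch (not the exact script linked above; the 0.2 s pulse and 5-minute interval match what I describe, the rest is illustrative):

import time
import serial

INTERVAL = 5 * 60      # seconds between frames
PULSE = 0.2            # RTS pulse length; 0.1 s was too short to trigger

s = serial.Serial('/dev/ttyUSB0')
while True:
    s.setRTS(True)     # raise RTS to fire the shutter
    time.sleep(PULSE)
    s.setRTS(False)
    time.sleep(INTERVAL - PULSE)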

Stitching the resulting images together into a movie is straightforward with ffmpeg or avconv.  The only trick I found was that you need to renumber the images to start at 1, at least with the version of avconv on my Ubuntu machine.  I used 10 frames per second, with the frames taken 5 minutes apart.  Anything slower than 10 fps looks too jumpy to my eyes; next time I would probably shoot twice as often, or more, for extra flexibility.
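
For reference, a rough sketch of that renumber-and-stitch step, assuming JPEG frames named like IMG_0001.JPG (the file names and output settings are assumptions; the 10 fps rate is what I actually used):

import glob
import shutil
import subprocess

# Copy the frames to a sequence that starts at 1, which avconv expects.
for i, name in enumerate(sorted(glob.glob('IMG_*.JPG')), start=1):
    shutil.copy(name, 'frame%04d.jpg' % i)

# Assemble the movie at 10 frames per second.
subprocess.check_call(['avconv', '-r', '10', '-i', 'frame%04d.jpg', 'timelapse.mp4'])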

Another relevant site for shooting with Linux is tethered shooting in Ubuntu.  It has a bit about gphoto2 and uses Darktable as the interactive application.  If I ever get gphoto2 working, I may try something like that.

Eventually, I am interested in using the Raspberry Pi for astrophotography.  Triggering exposures is a start, but even more interesting would be using it to drive autoguiding and telescope control.  It’s a great little platform, even if device support is sometimes frustrating.

Posted in photography, Python | Leave a comment

Reprocessed Milky Way image

I’ve made my first attempt at processing astronomical images from raw camera files in Python.  I made the composite image below from the same 15 raw exposures that went into this image on Flickr, which was stacked with IRIS.  There is a lot more detail visible in the new version than in the original.  I’m not claiming that my methods are inherently better at this point, or that they can do everything IRIS can do, but I do have full control over every step of the process.  The biggest difference between the two images is that I’ve done a better job of removing the uneven background in the version below.

Cygnus Milky Way

A stack of 15 90-second exposures taken under the light-polluted sky of northern New Jersey. Tracked with a motorized barn door tracking mount.

The processing steps so far are:

  1. Convert from raw to FITS with cr2fits
  2. Align images with alipy
  3. Stack images as numpy arrays
  4. Fit a 2nd-order polynomial to the sky background (steps 3 and 4 are sketched below)
  5. Export to an 8-bit TIFF image
  6. Adjust color levels in GIMP
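
A minimal sketch of steps 3 and 4, assuming the aligned frames are already loaded as a list of 2-D numpy arrays called frames (in practice the sky samples should be masked or sigma-clipped so that stars don't bias the fit):

import numpy as np

frames = [np.random.rand(500, 700) for _ in range(15)]   # stand-in for the 15 aligned frames

# Step 3: average-stack the aligned frames.
stack = np.mean(frames, axis=0)

# Step 4: fit a 2nd-order polynomial surface to sky samples on a coarse grid.
ny, nx = stack.shape
y, x = np.mgrid[0:ny, 0:nx]
step = 64
xs = x[::step, ::step].ravel().astype(float)
ys = y[::step, ::step].ravel().astype(float)
zs = stack[::step, ::step].ravel()
A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs*ys, ys**2])
coeffs = np.linalg.lstsq(A, zs)[0]

# Evaluate the fitted background over the full frame and subtract it.
xf, yf = x.ravel().astype(float), y.ravel().astype(float)
B = np.column_stack([np.ones_like(xf), xf, yf, xf**2, xf*yf, yf**2])
flattened = stack - B.dot(coeffs).reshape(stack.shape)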

A few of the items on my to do list are:

  1. Calibration of raw images
  2. Stack images with drizzle
  3. Export to 16-bit TIFF (which can be used by the GIMP 2.10 development branch)

There is still quite a bit of work to do to clean up the code and turn my rough script into something more robust, flexible, and maintainable.  I will certainly post the code at some point.

Posted in Astronomy, Python | 2 Comments

Aligning astronomical images with alipy

I was out taking some pictures of the night sky last week trying to capture the Geminid meteor shower. I shot a lot of frames from a fixed tripod, and caught at least 6 meteors. My best frame is on Flickr, with minimal processing so far. I haven’t processed and stacked the whole set yet (and I’m not sure it would be worth doing), but I did make a quick movie (about 90 minutes compressed down to a few seconds, so the meteors go by fast!).

Working with these images got me thinking about my image processing tools again, and I revisited the next piece I wanted to get working: alignment of multiple images. alipy has progressed quite a bit since I last looked at it, with a version 2.0 release and a much cleaner architecture. I was able to get it running pretty quickly on some raw files converted to FITS by following the tutorial, and it does a nice job of picking out stars and correlating them between images. That’s true even for these wide-angle, fixed-tripod shots, where a fair bit of lens distortion and some trailing stars make the matching harder. I haven’t quite gotten to the point where I can directly stack the images, since there is a significant amount of distortion left in the frames, but the center of the frame looks pretty good.  The default settings of geomap that alipy uses assume a fairly simple transformation, but more complicated ones are possible.
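
The core of it looks roughly like this, following the alipy 2.0 tutorial (the file names are placeholders, and I'm quoting the API from memory, so check the tutorial):

import glob
import alipy

ref_image = 'frames/reference.fits'
images_to_align = sorted(glob.glob('frames/light_*.fits'))

# Detect stars in each frame and match them against the reference.
identifications = alipy.ident.run(ref_image, images_to_align, visu=False)

# Resample each successfully matched frame onto the reference grid.
outputshape = alipy.align.shape(ref_image)
for ident in identifications:
    if ident.ok:
        alipy.align.irafalign(ident.ukn.filepath, ident.uknmatchstars,
                              ident.refmatchstars, shape=outputshape, makepng=False)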

There are a couple of ways I can think of to tackle the distortion issue. Using a reference image in a flat coordinate system, instead of one of the distorted frames, would help. Transforming all of the images to undo the barrel distortion of the lens would probably be useful as well. I would expect the residual distortion after those two steps to be a lot more manageable, and iraf.geomap should have an easier time with it.  IRAF does have a task that can correct barrel distortion with a simplified 3rd-order polynomial, though I don’t see it in my current IRAF installation.

After working out the distortion issue, the next step will be stacking. Here the complication will be working with the different color pixels in the raw image. I would love to get a drizzle algorithm working, but I don’t know yet how feasible it will be to use DrizzlePac directly (it is intended for Hubble Space Telescope images, not wide-angle terrestrial cameras, and the challenges are rather different even if the algorithm is the same).

I also found a new open-source Python tool that targets amateur astronomers with DSLRs: arcsecond. It is PyGTK-based, and I haven’t had a chance to try it out yet, but it looks interesting.  Finally, I came across a sample script that uses SExtractor directly, which may be useful for comparison with alipy’s approach.

More to come as I have time.  I’d be happy to hear from anyone else exploring the possibilities of python tools for DSLR astrophotography.

Posted in Astronomy, Python | 1 Comment

Thinking about building a subwoofer

A recent visit to a friend who builds and sells high-fidelity
speakers has me thinking about a project to build a subwoofer. My
home theater/stereo system has needed a subwoofer for a long time,
and building one would be a (relatively) simple project.

The thing is, there are a number of apparently decent subwoofers
on the market for under $500; the BIC America model I looked at is
one example.

My guess is that I could do a bit better than the BIC America for
about the same cost in parts if I built it myself. But I don’t know
if I could do that much better, and building a decent enclosure would
take a fair amount of time and effort.

My first step in this direction was to write a simple program to
model the frequency response of a given driver as a function of its
enclosure volume. There are free tools and online calculators that
do this, but I wanted to understand the tradeoffs and see the
response change interactively. I used the equations found at The
Subwoofer DIY Page, and my results seem to be in line with other
tools, although I haven’t verified them systematically.
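
As a flavor of what the model computes, here is a minimal sketch
of the sealed-box case, using the standard closed-box equations
(the ported case I modeled for the 1260W is a 4th-order alignment
and needs a few more terms). The driver parameters below are
illustrative placeholders, not the Infinity 1260W's actual specs.

import numpy as np

# Placeholder Thiele-Small parameters (NOT the Infinity 1260W's real specs):
fs = 25.0     # driver free-air resonance, Hz
Qts = 0.40    # driver total Q
Vas = 120.0   # equivalent compliance volume, liters

def sealed_response_db(f, Vb):
    # Closed-box model: the driver in a box of Vb liters behaves as a
    # 2nd-order high-pass filter with resonance fc and system Q of Qtc.
    alpha = Vas / Vb
    fc = fs * np.sqrt(1.0 + alpha)
    Qtc = Qts * np.sqrt(1.0 + alpha)
    x2 = (f / fc) ** 2
    mag2 = x2 ** 2 / ((1.0 - x2) ** 2 + x2 / Qtc ** 2)
    return 10.0 * np.log10(mag2)

f = np.array([20.0, 30.0, 40.0])      # frequencies of interest, Hz
for Vb in (40.0, 80.0):               # compare two enclosure volumes
    print(Vb, sealed_response_db(f, Vb))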

sample view of my subwoofer modeling program

My modeling shows that I could get very good low-frequency response
from a 12″ Infinity 1260W that costs under $60, in a ported
enclosure with a volume of about 80 liters (e.g., plans for a
similarly sized enclosure). A sealed enclosure would need only about
half the volume and would probably give more precise sound
reproduction, but it would not reach such low frequencies. I would
also need an amplifier, and I would have to design and build the
enclosure itself.

I haven’t decided yet whether I will take the next step and follow
through on designing and building a subwoofer. On one hand, I think
it would be a fun and rewarding project. On the other, I’ve got
plenty of things to spend my time on, and I could just buy a
subwoofer. I’m still torn.

Posted in Electronics, Python | Leave a comment

Astrophotography and python

I’ve recently gotten out to take some wide-field astrophotography images for the first time in a long while. In processing them (a series of 27 one-minute exposures for the most recent set), I’ve been thinking again about how best to handle my image data. There are a lot of tools available, but there is a fairly sharp division between the amateur and professional options. There are some very good amateur tools out there, including the free IRIS (which I use) and Deep Sky Stacker (DSS), and the commercial PixInsight and Nebulosity. They all have drawbacks for me: my main home computer runs Linux, and the first two are Windows-only.  I am able to run IRIS under Wine, though it is not ideal, and I haven’t gotten DSS to work at all, either in Wine or in a VirtualBox Windows XP install. Nebulosity looks good and runs on Mac, but not Linux, and PixInsight is truly cross-platform, but expensive.

The other option is to explore the professional tools for astronomical processing. The astronomical community largely uses Python these days, with tools like PyRAF and PyFITS for working with image data. Since I am a scientist by training (though not an astronomer), and now a full-time Python developer, this route appeals to me. I’ve spent a little time investigating the possibilities, and surprisingly few amateurs seem to be using these Python tools. Most of the processing steps will be very similar. The first major difference I could see is in the sensor data. Professional sensors are generally monochrome CCDs, with color filters that may be applied. Amateur imaging (including mine) is usually done with a digital SLR, which has a color CMOS sensor. The quality and resolution of these sensors are very good, but the color filter is built into the sensor: your 10 megapixel camera is really taking 2.5 million red, 2.5 million blue, and 5 million green samples in a Bayer array.  That’s fine, and there are lots of tools for processing raw files from DSLRs, but they almost always interpolate and scale the pixels during conversion; to use the image as raw sensor data, you want to get it from the camera directly and process it before it is altered.
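
Getting at the individual color samples is then just numpy slicing on the unaltered sensor array; a small sketch, assuming an RGGB pattern (the actual layout varies by camera):

import numpy as np

raw = np.zeros((2592, 3888), dtype=np.uint16)   # stand-in for a ~10 megapixel sensor array
red    = raw[0::2, 0::2]     # one red sample per 2x2 Bayer cell
green1 = raw[0::2, 1::2]     # two green samples per cell...
green2 = raw[1::2, 0::2]
blue   = raw[1::2, 1::2]     # ...and one blue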

Calibrated sensor image

The only existing attempt to convert Canon raw (.CR2) files to FITS for astronomy that I found was cr2fits, which uses dcraw to do the conversion.  That worked, but the FITS files were interpolated and scaled.  Luckily, dcraw has options to output the raw 12-bit sensor data, unscaled and uninterpolated, and I added that ability in a fork of cr2fits.  Now I can load the sensor values into numpy arrays and manipulate them in Python.  I’ve converted them to floating-point arrays and done a simple calibration with the “dark” images I took the same night.  A portion of a calibrated image is shown here, as raw grayscale values and as a colorized version showing the Bayer array.

Colorized sensor image


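The calibration itself is about as simple as it sounds; a rough sketch with pyfits, using placeholder file names (median-combine the darks, then subtract from each light frame):

import glob
import numpy as np
import pyfits

# Build a master dark from the individual dark frames.
darks = [pyfits.getdata(f).astype(np.float64) for f in sorted(glob.glob('dark_*.fits'))]
master_dark = np.median(darks, axis=0)

# Subtract it from a light frame and write the calibrated result.
light = pyfits.getdata('light_0001.fits').astype(np.float64)
pyfits.writeto('calib_0001.fits', light - master_dark, clobber=True)
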
The next step will be to figure out how to do alignment and stacking. Some tools that may help include alipy (which uses PyRAF) and astrometry.net, which has downloadable software in addition to its blind astrometry solver on Flickr.

Posted in Astronomy, Python | 5 Comments