Lytro, Inc., a technology spin-off company founded by Ren Ng, has been in the news recently with the announcement of a refocusable camera: take one “image”, and change where the focal plane lies after the fact. This is illustrated in the images above, generated from a single shot with the prototype camera; as you move from left to right across the sequence you can see the focus shifting from the front left of the image to the back right. I saw this work a few years ago at the mighty SIGGRAPH conference; it comes out of the relatively new field of “computational photography”.
All photography is computational to a degree. In the past the computation was done using lenses and chemicals: different chemical mixes and processing times led to different colour effects in the final image. Nowadays we can do things digitally, or in new combinations of physical and digital.
These days your digital camera will already be doing significant computation on any image. The CCD sensor in a camera is fundamentally a photon-counting device – it doesn’t know anything about colour. Colour is obtained by putting a Bayer mask over the sensor, a cunning array of red, green and blue filters. It requires computation to unravel the effect of this filter array and make a colour image. Your camera will also make a white balance correction to take account of the lighting colour. Finally, the manufacturer may apply image sharpening and colour enhancement; since colour is a remarkably complex thing, there is a range of choices about how to present measured colours. These days compact cameras often come with face recognition, a further level of computation.
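To give a flavour of the “unravelling” step, here is a minimal sketch of bilinear demosaicing for an assumed RGGB Bayer layout. The function name and the normalised-convolution approach are my own illustration, not what any particular camera actually does – real cameras use considerably fancier interpolation.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (illustrative only).

    raw: 2D array of sensor counts behind an RGGB Bayer mask:
        even rows: R G R G ...   odd rows: G B G B ...
    Returns an (H, W, 3) RGB image.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Masks marking which sensor pixels sit behind each colour filter.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    kernel = np.ones((3, 3))
    for c, mask in enumerate([r_mask, g_mask, b_mask]):
        chan = np.where(mask, raw, 0.0)
        # Average the known neighbours of each pixel to fill the gaps.
        num = convolve2d(chan, kernel, mode="same")
        den = convolve2d(mask.astype(float), kernel, mode="same")
        est = num / np.maximum(den, 1e-9)
        # Keep the directly measured samples, interpolate the rest.
        rgb[..., c] = np.where(mask, raw, est)
    return rgb
```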
The Lytro system works by placing a microlens array in the optical train: the prototype device (described here) used a 296×296 array of lenses focusing onto a 16 million pixel medium format CCD chip, just short of 40mm×40mm in size. The array of microlenses means that for each pixel on the sensor you can work out the direction in which the light was travelling, rather than just where it landed. For this reason this type of photography is sometimes called 4D or light-field photography. The 4 dimensions are the 2 dimensions locating where on the sensor the photon lands, and the direction in which it travels, described by another two dimensions. Once you have this truckload of data you can start doing neat tricks, such as changing the aperture and focal position of the displayed image; you can even shift the image viewpoint.
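The basic refocusing trick is a “shift and add”: each direction through the main lens gives a slightly different sub-aperture view of the scene, and translating each view in proportion to its angular offset before averaging places the virtual focal plane elsewhere. A minimal sketch follows; the array layout, function name and the parameter `alpha` are my assumptions for illustration, not Lytro’s actual pipeline.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lightfield, alpha):
    """Shift-and-add synthetic refocusing of a 4D light field (a sketch).

    lightfield: 4D array L[u, v, y, x] -- one sub-aperture image per
        direction (u, v) through the main lens.
    alpha: controls the virtual focal plane (0 keeps the original focus;
        positive/negative values move it nearer or further away).
    """
    nu, nv, h, w = lightfield.shape
    uc, vc = (nu - 1) / 2.0, (nv - 1) / 2.0
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            # Translate each directional view in proportion to its
            # angular offset from the lens centre, then average them all.
            dy, dx = alpha * (u - uc), alpha * (v - vc)
            out += nd_shift(lightfield[u, v], (dy, dx), order=1)
    return out / (nu * nv)
```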
As well as refocusing, there are also potential benefits in being able to take images before accurate autofocus is achieved and then using computation to recover a focused image.
The work leading to Lytro was done by Ren Ng in Marc Levoy’s group at Stanford, home of the Stanford Multi-Camera Array, which dispenses with all that fiddly microlens stuff and simply straps together 100 separate digital video cameras! This area can also result in terrible things being done to innocent cameras: for example, in this work on deblurring images by fluttering the shutter, half a camera has been hacked off! Those involved have recognized this propensity and created the FrankenCamera.
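The flutter-shutter idea, in brief: ordinary motion blur convolves the scene with a box whose spectrum has deep zeros, making deconvolution hopeless, whereas opening and closing the shutter in a well-chosen binary code keeps the spectrum broadband and invertible. Here is a toy 1D sketch of that inversion (Wiener-style division in the Fourier domain); the function name and regularisation constant are mine, and this is nowhere near the published method’s sophistication.

```python
import numpy as np

def coded_deblur_1d(blurred, code):
    """Invert a coded-exposure motion blur along one row (toy example).

    blurred: 1D signal circularly convolved with the shutter code.
    code: binary open/close sequence of the fluttered shutter; its broad,
        flat spectrum is what makes the division below (just about) stable.
    """
    n = blurred.shape[-1]
    kernel = np.zeros(n)
    kernel[: len(code)] = np.asarray(code, float) / np.sum(code)
    K = np.fft.fft(kernel)
    # Small regulariser guards against division by near-zero frequencies.
    return np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(K) /
                               (np.abs(K) ** 2 + 1e-3)))
```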
Another example of computational photography is high dynamic range imaging. Normal digital images are acquired with a limited dynamic range: the ratio of the brightest thing to the darkest thing that can be shown in a single image. The way around this is to take multiple images at different exposures and then combine them. This seems to lead, rather often, to some rather “over-cooked” shots; however, that is a matter of taste, and fundamentally there is nothing wrong with the technique. The reason such processing occurs is that although we can capture very high dynamic range images, displaying them is tricky, so we have to look for techniques to squish the range down for viewing. There’s more on high dynamic range imaging here on the Cambridge in Colour website, which I recommend for good descriptions of all manner of things relating to photography.
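As a sketch of both steps – merging the bracketed exposures and squishing the range back down – here is a minimal version in Python. The weighting function, the assumption of linear images in [0, 1], and the simple global tone-map are all my illustrative choices, not any particular camera’s or tool’s algorithm.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge a bracketed exposure stack into a displayable image (a sketch).

    images: list of linear (gamma-removed) images of the same scene,
        values in [0, 1], taken with the given exposure_times (seconds).
    Each pixel's radiance estimate is a weighted average of
    image / exposure_time, trusting mid-range pixels and distrusting
    those near 0 (noisy) or 1 (saturated).
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = np.exp(-4.0 * (img - 0.5) ** 2)  # hat-like weight, peak at 0.5
        num += w * img / t
        den += w
    radiance = num / np.maximum(den, 1e-9)
    # Crude global tone-map to compress the range for display.
    return radiance / (1.0 + radiance)
```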
I’m not sure whether the Lytro camera will be a commercial success. Users of mass-market cameras are not typically after the type of depth-of-field effect shown at the top of the post (and repeated ad nauseam on the Lytro website). However, the system does offer other benefits, and it may be that it ultimately ends up in cameras without us really being aware of it. It’s possible Lytro will never make a camera, but will instead license the technology to big players like Canon, Panasonic or Nikon. As it stands we are part way through the journey from research demo to product.