TechCrunch’s Devin Coldewey has an early article about the Lytro camera, based on some early photographs made by Eric Cheng. He agrees that the camera itself is fascinating, but believes it’s more damaging to photography than beneficial.
Speaking from the perspective of a tech writer and someone interested in cameras, optics, and this sort of thing in general, I have to say the technology is absolutely amazing. But from the perspective of a photographer, I’m troubled. To start with, a large portion of the photography process has been removed — and not simply a technical part, but a creative part. There’s a reason focus is called focus and not something like “optical optimum” or “sharpness.” Focus is about making a decision as a photographer about what you’re taking a picture of. It’s clear that Ng is not of the same opinion: he describes focusing as “a chore,” and believes removing it simplifies the process. In a way, it does — the way hot dogs simplify meat. Without focus, it’s just the record of a bunch of photons. And saying it’s a revolution in photography is like saying dioramas are a revolution in sculpture.
I disagree with him. Of course this first version won’t offer much to professional photographers, but just as early digital cameras started out as toys and eventually became the mainstay of photography, so will computational cameras like the Lytro. The first offering is too limited and resolution-shy to be of much use to professionals, but as resolution climbs and people come up with more fascinating features that can be done with plenoptic photographs, I’m sure these cameras will get there.
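(For the curious: the core refocusing trick is conceptually simple. A plenoptic sensor records a grid of slightly offset sub-aperture views, and you refocus after the fact by shifting each view in proportion to its offset and averaging. Here is a minimal numpy sketch; the 4D array layout and the `alpha` depth parameter are my own illustrative assumptions, not Lytro’s actual format.)

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lightfield: array of shape (U, V, H, W) -- a U x V grid of
                sub-aperture views, each H x W (grayscale here
                for simplicity).
    alpha:      relative focal depth; 0 leaves the views aligned,
                larger magnitudes move the synthetic focal plane.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Translate each view in proportion to its offset from
            # the lens center; np.roll keeps the sketch short (real
            # code would interpolate sub-pixel shifts).
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

The point is that focus becomes just another parameter you sweep in software after the shot, which is exactly what has Coldewey worried.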
Yeah, this is about Computational Photography (with something other than an N900). It’s not about depth of field; that’s just something investors with no technical background can latch onto.
Assuming a video version isn’t too hard to make, the obvious application to me is a depth sensor like the Kinect, but one that can operate in sunlight (though it would probably fare worse in low light, and, like stereo depth sensors, it requires sufficient object texture).
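To make the depth-sensor idea concrete: with the same grid of sub-aperture views, you can estimate depth at each pixel by finding the shift at which the views agree most, essentially stereo matching with many tiny baselines, which is also why it needs texture. A rough sketch under the same assumed light-field layout as above:

```python
import numpy as np

def depth_map(lightfield, alphas):
    """Depth from correspondence over a 4D light field.

    For each candidate depth (alpha), shift every sub-aperture view
    into alignment and measure per-pixel variance across views; the
    depth where the views agree best (lowest variance) wins. Like
    stereo matching, this only works on textured surfaces.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    best_cost = np.full((H, W), np.inf)
    best_depth = np.zeros((H, W))
    for alpha in alphas:
        aligned = np.empty((U * V, H, W))
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - cu)))
                dv = int(round(alpha * (v - cv)))
                aligned[u * V + v] = np.roll(
                    lightfield[u, v], shift=(du, dv), axis=(0, 1))
        cost = aligned.var(axis=0)  # disagreement across views
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = alpha
    return best_depth
```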
There are some interesting artistic things to do with it, but the novelty would probably wear off quickly if used on a large scale.
The other path to acceptance is if it offers something useful for future tiny sensor pixels that are beyond the diffraction limit (other than oversampling). But I’m not sure about the optics there.
There was a comment on TechCrunch that already said it: blurry photos are more often due to motion blur and low light than to failure of the autofocus or manual focus, so the real money is in addressing those problems.
You’re right about low light and motion blur. However, with a plenoptic array you could easily have each lens calibrated for multiple light settings, and then use computational methods to create a better image than traditional methods would. Similarly with the motion-blur problem (an example I actually saw at SIGGRAPH 2010’s Computational Photography session): you could combine an accelerometer with the lens array to let the camera automatically remove motion blur, using depth data collected by neighboring cells in the lens together with motion data from the accelerometer. The results weren’t perfect, but they were a vast improvement on what would have been garbage photos.
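I don’t have the SIGGRAPH code, but the heart of that kind of deblurring is non-blind deconvolution: the accelerometer gives you the camera’s motion path, the depth data scales it into a per-pixel blur kernel, and a known kernel can be inverted. A toy Wiener-deconvolution sketch assuming a single known global kernel (a real scene would need a spatially varying, depth-dependent one):

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_power=0.01):
    """Non-blind deblurring with a known motion-blur kernel.

    blurred:     H x W grayscale image.
    kernel:      blur kernel (e.g. built from accelerometer samples
                 scaled by depth; here one global kernel for
                 simplicity).
    noise_power: Wiener regularization; larger values suppress
                 noise amplification at the cost of sharpness.
    """
    H, W = blurred.shape
    # Pad the kernel to image size, center it at the origin, and
    # take its spectrum.
    k = np.zeros((H, W))
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, shift=(-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.fft2(k)
    B = np.fft.fft2(blurred)
    # Wiener filter: K* / (|K|^2 + noise), applied in frequency space.
    deblurred = np.fft.ifft2(B * np.conj(K) / (np.abs(K) ** 2 + noise_power))
    return np.real(deblurred)
```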