The news in the photosphere this morning is a New York Times article that has hit our little world like a bombshell. It's been five years or so since I last had occasion to mention the plenoptic camera of Ren Ng in these precincts, and he and his associates have been plenty busy in the meantime, raising $50 million in venture capital and readying a commercial product! That was fast. The brand name is to be Lytro, and with five years' worth of Moore's Law since it was invented, it's getting more and more feasible all the time.
You know how you don't have to set white balance on the camera when you use RAW? You do it later, of course, in your photo editing program. Well, the Lytro does the same thing for focus and depth of field—shoot first, and select your desired plane of focus and desired depth of field later, at your leisure, on your computer. Here's the intro page to the original Stanford University paper (the .avi video at this link takes a while to load, I presume because of heavy demand, but it's pretty cool). Here's a brief Lytro promo video, and here's Lytro's website. (I'd recommend reading the Times article (first link) first.)
Back in 2006 I scribbled, "I'm no Nostradamus, but it's probably likely that this fascinating development will figure in the photography of the future." They're certainly getting closer. It could well be that discussions of "autofocus" and "manual focus" (and focus lag and everything else related to mechanical before-the-shot focusing) will soon be as quaint and old-fashioned as kerosene lamps. And without the need to let us find and set focus as we shoot, camera design will be freed from one of its principal constraints, so cameras themselves are likely to change radically, too.
Things like this can go either way, of course—there are plenty of examples in history of great ideas getting buried in the backwaters of proprietary protection, public indifference, or predatory suppression by more powerful competitors, and withering away there. I have a feeling this isn't going to be one of them, but who knows? It's certainly going to be interesting to watch the progress of Lytro's "Light Field Photography" from here on in.
Mike
(Thanks to Oren and many others)
Original contents copyright 2011 by Michael C. Johnston and/or the bylined author. All Rights Reserved.
Featured Comment by Ken Tanaka: "I've no doubt that this technology will find its way into every little camera by 2020. It will be right there with the various 'detection' features (e.g., face, smile, blink, ugly), many of which were also probably brought to photography by computer science doctoral candidates.
"This past weekend I read an extensive article on just how far drone technology has been developed for war and surveillance. Surely it won't be long before we can just type GPS coordinates into a camera, open a window, and send our solar-powered cameras off to take pictures for us while we watch TV. The most popular snapshot sites will never be the same. Tourism will be down at places like Paris and the Grand Canyon. Instead, it will look like they are constantly plagued by swarms of flying drone cams. The whole idea of 'street photography' will also be revolutionized.
"Of course the cams will upload everything directly to Flickr and Facebook unedited. (Not much change there.)"
Mike replies: Ken, like me sometimes these days, you are sounding a touch cranky.... ;-)
Featured Comment by Marc Rochkind: "I am trying to catch up as quickly as I can. Today, for my camera history work, I spent the entire morning researching the motivations behind Kodak's replacing Instamatic (126) with Pocket Instamatic (110). Tomorrow I plan to start on Disc, and then by Sat. I should be ready for Dr. Ng."
Mike replies: Cranky...cranky....
Featured Comment by SRay: "Remember when the Segway was supposed to revolutionize the way people get around? Well, I've never seen an actual Segway, but I sure do see a lot of bikes. Lytro...another Segway? Or a step in a whole new direction for photography? One thing is for sure, imagemaking is evolving. If Lytro becomes a movement, so to speak, it'll be interesting to see what Canon and Nikon will do next. Meanwhile, play with the Lytro. Looks like it might be fun."
Based on the Times description, Lytro will depend on a large number of tiny lenses taking pictures simultaneously. If so, the optical quality should be on par with what you get if you take a picture with fifty cell phone cameras. At the same time! Quite cool, but not likely to keep Nikon or Canon up at night.
Posted by: Tim F | Wednesday, 22 June 2011 at 10:30 AM
Tim,
If there's one thing that I can almost guarantee, it's that this is keeping Nikon and Canon up at night. [g]
Mike
Posted by: Mike Johnston | Wednesday, 22 June 2011 at 10:35 AM
Bad news: It's a trade-off between resolution and ability to adjust DOF after the fact.
Good news: It's going to profit a lot more from higher-resolution sensors than conventional cameras do.
Better news: It allows some very special other features, like calculating actual (or at least relative) depth to various parts of the scene or adjusting perspective (slightly) in macro photography.
Posted by: Lars Clausen | Wednesday, 22 June 2011 at 10:52 AM
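The trade-off Lars mentions in his first point has a simple pixel budget behind it: in a plenoptic design like the one in Ng's thesis, the sensor's raw pixels are split between spatial samples (one per microlens) and directional samples (the pixels behind each microlens), and only the spatial samples carry over into the refocusable output image. Roughly (this is bookkeeping, not Lytro's published spec):

```latex
% Pixel budget of a plenoptic sensor (illustrative):
%   N_sensor  = raw sensor pixels
%   N_angular = pixels behind each microlens (directional samples)
%   N_spatial = number of microlenses = pixels in the refocused output
N_{\text{sensor}} \approx N_{\text{spatial}} \times N_{\text{angular}}
\qquad\Longrightarrow\qquad
N_{\text{output}} = N_{\text{spatial}} \approx \frac{N_{\text{sensor}}}{N_{\text{angular}}}
```

With, say, 14×14 directional samples behind each microlens, the output image has roughly 1/200th the pixel count of the sensor—hence Lars's "good news" that the design profits disproportionately from higher-resolution sensors.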
Like full-resolution video, this may seem like convenience, but in the end it will cost you more time. With the former, you're going to have to sift through hundreds of thousands of shots to get the right one, and here you'll have to pick the focus point and DOF of every shot you take.
Posted by: Poagao | Wednesday, 22 June 2011 at 10:52 AM
Very cool concept. Playing with the example pictures was an interesting experience. It'll be interesting to see how it all shakes out.
Posted by: Peter Cameron | Wednesday, 22 June 2011 at 10:55 AM
It will still need some sort of "focusing" - better called "subject identification" or it won't be able to create a JPEG. Deciding the focus point and depth of field later is great, but most folks just want a decent JPEG...
Posted by: KeithB | Wednesday, 22 June 2011 at 11:04 AM
If you can change the focus point, then theoretically at least, you should be able to use different focus points on different parts of the image, no? View camera tilts and swings? Or even better, a focus brush?
Posted by: Rob | Wednesday, 22 June 2011 at 11:04 AM
I would also recommend, if you have the time, reading the full dissertation. It explains how the prototype works with a combination of microlenses in front of the sensor and some image processing.
http://www.lytro.com/renng-thesis.pdf
What I really like about this solution is the use of the microlenses—the rest of the math is somewhat straightforward and well known from an image processing perspective, but the microlens solution that enables focusing at multiple depths in one photo is really innovative.
It is also going to give camera makers a real reason to drive up the megapixel count (and sensitivity), because from what I see, this solution would benefit from a more detailed and more sensitive sensor.
Pak
Posted by: Pak-Ming Wan | Wednesday, 22 June 2011 at 11:14 AM
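For anyone curious about the "somewhat straightforward" math Pak mentions, the core refocusing operation is shift-and-add: regroup the pixels behind the microlenses into sub-aperture images (one per viewing direction), then shift each one in proportion to its direction and sum. A minimal sketch, assuming the light field has already been unpacked into a 4-D array; the array layout, the `alpha` parameterization, and the crude integer shifting are illustrative, not anything from the actual product:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-add.

    lightfield : array of shape (U, V, H, W) holding the sub-aperture
                 images, indexed by angular position (u, v).
    alpha      : relative focal depth; 1.0 reproduces the plane the
                 microlens array was focused on, other values move it.
    Returns an (H, W) refocused image.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its angular
            # offset from the lens center; (1 - 1/alpha) sets the depth.
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            out += np.roll(lightfield[u, v],
                           (int(round(du)), int(round(dv))), axis=(0, 1))
    return out / (U * V)

# Example: render the same capture at two different focal depths.
# lf = load_lightfield("capture.raw")   # hypothetical loader
# near = refocus(lf, alpha=0.8)
# far  = refocus(lf, alpha=1.2)
```

Real implementations use sub-pixel interpolation and weighting rather than integer rolls, but the computation is embarrassingly parallel—part of why it is now plausible in a consumer product.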
That's not just a new camera system, it's a new camera paradigm (to use an already over-used word).
Posted by: Lars Clausen | Wednesday, 22 June 2011 at 11:16 AM
Alas, that's another nail in the coffin of the decisive moment. Now we have to take high-res, head-mounted, panoramic digicam movies, crop, and lytro. The last nail would be made-to-order perspective control that one chooses from the armchair. So much for intuitive surrealism.
Posted by: Animesh Ray | Wednesday, 22 June 2011 at 11:21 AM
Here is what I see at the end of this path. Cameras will be obsolete. "Photographers" will sit in front of their computers, decide to photograph the Grand Canyon and purchase on-line the Grand Canyon light-field files. They will be able to set the time of day, move around in the virtual canyon, set focus, DOF and shutter speed, frame and capture, all from the comfort of their easy chair and glowing screen.
Doesn't really sound like much fun to me.
Posted by: Edd Fuller | Wednesday, 22 June 2011 at 11:27 AM
Looks pretty cool for snapshots and travel pics, but from a fine art perspective I wonder if artists will be interested. I'm not sure if I am. Part of the joy of photography is knowing how to adjust aperture and choose the correct focal lengths for a given scene while you're composing. Controlling every aspect of a photo at the moment of capture ties you to that photo. The idea of just pointing a camera at a scene then figuring out what should be in focus later horrifies me.
That said, the camera nerd in me sure wants to play with it.
Posted by: Eric | Wednesday, 22 June 2011 at 11:27 AM
This is ridiculous. Now anyone will be able to take photographs.
Posted by: peter.gilbert | Wednesday, 22 June 2011 at 11:45 AM
Now that, as they say, has possibilities.
Posted by: Dennis Huteson | Wednesday, 22 June 2011 at 11:49 AM
My wife sent me a link this morning about this, and I am still trying to digest all of it. The link I saw at least had a lot of mumbo-jumbo about 'light fields', but what I take away from it is that the camera is taking a series of shallow dof shots and creating a composite. Software can be used after the fact to select the focal point.
It is interesting, and it has me wondering about the application and what the possibilities are. The biggest question I have is whether the technology can be used to create pictures with more dof.
The stuff about 'light fields' also has me somewhat skeptical, and I wonder about things like processing time, but it will be interesting to see what develops.
Posted by: scotth | Wednesday, 22 June 2011 at 11:49 AM
If you look closely at the faces in the sample images on the Lytro site, you can see some pretty serious banding (Zoom in on the baby's face). Perhaps not an issue for snapshots, but they have a way to go for anything more serious than this. Although I can see that once the technology is perfected, it will be a game changer.
Posted by: TBannor | Wednesday, 22 June 2011 at 11:53 AM
Very nice. I played with some of the images at the website and while the plane of focus does shift forward and back, there seems to be some limitation as to just how sharp the most forward and rear elements can get especially over a great depth. In other words, it is not perfect or total focus, at least to me, with the images on the website. Also, I wonder if one can bring the entire image into focus and not just certain planes.
Still pretty neat and I am sure we are looking at the next billionaire...
Posted by: Ed Kirkpatrick | Wednesday, 22 June 2011 at 11:57 AM
Does this mean we no longer have to worry about aperture and just adjust the speed for proper exposure?
Posted by: Johnskrill | Wednesday, 22 June 2011 at 12:09 PM
Video hits: As of 12:08 EDT there were only 308
I'll check back in 24 hours and see what impact your post has.
Posted by: Bob Mc | Wednesday, 22 June 2011 at 12:10 PM
I'm not sure if I get the point of this... how is it different from me shooting at f16 so that I can simply see everything in focus rather than messing around with it all afterwards?
Isn't it really just another gimmick? Since when has there been a need to take something out of focus in an image?
Just give me a bigger sensor in a smaller body dammit!!!
Posted by: T | Wednesday, 22 June 2011 at 12:23 PM
I imagine Ctein is smiling happily...
Posted by: James | Wednesday, 22 June 2011 at 12:38 PM
There is no doubt that selecting a point of focus from several possible choices would be interesting and useful. The truth is that the idea is so far out of my personal experience that I can't quite grasp all the possibilities for my personal photography.
If the camera actually works, then using the phrase "paradigm shift" would be an appropriate description of the possible impact of the technology.
Posted by: Ken White | Wednesday, 22 June 2011 at 12:42 PM
Sorry for the double post. As I was just thinking about the possibilities of multiple focus points, suddenly an image of a Braque painting popped into my mind. The creative possibilities for this technology seem formidable to say the least.
Posted by: Ken White | Wednesday, 22 June 2011 at 12:48 PM
Seems to me this is for the average person just out shooting for an afternoon. Any pro (or advanced amateur), I'd hope, would have a battle plan for accomplishing the look he's after. I can, however, see this taking off if it's combined with an Instagram-style sharing component. Getting the technology into a cell phone may be a trick, though.
Posted by: Chad Thompson | Wednesday, 22 June 2011 at 12:54 PM
I suspect the "letting the viewer play with the photo" aspect that the Times focuses on is exactly the path that will kill it as a niche toy. It's superficially cool, but of little lasting interest.
Faster-acting captures, and simpler mechanical systems in fast cameras, though, are both really valuable.
3D out of the same captures seems potentially interesting as well.
Posted by: David Dyer-Bennet | Wednesday, 22 June 2011 at 01:04 PM
Raytrix, a German company, brought a 3 MP camera of this type to market earlier this year, although with a price tag reputed to be way high. Lytro is aiming at the consumer market. I agree with you, Mike, that such developments will profoundly affect the evolution of photography in the upcoming decades.
Posted by: Malcolm S | Wednesday, 22 June 2011 at 01:08 PM
I read the NYT article (but not the paper) and am fascinated by the possibilities.
Can the Depth of Field also be chosen later?
And can it deal with motion in the image?
Just wondering.
Steve
Posted by: Steven House | Wednesday, 22 June 2011 at 01:12 PM
I'm having so much trouble focusing on all of the new camera technologies. IMO, perhaps it's better for me to focus on what I already know.
Posted by: Mark Hobson | Wednesday, 22 June 2011 at 01:25 PM
Cropping after the fact. Focusing after the fact. Just what online forums need...more heated debates.
Posted by: Jeff | Wednesday, 22 June 2011 at 01:58 PM
That is possibly the highest ratio of innovation to letters in one's name that I have seen.
Well done, Mr. Ng.
Posted by: Michel | Wednesday, 22 June 2011 at 02:14 PM
Dear Mike,
I am sure I do not understand the science behind this nor its implications really well but would it mean that aperture becomes an obsolete parameter? I understand one can choose the plane of focus afterwards but what about the depth of field? Would it not be limited by the aperture when the picture was taken?
Forgive my bad English; I am Belgian-Dutch.
Posted by: Erik | Wednesday, 22 June 2011 at 02:27 PM
...and it still won't matter one bit. A good photograph has nothing to do with technology.
Posted by: Mark Olwick | Wednesday, 22 June 2011 at 02:40 PM
Back when they were called Refocus Imaging, I thought this would be great for macro and telephoto work. The million-dollar question is what lens mount they will use—their own or someone else's? If their own (I suspect the lens design might be quite important), initial take-up will depend very much on the lens range for a lot of us.
all the best. phil
Posted by: phil | Wednesday, 22 June 2011 at 02:42 PM
On the one hand, I find this fascinating.
On the other, I shudder at the thought of fifteen years of "Lytro vs real photography" pointless internet acrimony, just like we've been getting with "digital vs film" (and like we'd likely have had with "collodion vs dry plate" had the web been invented in 1894 instead.)
Posted by: Ludovic | Wednesday, 22 June 2011 at 02:57 PM
This brings new meaning to the term "retrofocus."
Posted by: Chuck Albertson | Wednesday, 22 June 2011 at 03:21 PM
Good news: point the camera in whatever direction, press the shutter, and you can go home.
Bad news: until you go home and fiddle for a couple of hours with each image, you don't actually have a picture.
They will figure out automatic selection of everything eventually anyway.
Posted by: wchen | Wednesday, 22 June 2011 at 03:29 PM
I'm a little surprised that retrospectively changing focus is perceived as such a game-changer, subjectively, though I can see it could have major implications for camera and lens design, which might in itself revolutionize the "consumer" market, at least.
Now, retrospectively changing the exposure variably across the whole image from a single shot ... That really would be a game changer.
Mike
Posted by: Mike Chisholm | Wednesday, 22 June 2011 at 03:54 PM
"I'm not sure if I get the point of this... how is it different from me shooting at f16 so that I can simply see everything in focus rather than messing around with it all afterwards?"
The point is you can take a picture and choose which part is in focus, rather than having everything in focus as at f/16. You are effectively setting aperture and focus distance on the computer, meaning the only parameters you need to worry about on the camera are ISO and shutter speed.
That's not to say you can only adjust focus on the computer—I'm sure a decent camera of this type will allow the photographer to manually set a 'focus' distance and 'depth of field,' or have the camera emulate an AF system and aperture. For non-photographers, the Auto mode will probably just render everything in focus.
"They will be able to set the time of day, move around in the virtual canyon, set focus, DOF and shutter speed, frame and capture, all from the comfort of their easy chair and glowing screen.
"Doesn't really sound like much fun to me."
Well, nobody's forcing you to do it—you can still go out and take photos in the real world, just like people can still use film if they want to.
Posted by: Andy | Wednesday, 22 June 2011 at 03:58 PM
Definitely cool technology. But the last thing I want to do is spend more time in the post-production of my photos.
Posted by: Dean Tomasula | Wednesday, 22 June 2011 at 04:01 PM
Basically, I think it'll be a niche product, analogous to that other photographic game-changer, Polaroid...which didn't change the game that much.
I don't see that much function for serious professionals or advanced amateurs. How many advanced photographers miss focus so often that they need to fix it later? Sure, you miss on an occasional shot, but how much are you willing to pay to fix that? As for P&S and phone-cam shooters, how many are really interested in post processing, as opposed, say, to chimping? You shoot, you look at your shot, and if it's out of focus, no problem, you shoot again.
So, I give it a niche, but not a home run. Sort of like a more-successful GXR.
Posted by: John Camp | Wednesday, 22 June 2011 at 04:02 PM
"I suspect the 'letting the viewer play with the photo' aspect that the Times focuses on is exactly the path that will kill it as a niche toy. It's superficially cool, but of little lasting interest.
"Faster-acting captures, and simpler mechanical systems in fast cameras, though, are both really valuable.
"3D out of the same captures seems potentially interesting as well."
I think this is exactly right. The ability to play with the focus is really cool, but I'm not sure it's really that big a deal. Once this technology becomes commonplace, the focus shift will allow you to tweak and correct the focus after the fact, but it's hard to see it having much of a lasting creative impact.
That said, I could see this leading to interactive, 3D photos that really would revolutionize the field. Imagine a photo like the stop-time ones in "The Matrix" that you can manipulate on a future version of the iPad.
This reminds me a little of the "morphing" technology from the early '90s. That was a big fad for a while, but ultimately it was just a gimmick with limited value. And focusing on the "morphing" stuff obscured the real story—the incredible advances in computer animation. The real value in the technology was the ability to use computer animation to put incredible, lifelike images alongside real actors in movies.
Posted by: rp | Wednesday, 22 June 2011 at 04:12 PM
Dear folks,
A handful of answers to technical questions:
1) Do not assume the image quality on the website is indicative of what the camera produces. It may only be intended to show off the “cool features.” I can think of technical reasons why the image quality might be significantly worse than what a real camera would produce… Or not. But smarter and more experienced companies than this have posted crappy and unrepresentative demo photos (think of some of the Fuji X100 samples).
2) Of course the camera will “autofocus.” Hardly any photographers, professional or amateur, want the additional workflow of having to set a point of focus after the fact. As an option, nice. As a necessity, hardly.
Autofocus detection works very quickly. The difference in this case is that the detection module doesn't then have to drive a lens to focus; it simply sticks the information in the file metadata as the image gets written out. Effective focus delay is essentially zero.
3) The lens is, of necessity, a large-aperture, fixed-focus lens (fortunately, good ones are not hard to design). No, there is no way to stop it down, but the only reasons for stopping a lens down are to increase the depth of field (now solved) and to allow slower shutter speeds (which only a minuscule fraction of photographers want).
4) It's not hard to produce sensors with lots and lots of pixels. Simply scaling up the Fuji S100 sensor to full frame would get you 300-500 megapixels. Problem is that such sensors are not cheap, even though this one would be extremely defect-tolerant. And the cameras are bigger. And so on.
Anyway, I don't think we can make assumptions about what the output resolution of the camera will be; depends on the market they're actually aiming at, and your guess is as good as mine.
5) Yes, this technology can allow you to paint in the points of focus where you wanted them to be within the frame, with suitable user software. (Adobe demonstrated proof-of-concept hardware/software for that - I think Mike wrote about it some three or four years ago).
pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
======================================
Posted by: ctein | Wednesday, 22 June 2011 at 04:21 PM
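To put Ctein's point 2 in concrete terms: with no lens to drive, "autofocus" collapses to recording where the detection module thinks the subject is and letting the converter use that as the default rendering plane. A toy sketch—every name here (the tag keys, the stub detector, the refocus callable) is invented for illustration, not taken from any real Lytro file format:

```python
# Toy illustration: "autofocus" with nothing mechanical to drive.

def detect_subject_depth(lightfield):
    """Stand-in for a face/subject detector; a real camera would run this
    on the live data and return a depth (or refocus parameter) estimate."""
    return 2.5  # pretend the subject sits 2.5 m away

def capture(lightfield):
    """Bundle the raw light field with a suggested focus plane.
    Nothing moves, so the 'focus delay' is just the time it takes to
    write one number into the metadata."""
    meta = {"SuggestedFocusDepth": detect_subject_depth(lightfield),
            "SuggestedDOF": "auto"}
    return {"raw": lightfield, "meta": meta}

def default_render(shot, refocus):
    """Produce the out-of-camera JPEG at the suggested plane.
    `refocus(raw, depth)` could be a routine like the shift-and-add
    sketch earlier in the comments."""
    return refocus(shot["raw"], shot["meta"]["SuggestedFocusDepth"])
```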
"Autofocus detection works very quickly. The difference in this case is that the detection module doesn't then have to drive a lens to focus; it simply sticks the information in the file metadata as the image gets written out. Effective focus delay is essentially zero."
More than that, the camera can decide after the fact where the focus point should be, sort of like setting white balance.
Posted by: Andy | Wednesday, 22 June 2011 at 04:56 PM
Fascinating. (right eyebrow raised)
Posted by: Eric G. Rose | Wednesday, 22 June 2011 at 05:09 PM
Could this easy method of obtaining distance information lead to a really fast method of modelling 3d objects in a computer, say, for copying using a 3d printer?
Posted by: Michael Barkowski | Wednesday, 22 June 2011 at 05:23 PM
Kerosene lamps? I still light my dwelling with oil lamps. Trim those wicks and spin those aperture rings...
Posted by: Erik | Wednesday, 22 June 2011 at 05:36 PM
Now all we need is a "time field" camera so we can go back and find when the decisive moment was. Hocus-pocus, it's all in focus!
Posted by: Will Whitaker | Wednesday, 22 June 2011 at 05:44 PM
The kid in me loves the concept of Ken Tanaka's comment:
To have a radio-controlled camera drone with HD video in the nose and a high-MP turret cam, so that you could fly with a flock of birds or a dragonfly and choose focus/DOF afterwards, would be amazing. Being able to dangle a digicam on the end of a monopod with AF and continuous shooting is already good compared to the single-shot MF I had, but a planecam would be a real-life flight sim with photography thrown in—who would need TV? Forum threads of "'tog the togger's planecam," "flash-flocks" when something interesting happens, gaggles of planecams flying off on a photographic meet, etc.
The adult in me realises you'd have to photoshop out all the other unwanted planecams, and quiet privacy might become a thing of the past; insurance would be a nightmare as well, but at least it would keep kids with catapults and air-rifles happy!
If it keeps the masses at home in their easy chairs there's all the more outdoors for the rest of us. As a hobby I might even buy some film again ;-)
best. phil
Posted by: phil | Wednesday, 22 June 2011 at 05:54 PM
We can always combine a pinhole camera with a digital back.
Posted by: Herman | Wednesday, 22 June 2011 at 06:00 PM
Hopefully the cameras will all come with an on-off switch (for this feature) so that one could get back to 'real' photography when one felt the urge to! lol
Posted by: David | Wednesday, 22 June 2011 at 07:03 PM
This will mark the death of photography.... there will be people pushing buttons to capture images, but they won't be photographers taking photographs. They'll be people wearing special glasses that capture pixels for the future... where everyone will be spending time at a computer working as graphic artists. And many of us old-timers will be asking if technology is truly advantageous!
Posted by: Crage | Wednesday, 22 June 2011 at 07:13 PM
Dear Crage,
Absolutely!
It's been nothing but a downhill slide ever since that upstart Eastman started selling box cameras and roll film to the masses and ruined it for us REAL craftspeople.
Why it makes... uhh, 'scuse me for a minute ...
"HEY, ALL YOU DIGITAL PIXEL-PUSHIN' KIDS! GET OFF OF MY LAWN!"
OK, now where was I?
pax / Ctein
Posted by: ctein | Wednesday, 22 June 2011 at 07:29 PM
Combine this with digital video frame rates and Heuristically programmed ALgorithmic post-production tools (that can do things like evaluate 500 frames of a group portrait and find all the shots with everyone's eyes open and smiles on their faces - child's play) and no one will ever need a still camera for anything.
Guess I'll just go slit my wrists now.
Posted by: Paul De Zan | Wednesday, 22 June 2011 at 07:49 PM
Hey, don't mess with my bokeh.
Posted by: Misha Erwitt | Wednesday, 22 June 2011 at 08:25 PM
"Now, retrospectively changing the exposure variably across the whole image from a single shot ... That really would be a game changer."
-Mike
Yes, yes... and yes- especially with a wide angle! May live to see it. Post selective focus, nice- but meh...
Posted by: Stan B. | Wednesday, 22 June 2011 at 09:23 PM
"Real revolutions don't arrive at high noon with marching bands and coverage on the 6:00 PM news. Real revolutions arrive unannounced in the middle of the night and kind of sneak up on you."
Law 20, The Law of Hype
from "The 22 Immutable Laws of Marketing" by Al Ries and Jack Trout, 1993
Posted by: J Hayes | Wednesday, 22 June 2011 at 09:30 PM
I guess the proof of the pudding will be the relative loss of resolution and sensitivity implied by the design. Assuming they will have to use "off the shelf" sensors to be price-competitive, what will this mean in practice?
This will of course make mirrorless cameras and lenses much easier to design and much faster in operation. I like the basic implication of mechanical and optical simplicity. It will be interesting to see the actual camera, though.
Posted by: Steve Jacob | Wednesday, 22 June 2011 at 10:14 PM
There ain't no free lunch.
Posted by: fred | Wednesday, 22 June 2011 at 10:23 PM
I just realized that they're trying to market this new device to point-and-shooters. That really is a BAD idea. They surely haven't done their research.
I have a good friend who has a photo store, and point-and-shooters would come in asking for a new memory card. It turns out the old one was full and they wanted another card to replace it. He tried to explain how to empty the old card of pictures, but they almost always said to just give them a new card and forget the other stuff.
Point-and-shooters rarely do anything with their pictures at a computer. Just as in the film days, they just want two 4x6 prints of each picture. The negatives would get stuffed in a drawer or thrown out.
Now you want them to figure out where the focus point is and adjust the depth of field?
That may happen the day after Death Valley freezes over.
No, this is advanced stuff. I could have used it when I covered some anti-war rallies in the '60s. Just point and shoot and worry about the rest later.
Underwater photography would be another area for this stuff.
Sports of all kinds. Just follow the action and fix it at the computer.
But for point-and-shooters? Never happen.
Posted by: John Krill | Wednesday, 22 June 2011 at 10:26 PM
Looks like an interesting article, Mike. Things are a bit crazy around here at the moment, though, so I've bookmarked it and will return later when I have more time to focus.
Posted by: Paul Pomeroy | Wednesday, 22 June 2011 at 10:52 PM
Much of what passes as photography consists of Photoshop-generated images. With all the bits and pieces of actual photographs that can be appropriated, use of a camera may eventually be superfluous.
Posted by: Herman | Wednesday, 22 June 2011 at 11:07 PM
Still not too sure about the Second Coming, but the death of photography as we know it is surely now in my lifetime.... :(
I feel like going out to buy some film. Should never have sold that Leica in favor of digital!
Posted by: David Teo Boon Hwee | Wednesday, 22 June 2011 at 11:44 PM
Just as it became a huge PITA to mess with color balance on each RAW image, imagine how huge a PITA it would be to post-process each image for point of focus and DOF. Another solution in search of a problem. My prediction: in ten years the only customers for this technology will be the CIA and NSA.
Posted by: Dave Kee | Wednesday, 22 June 2011 at 11:53 PM
I think I understand Crage's point. When I look at some of the anachrophile forums (Photonet's 'Classic Manual Cameras', for example), I get a sense of people trying to recapture some of the original 'fun' that they experienced earlier in their photographic lives. Now, a large element of that was the freshness and novelty of the experience, but I agree with many of them that the ability to endlessly manipulate an image post-capture can be as tedious as it is liberating. So why not just shoot jpegs and forbid oneself the experience? I don't know. There's still, to me, a lingering, entirely subjective sense that by being given the ability to revisit every step of the process and address the variables, I should not only be making the effort but increasingly subjecting my photography to a level of critical evaluation that rather invalidates the point of why I'm doing it in the first place. I'm not a professional photographer, but the way that digital has invited us all to aim at that role makes the hobby seem too much like work sometimes.
The other day, I thought about how far this technology had advanced in just a decade and caught myself thinking that it wasn't necessarily a good thing. I can understand how many people find it exciting (and I fully take ctein's point in his response to Crage - this is, after all, an old fart's standard whine about the New), but I'm beginning to feel that I'm permanently playing catch-up on something I used to do to relax.
I'd take up golf instead, but I understand that five irons now come with laser-guidance modules.
Posted by: James McDermott | Thursday, 23 June 2011 at 12:04 AM
If this tech eventually makes it into professional market cameras, I can see how I'd use it... much like I have the other advances that came with digital.
That is, get it right in camera! But once in a while I'll stuff up a shot due to fast-paced action. Stuff up the exposure and it's a quick tweak of a slider to fix—maybe not perfectly, but still usable. Stuff up focus and it's often a lost photo (especially on a 5D, but I digress). I foresee it'll just be another slider in the RAW processor: 'Focus.'
Posted by: Josh Marshall | Thursday, 23 June 2011 at 12:18 AM
Their thesis was interesting—using a 40MP sensor with a 292x292 microlens array, essentially achieving an f/22 DoF with the objective lens at around f/4. The sensor had 9-micron pixels. Resulting images were basically 0.09 MP (yikes!).
Issues:
1) They need a very, very high-MP sensor, because each "effective" pixel (microlens) needs a number of pixels at the sensor on which to project the image. Compromises here reduce the effect.
1b) Even with, say, a 50MP sensor, they may be able to squeeze out a 1MP image... maybe 2 if there is a trick up their sleeve?
2) Smaller pixels will be diffraction-limited, which impairs the effect—but they aren't cramming medium format sensors into a consumer-focused camera, right?
3) The ability to "project" virtual cameras or change image perspective is generally limited to images that are quite close-up.
4) The system appears to be optimized to one lens aperture for the main lens.
The commercial barriers are ridiculous. So - they present a very low-resolution image that you can mess around with using software (which may be fun for a minute). That's why they are targeting the consumer market and "social sharing" vs. the professional market. Now they have to cram a lot of tech, a novel high-res sensor, and a slick microlens array into a camera at an attractive price point... in a market where everyone's cell phone has a huge DoF and 5+ MP resolutions and cheap digicams are $99.
Here's a riddle:
What sound does $50million make when it goes down the drain?
Posted by: Bart (Leica Boss) | Thursday, 23 June 2011 at 12:40 AM
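Bart's output-resolution figure is easy to check: with one output pixel per microlens, a 292x292 array caps the output at about 0.085 MP no matter how many sensor pixels sit behind it. A quick back-of-the-envelope in Python, using the numbers from his comment as stated:

```python
# Back-of-the-envelope check of the numbers in the comment above.
sensor_mp = 40.0            # stated sensor resolution, in megapixels
microlenses = 292 * 292     # 292 x 292 microlens array

output_mp = microlenses / 1e6
pixels_per_lens = (sensor_mp * 1e6) / microlenses

print(f"Output image: {output_mp:.3f} MP")            # ~0.085 MP ("basically .09MP")
print(f"Pixels behind each microlens: {pixels_per_lens:.0f}")  # ~469, i.e. roughly a 21-22 pixel patch
```

Which is why his point 1b) matters: getting even a 1-2 MP output means many more microlenses, and therefore a much larger or much denser sensor than today's compacts carry.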
The comments here are all spot on; you've got some astute readers. People are excited because it sounds as if they can finally get shallow-DoF effects on a tiny little pocket camera. They'll be disappointed—while it can simulate a narrower aperture, it cannot simulate a wider one. This is because it captures an image-side light field only; it cannot synthesize missing light. So yes, you'll have to take dramatically posed shots to get a cool DoF effect; otherwise you can still see everything perfectly clearly, which is just so un-chic.
Posted by: Kukui Nut | Thursday, 23 June 2011 at 01:51 AM
Well, thank you. Adds another chapter to my book: 'all about the unnecessary' (hardcover).
Posted by: cb | Thursday, 23 June 2011 at 03:08 AM
People will love this thing - and the Facebook app is the perfect complement to it.
The final image is low resolution, so you're not going to be making prints. But how many of those 50 billion images on Facebook have been printed?
Of course consumers won't want to mess with the images in post to produce a JPG, but that's what the Facebook app will be for.
This technology is going to have a very interesting long term effect.
Posted by: Craig Arnold | Thursday, 23 June 2011 at 03:31 AM
I saw these papers a couple of years ago, and they are neat in a proof-of-concept kind of way, but I can't quite imagine that the thing I love most about my camera systems—the magic lenses—can be retained in this kind of a system. Will Dr. Ng be able to make a plenoptic Noctilux or ZF.2 100 MP or 180mm Summicron?
Posted by: Carsten W | Thursday, 23 June 2011 at 04:41 AM
Potential users: Sports photographers, wedding photographers and photojournalists, where you can't redo a shot and getting the focus right is hit-or-miss. Just like having extra dynamic range makes it easier to rely on auto exposure for fast-changing scenes.
Macrophotographers who can now get a shot even if their target moves a bit, and who want greater DOF without the light loss.
Landscape photographers who want extra DOF during low-light shots without sacrificing shutter speed.
Anyone with an interest in depth imaging.
Non-users:
Studio photographers who have the time to set everything up perfectly.
Casual compact camera users who already have large enough DOF anyway.
Enthusiast photographers who care more about the process than the end product (*waves*).
But perhaps most importantly: Interested parties:
Sensor makers who will have a market for even higher MP sensors.
Storage makers for the larger images.
Computer makers for the extra CPU power needed.
Software makers for new programs that can do this well.
I'm interested, for sure.
Posted by: Lars Clausen | Thursday, 23 June 2011 at 04:07 PM
"The idea of just pointing a camera at a scene then figuring out what should be in focus later horrifies me."
Really? Horrifies? Sounds kind of awesome to me.
"This is ridiculous. Now anyone will be able to take photographs."
Best comment yet. :)
Posted by: David Bostedo | Thursday, 23 June 2011 at 06:06 PM
If it is possible to choose a focus point after the fact, why not have a default setting where everything is in focus to start with, and one can selectively defocus in post-processing? Since it is aimed squarely at the consumer market, I think that'd be the smarter way to go—consumers (who likely don't understand or care about selective focus) would have everything sharp, as they're likely to want, and hobbyists could selectively "unfocus" parts of the photo later. Plus, you could "unfocus" a distracting background or an unwanted ex-spouse out of a picture after the fact ;)
p.s. SRay: Segways are actually commonly sighted around the National Mall here in DC - they are rented to tourists, but yeah, hardly anyone else uses them!
Posted by: anonymous | Thursday, 23 June 2011 at 08:20 PM
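The "everything sharp by default" rendering suggested above is essentially a focal-stack merge: render the light field at a series of depths and, for each pixel, keep the rendering in which it is locally sharpest. A rough sketch, reusing a refocusing routine like the one earlier in the thread; the sharpness measure and depth sampling are illustrative choices, not anyone's shipping algorithm:

```python
import numpy as np
from scipy.ndimage import laplace, maximum_filter

def all_in_focus(lightfield, refocus, alphas):
    """Merge refocused renderings into one everything-sharp image.

    refocus(lightfield, alpha) -> 2-D image (e.g. the shift-and-add
    sketch above); alphas is the list of focal depths to try.
    """
    stack = np.stack([refocus(lightfield, a) for a in alphas])
    # Local contrast (absolute Laplacian, lightly dilated) as a
    # per-pixel sharpness score at each rendered depth.
    sharpness = np.stack([maximum_filter(np.abs(laplace(img)), size=5)
                          for img in stack])
    best = np.argmax(sharpness, axis=0)          # sharpest depth per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

As a side effect, the per-pixel `best` array is a coarse depth map—which is exactly the ingredient the next comment's selective effects would need.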
With my 7 year old standing next to me, I was checking out the online pictures with various focus points.
Her comment? "Why do you have to choose what becomes sharp? Why don't they make the pictures come out sharp all over?"
Posted by: Mani Sitaraman | Thursday, 23 June 2011 at 10:49 PM
I just had a thought: if you can choose which parts of the image are in focus, could you perhaps also do other things to them?
For example, you could choose your depth of field, then make everything outside it darker, or B&W, or lower contrast. I'm sure folks with more imagination than I could come up with some really interesting ideas.
Posted by: Andy | Friday, 24 June 2011 at 05:08 AM
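Andy's idea follows directly once a depth map is available (and, as noted earlier in the thread, a light-field capture can supply one): pick a depth band to protect, then push everything outside it toward darker, flatter, or monochrome. A small sketch with made-up parameter names:

```python
import numpy as np

def stylize_outside_dof(image, depth_map, near, far, strength=0.6):
    """Darken and desaturate everything outside a chosen depth band.

    image     : (H, W, 3) float array in [0, 1]
    depth_map : (H, W) per-pixel depth (e.g. derived from a light-field capture)
    near, far : the depth band to leave untouched
    """
    outside = (depth_map < near) | (depth_map > far)      # out-of-"DOF" mask
    gray = image.mean(axis=2, keepdims=True)               # cheap desaturation
    styled = (1 - strength) * image + strength * gray      # fade toward gray...
    styled *= 0.7                                          # ...and darken a bit
    return np.where(outside[..., None], styled, image)
```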
This report really made my imagination go wild about the future of image making. Imagine an image where the viewer chooses the point of focus (as we do with our eyes); now imagine that image in 3D and panoramic. Dispose of the flat screen and replace it with a spherical room or even a helmet. Still, it's only a static image; add video, and the sounds, smells, and air movement recorded along with the scene.
Star Trek Holodeck anyone?
Posted by: Tom | Friday, 24 June 2011 at 08:55 AM
I've found that the best way to recapture the magic of originally getting involved in photography is to:
1. Get more involved in photography.
2. Learn something completely new—like, say, digital. That puts me right back where I was in 1968-1970, learning a whole new skill. (The argument applies just as well to somebody who started in digital, got good at it, and needs to kick-start their interest by trying film, of course.)
Posted by: David Dyer-Bennet | Friday, 24 June 2011 at 01:17 PM
A lot of the comments (here and elsewhere) seem to be missing what I think is the key difference that this technology might make. What is the main user complaint about most small cameras? Shutter lag. What is the main time waster that causes lag? Autofocus. Without any autofocus delay it will be straightforward to make small cameras with virtually no lag. Now that is something all fans of the decisive moment can cheer about :)
Posted by: Nicolas Woollaston | Saturday, 25 June 2011 at 01:05 AM