
Wednesday, 22 June 2011

Comments


Based on the Times description, Lytro will depend on a large number of tiny lenses taking pictures simultaneously. If so, the optical quality should be on par with what you get if you take a picture with fifty cell phone cameras. At the same time! Quite cool, but not likely to keep Nikon or Canon up at night.

Tim,
If there's one thing that I can almost guarantee, it's that this is keeping Nikon and Canon up at night. [g]

Mike

Bad news: It's a trade-off between resolution and ability to adjust DOF after the fact.
Good news: It's going to profit a lot more from higher resolution sensors than conventional cameras.
Better news: It allows some very special other features, like calculating actual (or at least relative) depth to various parts of the scene or adjusting perspective (slightly) in macro photography.
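For anyone curious how after-the-fact focusing works at all: a plenoptic capture records many slightly offset sub-aperture views, and refocusing is essentially a shift-and-add over those views. A minimal NumPy sketch of the general idea (toy array shapes, integer-pixel shifts; an illustration of the technique, not Lytro's actual pipeline):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetically refocus a 4-D light field L[u, v, x, y].

    Shift each sub-aperture view in proportion to its (u, v)
    position in the synthetic aperture, then average. alpha
    controls the depth of the synthetic focal plane.
    """
    U, V, X, Y = lightfield.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # integer-pixel shift; real pipelines interpolate sub-pixel
            dx = int(round((u - U // 2) * alpha))
            dy = int(round((v - V // 2) * alpha))
            out += np.roll(lightfield[u, v], (dx, dy), axis=(0, 1))
    return out / (U * V)

# toy example: 5x5 sub-aperture views of a 32x32 scene
lf = np.random.rand(5, 5, 32, 32)
img = refocus(lf, alpha=1.0)  # one slice of the focal stack
```

Sweeping alpha produces a focal stack, and the depth-estimation and perspective tricks come out of the same 4-D data.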

Like full-resolution video, this may seem like convenience, but in the end it will cost you more time. With the former, you're going to have to sift through hundreds of thousands of shots to get the right one, and here you'll have to pick the focus point and DOF of every shot you take.

Very cool concept. Playing with the example pictures was an interesting experience. It'll be interesting to see how it all shakes out.

It will still need some sort of "focusing" - better called "subject identification" or it won't be able to create a JPEG. Deciding the focus point and depth of field later is great, but most folks just want a decent JPEG...

If you can change the focus point, then theoretically at least, you should be able to use different focus points on different parts of the image, no? View camera tilts and swings? Or even better, a focus brush?

I would recommend also if you have the time to read the full dissertation. It explains how the prototype works with a combination of microlenses in front of the sensor and some image processing.

http://www.lytro.com/renng-thesis.pdf

What I really like about this solution is the use of the microlenses -- the rest of the math is somewhat straightforward and well known from an image processing perspective, but the microlens solution to enable focusing at multiple depths in one photo is really innovative.

It is also going to give camera makers a real reason to drive up the megapixel count (and sensitivity), because from what I see, this solution would benefit from a better detailed and more sensitive sensor.

Pak

That's not just a new camera system, it's a new camera paradigm (to use an already over-used word).

Alas, that's another nail in the coffin of the decisive moment. Now we have to take high-res, head-mounted, panoramic, digicam movies, crop, and lytro. The last nail would be a made-to-order perspective control that one chooses from the armchair. So much for intuitive surrealism.

Here is what I see at the end of this path. Cameras will be obsolete. "Photographers" will sit in front of their computers, decide to photograph the Grand Canyon and purchase on-line the Grand Canyon light-field files. They will be able to set the time of day, move around in the virtual canyon, set focus, DOF and shutter speed, frame and capture, all from the comfort of their easy chair and glowing screen.

Doesn't really sound like much fun to me.

Looks pretty cool for snapshots and travel pics, but from a fine art perspective I wonder if artists will be interested. I'm not sure if I am. Part of the joy of photography is knowing how to adjust aperture and choose the correct focal lengths for a given scene while you're composing. Controlling every aspect of a photo at the moment of capture ties you to that photo. The idea of just pointing a camera at a scene then figuring out what should be in focus later horrifies me.

That said, the camera nerd in me sure wants to play with it.

This is ridiculous. Now anyone will be able to take photographs.

Now that, as they say, has possibilities.

My wife sent me a link this morning about this, and I am still trying to digest all of it. The link I saw at least had a lot of mumbo-jumbo about 'light fields,' but what I take away from it is that the camera is taking a series of shallow-DOF shots and creating a composite. Software can be used after the fact to select the focal point.

It is interesting, and it has me wondering about the application and what the possibilities are. The biggest question I have is whether the technology can be used to create pictures with more DOF.

The stuff about 'light fields' also has me somewhat skeptical, and I wonder about things like processing time, but it will be interesting to see what develops.

If you look closely at the faces in the sample images on the Lytro site, you can see some pretty serious banding (Zoom in on the baby's face). Perhaps not an issue for snapshots, but they have a way to go for anything more serious than this. Although I can see that once the technology is perfected, it will be a game changer.

Very nice. I played with some of the images at the website and while the plane of focus does shift forward and back, there seems to be some limitation as to just how sharp the most forward and rear elements can get especially over a great depth. In other words, it is not perfect or total focus, at least to me, with the images on the website. Also, I wonder if one can bring the entire image into focus and not just certain planes.
Still pretty neat and I am sure we are looking at the next billionaire...

Does this mean we no longer have to worry about aperture and just adjust the speed for proper exposure?

Video hits: As of 12:08 EDT there were only 308.

I'll check back in 24 hours and see what impact your post has.

I'm not sure if I get the point of this... how is it different from me shooting at f16 so that I can simply see everything in focus rather than messing around with it all afterwards?

Isn't it really just another gimmick? Since when has there been a need to take something out of focus in an image?

Just give me a bigger sensor in a smaller body dammit!!!

I imagine Ctein is smiling happily...

There is no doubt that selecting a point of focus from several possible choices would be interesting and useful. The truth is that the idea is so far out of my personal experience that I can't quite grasp all the possibilities for my personal photography.
If the camera actually works then using the phrase Paradigm Shift would be an appropriate description of the possible impact of the technology.

Sorry for the double post. As I was just thinking about the possibilities of multiple focus points, suddenly an image of a Braque painting popped into my mind. The creative possibilities for this technology seem formidable to say the least.

Seems to me this is for the average person just out shooting for an afternoon. Any pro (or advanced amateur) I'd hope would have a battle plan for accomplishing the look he's after. I can however see this taking off if it's combined with an Instagram-style sharing component. Getting the technology into a cell phone may be a trick, though.

I suspect the "letting the viewer play with the photo" aspect that the Times focuses on is exactly the path that will kill it as a niche toy. It's superficially cool, but of little lasting interest.

Faster-acting captures, and simpler mechanical systems in fast cameras, though, are both really valuable.

3D out of the same captures seems potentially interesting as well.

Raytrix, a German company, brought a 3 MP camera of this type to market earlier this year, although with a price tag reputed to be way high. Lytro's aim is at the consumer market. I agree with you Mike that such developments will profoundly affect the evolution of photography in the upcoming decades.

I read the NYT article (but not the paper) and am fascinated by the possibilities.

Can the Depth of Field also be chosen later?

And can it deal with motion in the image?

Just wondering.

Steve

I'm having so much trouble focusing on all of the new camera technologies. IMO, perhaps it's better for me to focus on what I already know.

Cropping after the fact. Focusing after the fact. Just what online forums need...more heated debates.

That is possibly the highest ratio of innovation to letters in one's name that I have seen.

Well done Mr. Ng.

Dear Mike,

I am sure I do not understand the science behind this or its implications really well, but would it mean that aperture becomes an obsolete parameter? I understand one can choose the plane of focus afterwards, but what about the depth of field? Would it not be limited by the aperture when the picture was taken?

Forgive my bad English; I am Belgian-Dutch.

..and it still won't matter one bit. A good photograph has nothing to do with technology.

I thought back when they were called Refocus Imaging that this would be great for macro and telephoto work. The million-dollar question is what lens mount they will use: their own or someone else's? If their own (I suspect the lens design might be quite important), initial take-up for a lot of us will very much depend on the lens range.

all the best. phil

On the one hand, I find this fascinating.

On the other, I shudder at the thought of fifteen years of "Lytro vs real photography" pointless internet acrimony, just like we've been getting with "digital vs film" (and like we'd likely have had with "collodion vs dry plate" had the web been invented in 1894 instead.)

This brings new meaning to the term "retrofocus."

Good news: point the camera in whatever direction, press the shutter, and you can go home.
Bad news: until you go home and fiddle for a couple of hours per image, you don't actually have a picture.

They will figure out automatic selection of everything eventually, anyway.

I'm a little surprised that retrospectively changing focus is perceived as such a game-changer, subjectively, though I can see it could have major implications for camera and lens design, which might in itself revolutionize the "consumer" market, at least.

Now, retrospectively changing the exposure variably across the whole image from a single shot ... That really would be a game changer.

Mike

I'm not sure if I get the point of this... how is it different from me shooting at f16 so that I can simply see everything in focus rather than messing around with it all afterwards?

The point is you can take a picture and choose which part is in focus, rather than having everything in focus as at f/16. You are effectively setting aperture and focus distance on the computer, meaning the only parameters you need to worry about on the camera are ISO and shutter speed.

That's not to say that you can only adjust focus on the computer - I'm sure a decent camera of this type will allow the photographer to manually set a 'focus' distance and 'depth of field,' or have the camera emulate an AF system and aperture. For non-photographers, the Auto mode will probably just render everything in focus.

They will be able to set the time of day, move around in the virtual canyon, set focus, DOF and shutter speed, frame and capture, all from the comfort of their easy chair and glowing screen.

Doesn't really sound like much fun to me.

Well, nobody's forcing you to do it - you can still go out and take photos in the real world, just like people can still use film if they want to.

Definitely cool technology. But the last thing I want to do is spend more time in the post-production of my photos.

Basically, I think it'll be a niche product, analogous to that other photographic game-changer, Polaroid...which didn't change the game that much.

I don't see that much function for serious professionals or advanced amateurs. How many advanced photographers miss focus so often that they need to fix it later? Sure, you miss on an occasional shot, but how much are you willing to pay to fix that? As for P&S and phone-cam shooters, how many are really interested in post processing, as opposed, say, to chimping? You shoot, you look at your shot, and if it's out of focus, no problem, you shoot again.

So, I give it a niche, but not a home run. Sort of like a more-successful GXR.

I suspect the "letting the viewer play with the photo" aspect that the Times focuses on is exactly the path that will kill it as a niche toy. It's superficially cool, but of little lasting interest.

Faster-acting captures, and simpler mechanical systems in fast cameras, though, are both really valuable.

3D out of the same captures seems potentially interesting as well.

I think this is exactly right. The ability to play with the focus is really cool, but I'm not sure it's really that big a deal. Once this technology becomes commonplace, the focus shift will allow you to tweak and correct the focus after the fact, but it's hard to see it having much of a lasting creative impact.

That said, I could see this leading to interactive, 3D photos that really would revolutionize the field. Imagine a photo like the stop time ones in "The Matrix" that you can manipulate on a future version of the iPad.

This reminds me a little of the "morphing" technology from the early 90s. That was a big fad for a while, but ultimately it was just a gimmick with limited value. But focusing on the "morphing" stuff minimized the real story -- the incredible advances in computer animation. The real value in the technology was the ability to use computer animation to put incredible, life-like images along side real actors in movies.

Dear folks,

A handful of answers to technical questions:

1) Do not assume the image quality on the website is indicative of what the camera produces. It may only be intended to show off the “cool features.” I can think of technical reasons why the image quality might be significantly worse than what a real camera would produce… Or not. But smarter and more experienced companies than this have posted crappy and unrepresentative demo photos (think of some of the Fuji X100 samples).

2) Of course the camera will “autofocus.” Hardly any photographers, professional or amateur, want the additional workflow of having to set a point of focus after the fact. As an option, nice. As a necessity, hardly.

Autofocus detection works very quickly. The difference in this case is that the detection module doesn't then have to drive a lens to focus; it simply sticks the information in the file metadata as the image gets written out. Effective focus delay is essentially zero.

3) The lens is, of necessity, a large-aperture, fixed-focus lens (fortunately, good ones are not hard to design). No, there is no way to stop it down, but the only reasons for stopping a lens down are to increase the depth of field (now solved) and to allow slower shutter speeds (which only a minuscule fraction of photographers want).

4) It's not hard to produce sensors with lots and lots of pixels. Simply scaling up the Fuji S100 sensor to full frame would get you 300-500 megapixels. Problem is that such sensors are not cheap, even though this one would be extremely defect-tolerant. And the cameras are bigger. And so on.

Anyway, I don't think we can make assumptions about what the output resolution of the camera will be; depends on the market they're actually aiming at, and your guess is as good as mine.

5) Yes, this technology can allow you to paint in the points of focus where you wanted them to be within the frame, with suitable user software. (Adobe demonstrated proof-of-concept hardware/software for that - I think Mike wrote about it some three or four years ago).


pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

"Autofocus detection works very quickly. The difference in this case is that the detection module doesn't then have to drive a lens to focus; it simply sticks the information in the file metadata as the image gets written out. Effective focus delay is essentially zero."

More than that, the camera can decide after the fact where the focus point should be, sort of like setting white balance.

Fascinating. (right eyebrow raised)

Could this easy method of obtaining distance information lead to a really fast method of modelling 3d objects in a computer, say, for copying using a 3d printer?

Kerosene lamps? I still light my dwelling with oil lamps. Trim those wicks and spin those aperture rings...

Now all we need is a "time field" camera so we can go back and find when the decisive moment was. Hocus-pocus, it's all in focus!

The kid in me loves the concept of Ken Tanaka's comment:
To have a radio-controlled camera drone with HD video in the nose and a high-MP turret cam, so that you could fly with a flock of birds or a dragonfly and do selective focus/DOF afterwards, would be amazing. Being able currently to dangle a digicam on the end of a monopod with AF and continuous shooting is good compared to the single-shot MF I had, but a planecam would be a real-life flight sim with photography as well; so who would need TV? ... Forum threads with 'tog the togger's planecam,' 'flash-flocks' when something interesting happens, gaggles of planecams flying off on a photographic meet, etc.
The adult in me realises you'd have to Photoshop out all the other unwanted planecams, and quiet privacy might become a thing of the past; insurance would be a nightmare as well, but at least it would keep kids with catapults and air-rifles happy!

If it keeps the masses at home in their easy chairs there's all the more outdoors for the rest of us. As a hobby I might even buy some film again ;-)

best. phil

We can always combine a pinhole camera with a digital back.

Hopefully the cameras will all come with an on-off switch (for this feature) so that one could get back to 'real' photography when one felt the urge to! lol

This will mark the death of photography.... there will be people pushing buttons to capture images, but they won't be photographers taking photographs. They'll be people wearing special glasses that capture pixels for the future... where everyone will be spending time at a computer working as graphic artists. And many of us old-timers will be asking if technology is truly advantageous!

Dear Crage,

Absolutely!

It's been nothing but a downhill slide ever since that upstart Eastman started selling box cameras and roll film to the masses and ruined it for us REAL craftspeople.

Why it makes... uhh, 'scuse me for a minute ...

"HEY, ALL YOU DIGITAL PIXEL-PUSHIN' KIDS! GET OFF OF MY LAWN!"

OK, now where was I?

pax / Ctein

Combine this with digital video frame rates and Heuristically programmed ALgorithmic post-production tools (that can do things like evaluate 500 frames of a group portrait and find all the shots with everyone's eyes open and smiles on their faces - child's play) and no one will ever need a still camera for anything.

Guess I'll just go slit my wrists now.

Hey, don't mess with my bokeh.

Now, retrospectively changing the exposure variably across the whole image from a single shot ... That really would be a game changer.
-Mike

Yes, yes... and yes- especially with a wide angle! May live to see it. Post selective focus, nice- but meh...

"Real revolutions don't arrive at high noon with marching bands and coverage on the 6:00 PM news. Real revolutions arrive unannounced in the middle of the night and kind of sneak up on you."

Law 20, The Law of Hype
from "The 22 Immutable Laws of Marketing" by Al Ries and Jack Trout, 1993

I guess the proof of the pudding will be the relative loss of resolution and sensitivity implied by the design. Assuming they will have to use "off the shelf" sensors to be price competitive what will this mean in practice?

This will of course make mirrorless cameras and lenses much easier to design and much faster in operation. I like the basic implication of mechanical and optical simplicity. It will be interesting to see the actual camera, though.

There ain't no free lunch.

I just realized that they're trying to market this new device to point-and-shooters. That really is a BAD idea. They for sure haven't done their research.

I have a good friend that has a photo store and point-and-shooters would come in asking for a new memory card. Turns out the old one was full and they wanted another card to replace the full one. He tried to explain how to empty the old card of pictures but they almost always said to just give them a new card and forget the other stuff.

Point-and-shooters rarely do anything with their pictures at a computer. Just like in the film days, they just want two 4x6 prints of each picture. The negatives would get stuffed in a drawer or thrown out.

Now you want them to figure out where the focus point is and adjust the depth-of-field?

Now that may happen the day after Death Valley freezes over.

No this is advanced stuff. I could have used it when I covered some anti-war rallies in the 60s. Just point and shoot and worry about the rest later.

Underwater photography would be another area for this stuff.

Sports of all kinds. Just follow the action and fix it at the computer.

But for point-and-shooters? Never happen.

Looks like an interesting article, Mike. Things are a bit crazy around here at the moment, though, so I've bookmarked it and will return later when I have more time to focus.

Much of what passes as photography is Photoshop-generated images. With all the bits and pieces of actual photographs that can be appropriated, use of a camera may eventually be superfluous.

Still not too sure about the Second Coming, but the death of photography as we know it is surely now in my lifetime.... :(

I feel like going out to buy some film. Should never have sold that Leica in favor of digital!

Just as it became a huge PITA to mess with color balance on each RAW image, imagine how huge a PITA it would be to post-process each image for point of focus and DOF. Another solution in search of a problem. My prediction: in ten years the only customers for this technology will be the CIA and NSA.

I think I understand Crage's point. When I look at some of the anachrophile forums (Photonet's 'Classic Manual Cameras', for example), I get a sense of people trying to recapture some of the original 'fun' that they experienced earlier in their photographic lives. Now, a large element of that was the freshness and novelty of the experience, but I agree with many of them that the ability to endlessly manipulate an image post-capture can be as tedious as it is liberating. So why not just shoot jpegs and forbid oneself the experience? I don't know. There's still, to me, a lingering, entirely subjective sense that by being given the ability to revisit every step of the process and address the variables, I should not only be making the effort but increasingly subjecting my photography to a level of critical evaluation that rather invalidates the point of why I'm doing it in the first place. I'm not a professional photographer, but the way that digital has invited us all to aim at that role makes the hobby seem too much like work sometimes.

The other day, I thought about how far this technology had advanced in just a decade and caught myself thinking that it wasn't necessarily a good thing. I can understand how many people find it exciting (and I fully take Ctein's point in his response to Crage - this is, after all, an old fart's standard whine about the New), but I'm beginning to feel that I'm permanently playing catch-up on something I used to do to relax.

I'd take up golf instead, but I understand that five irons now come with laser-guidance modules.

If this tech eventually makes it into professional market cameras, I can see how I'd use it... much like I have the other advances that came with digital.

That is, get it right in camera! But once in a while I'll stuff up a shot due to fast-paced action. Stuff up the exposure and it's a quick tweak of a slider to fix, maybe not perfectly, but still usable. Stuff up focus and it's often a lost photo (especially on a 5D, but I digress). I foresee it'll just be another slider in the RAW processor - 'Focus'.

Their thesis was interesting - using a 40 MP sensor with a 292x292 microlens array... essentially achieving an f/22 DoF with the objective lens at around f/4. The sensor had a 9-micron pixel. Resulting images were basically 0.09 MP (yikes!)

Issues:
1) They need a very, very high-MP sensor, because each "effective" pixel (microlens) needs a number of sensor pixels onto which to project its image. Compromises here reduce the effect.
1b) Even with, say, a 50 MP sensor, they may be able to squeeze out a 1 MP image... maybe 2 if there is a trick up their sleeve?
2) Smaller pixels will be diffraction-limiting and impair the effect - but they aren't cramming medium format sensors into a consumer-focused camera, right?
3) The ability to "project" virtual cameras or change image perspective is generally limited to images that are quite close-up
4) The system appears to be optimized to one lens aperture for the main lens
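Point 1 is just division, but it's worth seeing the numbers. A back-of-the-envelope helper (illustrative only), using the 292x292 array and 40 MP sensor quoted above:

```python
def output_megapixels(microlenses_x, microlenses_y):
    # a plenoptic camera delivers roughly one output pixel per microlens,
    # regardless of how many sensor pixels sit behind each one
    return microlenses_x * microlenses_y / 1e6

def sensor_pixels_per_microlens(sensor_mp, microlenses_x, microlenses_y):
    # the rest of the sensor's resolution is spent on directional samples
    return sensor_mp * 1e6 / (microlenses_x * microlenses_y)

print(output_megapixels(292, 292))                # ~0.085 MP, the "0.09 MP" above
print(sensor_pixels_per_microlens(40, 292, 292))  # ~469 sensor pixels per output pixel
```

That hundreds-to-one ratio is why driving up the megapixel count helps so much here.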


The commercial barriers are ridiculous. So - they present a very low-resolution image that you can mess around with using software (which may be fun for a minute). That's why they are targeting the consumer market and "social sharing" vs. the professional market. Now they have to cram a lot of tech, a novel high-res sensor, and a slick microlens array into a camera at an attractive price point... in a market where everyone's cell phone has a huge DoF and 5+ MP resolutions and cheap digicams are $99.

Here's a riddle:
What sound does $50 million make when it goes down the drain?

The comments here are all spot on; you've got some astute readers. People are excited because it sounds as if they can finally get shallow DoF effects on a tiny little pocket camera. They'll be disappointed -- while it can simulate a narrower aperture, it cannot simulate a wider one. This is because it captures an image-side light field only. It cannot synthesize missing light. So yes, you'll have to take dramatically posed shots to get a cool DoF effect, otherwise you can still see everything perfectly clearly, which is just so un-chic.
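That asymmetry is easy to see in code: stopping down in software means averaging only a central subset of the captured sub-aperture views, while opening up would require rays outside the physical aperture that were never recorded. A hedged sketch over a hypothetical 4-D light field array L[u, v, x, y]:

```python
import numpy as np

def stop_down(lightfield, keep):
    """Simulate a narrower aperture by averaging only the central
    keep x keep sub-aperture views. A *wider* aperture would need
    views outside the array, which were never captured."""
    U, V, _, _ = lightfield.shape
    u0, v0 = (U - keep) // 2, (V - keep) // 2
    subset = lightfield[u0:u0 + keep, v0:v0 + keep]
    return subset.mean(axis=(0, 1))

# toy example: 5x5 sub-aperture views of a 32x32 scene
lf = np.random.rand(5, 5, 32, 32)
deep_dof = stop_down(lf, keep=1)  # pinhole-like: one central view
full_ap = stop_down(lf, keep=5)   # the whole captured aperture
```

keep=1 approximates a pinhole (maximum DOF); keep equal to the full array size is the widest aperture you can ever get back.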

Well, thank you. Adds another chapter to my book: 'all about the unnecessary' (hardcover).

People will love this thing - and the Facebook app is the perfect complement to it.

The final image is low resolution, so you're not going to be making prints. But how many of those 50 billion images on Facebook have been printed?

Of course consumers won't want to mess with the images in post to produce a JPG, but that's what the Facebook app will be for.

This technology is going to have a very interesting long term effect.

I saw these papers a couple of years ago, and they are neat in a proof-of-concept kind of way, but I can't quite imagine that the thing I love most about my camera systems, the magic lenses, can be retained in this kind of a system. Will Dr. Ng be able to make a plenoptic Noctilux or ZF.2 100 MP or 180mm Summicron?

Potential users: Sports photographers, wedding photographers and photojournalists, where you can't redo a shot and getting the focus right is hit-or-miss. Just like having extra dynamic range makes it easier to rely on auto exposure for fast-changing scenes.

Macrophotographers who can now get a shot even if their target moves a bit, and who want greater DOF without the light loss.

Landscape photographers who want extra DOF during low-light shots without sacrificing shutter speed.

Anyone with an interest in depth imaging.

Non-users:

Studio photographers who have the time to set everything up perfectly.

Casual compact camera users who already have large enough DOF anyway.

Enthusiast photographers who care more about the process than the end product (*waves*).

But perhaps most importantly: Interested parties:

Sensor makers who will have a market for even higher-MP sensors.
Storage makers for the larger images.
Computer makers for the extra CPU power needed.
Software makers for new programs that can do this well.

I'm interested, for sure.

"The idea of just pointing a camera at a scene then figuring out what should be in focus later horrifies me."

Really? Horrifies? Sounds kind of awesome to me.

"This is ridiculous. Now anyone will be able to take photographs."

Best comment yet. :)

If it is possible to choose a focus point after the fact, why not have a default setting where everything is in focus to start with and one can selectively defocus in post-processing? Since it is aimed squarely at the consumer market, I think that'd be the smarter way to go - as the consumers (who likely don't understand or care about selective focus) would have everything sharp, as they're likely to want, and hobbyists can selectively "unfocus" parts of the photo later. Plus, you could "unfocus" a distracting background or an unwanted ex-spouse out of a picture after the fact ;)

p.s. SRay: Segways are actually commonly sighted around the National Mall here in DC - they are rented to tourists, but yeah, hardly anyone else uses them!

With my 7 year old standing next to me, I was checking out the online pictures with various focus points.

Her comment? "Why do you have to choose what becomes sharp? Why don't they make the pictures come out sharp all over?"

I just had a thought: if you can choose which parts of the image are in focus, could you perhaps also do other things to them?

For example, you could choose your depth of field, then make everything outside it darker, or B&W, or lower contrast. I'm sure folks with more imagination than I could come up with some really interesting ideas.

This report really made my imagination go wild about the future of image making. Imagine an image where the viewer chooses the point of focus (as we do with our eyes); now imagine that image in 3D and panoramic. Dispose of the flat screen and replace with a spherical room or even a helmet. Still it’s only a static image; add video, the sounds, smells and air movement recorded along with the scene.
Star Trek Holodeck anyone?

I've found that the best way to recapture the magic of originally getting involved in photography is to:

1. Get more involved in photography.

2. Learn something completely new, like say digital. That puts me right back where I was in 1968-1970, learning a whole new skill. (The argument applies just as well to somebody who started in digital, got good at it, and needs to kick their interest, trying film, of course.)

A lot of the comments (here and elsewhere) seem to be missing what I think is the key difference that this technology might make. What is the main user complaint about most small cameras? Shutter lag. What is the main time waster that causes lag? Autofocus. Without any autofocus delay it will be straightforward to make small cameras with virtually no lag. Now that is something all fans of the decisive moment can cheer about :)

