
Friday, 25 April 2014

Comments

Yet another question:
I know that physical bokeh is [supposed] to be a much better approach to depth of field than the software approach.

But algorithms being proved to be so successful, and taking into account that 98% of the photos worldwide end up on Flickr [at 2K resolution if lucky], isn't the HTC One M8 approach the more sensible, and most probably, the better all-rounder?

It's hard running a small business.

Unrelated to the Lytro but related to the first part of your post today. Allow me to evangelise a little. I hope you have a backup plan in place, which at a minimum would require an onsite backup (perhaps through Time Machine/Capsule) and an offsite one. There are several cloud-based services that offer peace of mind at a reasonable cost. Backblaze is one of those (I am unaffiliated with them except as a user).

You may want to get all those files onto a backup drive and off your main drive. I don't know much about the Apple OS, but I'm sure that many files are slowing your machine down. A terabyte backup drive can be had at a reasonable price, and I doubt that you will exceed that capacity for some time.

Believe it or not, I actually went and read Ng's thesis about the light-field camera he first built at Stanford! The math in his thesis reminded me of the work I used to do back in my (brief) days with geophysics. As far as I can tell, light-field cameras move the sensor along an axis perpendicular to the lens (using a piezoelectric crystal) during the exposure so the focal plane with respect to the lens changes. This data is then processed by a Fourier transform (got that!?) to re-create the optical data we're used to seeing when we take a photo. Twiddling the transform allows the photographer to select a plane of focus from the data.

Hmmm.
5 effective megapixels, a nice f/2 zoom, $1500 intro price, and that rocket-launcher exterior design...

Yes, it's the Sony F-707 reborn.

I wonder if it will have the same cult following?

Never understood the technology behind the Lytro system. Shoot first, focus later -- nice slogan... but it's sort of like shooting everything at f/22, then selecting the virtual DOF with software. Seems to me that would require plenty of good exposure light at capture.

"I think I understand it when I delve into reading about it, but then it evaporates out of my gray cells as time passes." As we get older our brains develop a Teflon coating and nothing sticks.

The new Leica T yesterday and this Lytro today.
Best camera designs in years, coming in at the same time, from the smallest two(?) companies in the business. (Black Magic Design is small too and their cameras are almost as beautiful)

The Lytro looks nicer than the Leica T. (I'm a Leica owner...MP/film camera)

Joe

This Lytro business isn't for me..... yet!

Man, this is promising. It seems gimmicky at first, but imagine what it could be down the line when resolution, etc., is up to par. Imagine shooting and not worrying about AF at all. Imagine controlling depth of field in a precise and meaningful manner... could you get the look of large-format depth of field out of a smaller sensor?

It's easy to sling mud at this stuff in its infant stage, but Lytro is already making progress from what we saw just a year or two ago. Definitely a potentially powerful tool that's worth watching. Certainly more interesting than Existing Camera Mk. IX

I'm thinking that this is getting very interesting. The first thing that springs to mind is the interesting possibilities with focus stacking. And the ability to have the same image multiplied with different focal points of view. I must say that I am tempted by this.
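
For the curious, the focus-stacking possibility mentioned above can be sketched in a few lines. This is a simplified illustration (not anything Lytro ships), assuming NumPy and grayscale frames as 2D arrays: for each pixel, keep the value from the frame where local contrast is highest.

```python
import numpy as np

def sharpness_map(img):
    # Local contrast via a simple Laplacian: in-focus regions respond more.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def focus_stack(frames):
    # For each pixel, pick the frame where that pixel is sharpest.
    frames = np.stack(frames)                          # (n, H, W)
    scores = np.stack([sharpness_map(f) for f in frames])
    best = np.argmax(scores, axis=0)                   # sharpest frame index per pixel
    return np.take_along_axis(frames, best[None], axis=0)[0]
```

A real implementation would smooth the sharpness maps and blend across frame boundaries, but the per-pixel "take the sharpest" decision is the core of the idea.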

It really is quite an intriguing device.

And now with 5 Mp effective. That's not bad at all. I mean, that's really quite good enough for small prints and books and pretty much any screen.

And once they can do 12 or 15 Mp effective - can't be too far away.

I think I might have to get one of these. Just need a wide angle lens for my A7R first then this little marvel is next on the list.

If you're looking to move your website, why not use SquareSpace.com? Their software is able to import from TypePad, so migration is painless :-)

I only know about them from Leo Laporte of TWiT TV, but anything they recommend is normally very good.

The Lytro website is simply abominable. A riot of unreadable text overlaying images, confusing - nay, incomprehensible - navigation. A triumph of "design" run riot and completely trumping comprehension. Just awful.

In sharp contrast with the Leica T featured in the previous post, Lytro's industrial design is wilfully bad... again. Last time it was a small, pocket-size device that was all sharp edges; this time it's a crazy angle to the body.

Not only does the slash-styled body make the camera longer, the user expectation of a camera/phone/tablet is that the back of the device is vertical. You'd better have a pretty good reason to go against that in a fixed body.

You can see that they've already hit a problem, probably identified through user feedback, and introduced a ridge at the bottom of the screen which, matched with the slope of the body, allows you to set the screen vertically.

I think that in the future when we will be able to shoot 20 mp images at 500 frames per second (or faster), cameras will simply shoot a burst using all apertures and maybe a dozen focus points as well. ISO will be varied automatically (and maybe even shutter speed) to adjust exposure. Of course you would need a shutter speed fast enough to allow for the frame rate, but with noise free ISOs up to 10,000,000, that wouldn't really matter. A computer would sort it all out to one's preference afterwards, so one would not have to personally deal with thousands of frames from an afternoon shoot.

I believe this is a much more likely scenario than the Lytro method. This is also the way improved video will eventually impact still photography. In the future, most images will more than likely be sub-second bursts. And everything would be lightning fast, because one would no longer need auto exposure or autofocus.

Some of this can already be done now.
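
As a toy illustration of that "sort it all out afterwards" idea -- not any existing camera's firmware, just a sketch assuming NumPy and frames with luminance normalized to 0..1 -- picking the best-exposed frame from a bracketed burst can be as simple as choosing the one whose mean is closest to mid-gray:

```python
import numpy as np

def pick_best_exposure(frames, target=0.5):
    # Return the index of the burst frame whose mean luminance
    # lands closest to the target (mid-gray by default).
    means = [float(f.mean()) for f in frames]
    return int(np.argmin([abs(m - target) for m in means]))
```

A real selector would look at clipping and histogram shape rather than a single mean, but this is the skeleton of letting the computer choose after the fact.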

Edward Taylor

When the first Lytro was launched I thought, "Great, so you will also be able to not specify a focus point and have everything in focus," but it didn't seem like that was an option.
I think this is a fairly rational - but perhaps not mathematically possible - expectation.
Though the fact that it doesn't shoot 4K video, bake biscuits, and fry eggs is a deal breaker for me.

I'm wondering:

Technology-dependent? Software lock-in? Compatible with anything else?

Can I bring the files into Lightroom? Work with them there?

Might just turn out to be a niche product.

In the sour grapes department, I note that my Nokia camera has a Lytro-like app available called Nokia Refocus, free for any phone running Windows Phone 8. That isn't the sour grapes part, though. The sour grapes part is that iOS is supposedly going to get this 'breakthrough' feature in iOS 8. The mind-share will make people think Apple was first with this. Of course, who had it first means nothing. Use it or don't. I've experimented briefly with it and like the flexibility, but I'm not sure that in the phone implementation it can handle grab shots. More trials are required.

I am in general excited by the possibilities that Computational Photography (I think that's the term Ctein said these fall under) will enable. Sort of like how ultra-high-ISO sensors such as the one in the Sony A7S will allow photography opportunities that simply didn't exist before, Computational Photography will create new photo aesthetics. I see that as a good thing.

Patrick

Mike, here's my attempt at an explanation of the Lytro. This is based on a bit of extrapolation (not too much) from the somewhat incomplete technical descriptions on the web. Let's start with the light coming from the subject. Each point on the subject emits light rays traveling, among other places, towards the camera lens. In a conventional camera, when the light ray hits the lens, it gets bent towards the right location to form an image. That is, it gets bent like that IF it originated in the plane you were focused on. If it came from somewhere else, it gets bent as though it came from that plane, and winds up in the wrong place for forming a sharp image. The individual pixel sensor has no idea where the ray came from, just that it arrived.

Now a light field system is different. The sensor is divided up into blocks, each of which is covered by a microlens. The microlenses make an image of the lens aperture on the underlying blocks. So when an individual pixel in the block "sees" a light ray, it knows two things. It knows that the "main" lens sent the ray in its direction, _and_ it knows the direction that ray came from. Knowing those two pieces of information allows you to backtrace the ray out into the subject space, with a bit of computation. With a little more computation, you can place an imaginary plane anywhere you like in subject space, and pretend all the rays originated at that plane. If the plane corresponds to the real source of the rays, then you can figure out what the image should be. So this is the basic idea. You figure out the endpoint _and_ direction of each incoming ray, and backtrace it out into subject space. Selecting a subject distance to focus on, you can find out where all those rays reach that subject plane, and reconstruct an image.
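
That backtrace-and-refocus description maps directly onto the standard "shift-and-add" refocusing trick. The sketch below is a simplified model, not Lytro's actual pipeline: it assumes NumPy and that the captured rays have already been resampled into sub-aperture views (one small image per (u, v) position on the main-lens aperture). A synthetic focal plane is then just each view shifted in proportion to its aperture offset, averaged together.

```python
import numpy as np

def refocus(lightfield, alpha):
    # lightfield: array (U, V, H, W) of sub-aperture views indexed by the
    # (u, v) position on the main-lens aperture each ray passed through.
    # alpha sets the depth of the synthetic focal plane: points at that
    # depth shift across views by alpha pixels per aperture step, so
    # counter-shifting each view aligns (i.e. focuses) them.
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Objects at the chosen depth add up coherently and come out sharp; everything else lands in different places in different views and averages into blur -- which is exactly the synthetic depth of field the commenters are discussing.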

You can "easily" build an after-focus system. The software to process this is available for free. You just need 5 or more identical cameras set up exactly equidistant from each other and set up for simultaneous exposure, same settings of course.

All of this was available long before Lytro introduced their first camera. But I'm not trying to knock Lytro; it's an interesting push forward for computational photography, which is the future of photography, or digital imaging.

But personally I find lenticular photography and telecentric lenses much more interesting.

Robert

From the Lytro home page (linked in Mike's text above):


"Special introductory price of $1,499"

How much was that Leica T again, ex lens?

I suspect that the Plenoptic technology will eventually be used for something we cannot yet imagine.

For example, and I know it's an old comparison, who would have suspected that we would one day watch films at home on a disc scanned by a laser? We thought space battles, not home entertainment.

+1 for Bill Tyler's very lucid explanation.

To add to that: to represent both the endpoint _and_ the direction of the incident ray of light, one needs 4 numbers: for instance, two to represent the location of the endpoint on the sensor (roughly speaking, the row and column of the pixel catching the ray), and two angles to represent the incident direction. These need not be the exact 4 numbers used in practice, but such a representation always requires 4 numbers. Thus, the captured data is a sampling of a four-dimensional space: the space of all possible rays hitting all possible locations on the sensor. A normal sensor, which ignores the direction of incident light, is only sampling a two-dimensional space: the space of possible locations on the sensor.

Densely sampling the 4D space requires many more samples than the 2D space, which is why the Lytro imaging sensor has to have such high resolution (they originally -- in 2005 -- used a 16mp medium format sensor in a Contax 645 body) yet outputs a relatively low resolution image.
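
That resolution trade-off is easy to put rough numbers on. Using the roughly 40-megaray capture figure quoted for the new Lytro and the 5 Mp effective output mentioned above (both taken as given here, not independently verified):

```python
# Total ray samples = output (spatial) pixels x angular samples per pixel,
# so the directional sampling falls straight out of the ratio.
sensor_megarays = 40       # approximate rays captured per exposure (assumed)
effective_mp = 5           # effective output resolution (assumed)
angular_samples = sensor_megarays / effective_mp
print(angular_samples)     # prints 8.0 -- directional samples per output pixel
```

So each output pixel is paying for its directional information with roughly an order of magnitude of raw sensor resolution.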

(Re Dave Kosiur's comment: no, Ren's lightfield cameras do not move the sensor at all. The data is captured in a single exposure with a static image sensor.)

OK, now let me make this plain: IMHO, this technology, unlike autofocus and AWB, doesn't do any service to the creative part of photography. While using AF or AWB to take care of the technical aspects of the photo can let me concentrate on the compositional aspect (what is the subject, how to balance/unbalance it with the rest, etc.), this "shoot now, focus later" can only give birth to a worse generation of "sprayers and prayers." And I'm not a Luddite, I assure you...

Dear Folks,

Some modest technical observations about selective focus.

First off, you can't introduce it into a sharp photo after the fact via software. You can fake it, with various kinds of blurs applied as gradients or painted into a mask, but that's it. There's no way for the software to know what's near and far. Bokeh, though, isn't the issue -- a Gaussian blur makes a nice creamy background.
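
That "fake it with blurs painted into a mask" approach is straightforward to sketch. A simplified stand-in, assuming NumPy, a grayscale image, and a hand-made 0-to-1 mask marking what should go soft; nothing here estimates depth, which is exactly the information a light-field camera adds:

```python
import numpy as np

def box_blur(img, radius):
    # Separable box blur as a stand-in for a fancier bokeh kernel.
    if radius == 0:
        return img.copy()
    k = 2 * radius + 1
    out = img
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for s in range(-radius, radius + 1):
            acc += np.roll(out, s, axis=axis)
        out = acc / k
    return out

def fake_selective_focus(img, mask, radius=3):
    # Blend in a blurred copy where mask is near 1 ("background"),
    # keep the original where mask is near 0 ("subject").
    return img * (1 - mask) + box_blur(img, radius) * mask
```

Feeding it a gradient mask gives the familiar fake tilt-shift look; the mask is where a human (or a depth map) has to supply the near/far knowledge the software lacks.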

On the other hand, you can sharpen up a blurry photograph. Most of the data is there, even if it isn't visually accessible. Software exists to do this. The hard part is doing it without annoying visual artifacts.

For that reason, the Lytro doesn't need to collect a lot more data -- at least nowhere near as much as a simplistic analysis might lead you to think. It needs extra data to determine distance information, which it gets by having lots more pixels. I don't know the layout of the sensor, so I can't tell you exactly, but it's something like four times as many as a 'flat' camera would use.

I'd be surprised if exposure is any kind of problem. The zoom is a constant f/2. (It has to be a fast lens or else you don't have selective DoF information -- see the first point.)

pax / Ctein
==========================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
==========================================

security cameras.

The comments to this entry are closed.