
Saturday, 10 October 2015

Comments

I'm kind of excited by this technology. I've been shooting professionally for over twenty-five years, and I see the DSLR as a throwback to the days of film. I think the DSLR form factor needs to go, and this is a step in an interesting direction.

One downside: I have cabinets full of lenses and cameras that will become obsolete. With the right disruptive innovation this might happen faster than expected.

To paraphrase an exchange from the movie PATTON ...

REPORTER: "We've been told about these wonder cameras the techies are working on: 16 individual lenses, computational fusing, depth-mapping technology, cameras that do the rest when you just push a button…."

PATTON: "Wonder cameras, my God, I don't see the wonder in them. Picturing without heroics, without Photoshop wizardry, nothing is glorified, nothing is reaffirmed? No Masters of the Medium, no snapshots, no bad pictures? I'm glad I won't live to see it… God, how I hate the twenty-first century."

Dear Mike,

A mutual friend put me in touch with one of the chief engineers at Light just a few days back. They thought the iPad view camera article was pretty good prognostication. Since we're in the same neighborhood (Daly City vs Palo Alto) I imagine we'll stay in touch. I'm sure I will have a review unit to report on at some point.

Me, I am gobsmacked anyone actually built the thing.

pax / Ctein

Is this like the Lytro camera everybody heard about but nobody used?

Pierre

This will change the internet forever. With the smart-phone delivering paper-thin DOF, what will they talk about on photo fora?

I have no doubt that Light will be bought by Apple, or maybe Google. Both have the deep pockets to pull this magic off.

I also think the fact that someone appears to have made this work is going to be a big shift. That said, it's also a (maybe?) obvious evolution from current digital cameras, especially those in smart phones. Cameras have become a combination of capture and software devices. This kind of device just starts to emphasize the software part of the equation to an even larger extent than (say) the iPhone does.

Some clever stuff going on in there.

Very impressive prediction by Chris... Hats off to you sir!

Kind regards

Brian

If I were Ctein I would be on the phone to those folks for his research paycheck. But, now that he's a successful author, I suppose he'll let them go;~))

As CPU power escalates, combined images are going to do some miraculous things. Take, for example, what Olympus is getting as a side effect of having sub-pixel positional control over its sensor for image stabilization -- this lets them double the camera's resolution. But it's not free. The new Olympus E-M5 II's tricks take about 30 seconds of processing time after the exposures, probably because the data paths needed to do this unobtrusively are not yet part of a standard camera. So let's take a deep breath before pre-ordering the Light L16 as well.

Maybe several breaths. I read through Dave Etchells' very excited telephone interview and my BS detector went off at several critical points. First, all that he is looking at is one or more patent filings. These may actually get made, or they may get horse-traded. It did not seem that actual prototypes have been shown. Second, the two startup stars behind the new company make exaggerated claims about their past technical accomplishments -- one says that he (in an earlier startup) single-handedly invented LTE telephony, and the other claims that he (in an earlier startup) single-handedly got Siri's voice-recognition technology started. Both final products have incorporated the engineering efforts of hundreds of people, and probably have drawn upon the early trials of ten times as many. This kind of boasting may be expected in the VC community, but it makes me check for the continued presence of my wallet.

scott

For what it's worth, earlier this year, Apple purchased an Israeli company called LinX Imaging. http://www.macrumors.com/2015/04/14/apple-acquires-linx-imaging/ LinX has developed small multi-sensor imaging modules. The largest that I've seen any information about is a 2 x 2 array. I doubt that Apple would produce a dedicated, stand-alone camera, but I'm sure they could do it very well if they wanted to. I suspect that would be Light's worst nightmare.

Very interesting, but still some unanswered questions, like:


  • What if you want more focal length options?
  • Exactly what size sensor will this match? It seems to be (in quantum terms) roughly the same as a 1" sensor, not a full-frame one.

While I appreciate the potential flexibility and convenience, is this not really just another option as opposed to a replacement? Sort of a 'super-phone' camera that costs as much as some FF cameras.

Phone cameras already come with 1" sensors and are not any bigger. However, they are available on standard phone contracts.

If I am going to carry this in addition to my phone, the extra inconvenience of an EM5 is minor, but the ergonomics are arguably better.

Phone cameras are popular because they are also phones. Single-use devices lose out as soon as the multi-use device's quality is 'good enough'. Phones have also more or less eliminated the low-end hi-fi market.

Years ago I asked on DPR why this sort of thing couldn't be possible and was told in no uncertain terms that I was a complete twit.
Well, now....

OK, I'll admit it. I don't understand this at all. Which particular problem is it trying to solve? Do we have to choose from 16 different options for every shot? Is the information from all 16 modules kept for each shot? How do you even chimp that to see if you got it right?

The first "computational photography" products --this and the Lytro-- are interesting gadgets. But I think that's all that they'll be, notwithstanding whatever commercial success they achieve. (The deliverable, practical benefits of both of these products escape me after seeing the spiels.)

Personally I think we've already seen the "next big thing" in photography: its conversion to a digital medium and its ongoing integration into world-wide communications networks. That's probably more than enough for a lifetime and enough to carry photography into its next acts, as it already is doing.

Some interesting comments. But many people want to view the future as if it were the past.

I don't think that this is a typical product launch. I doubt that many/any Light L16 cameras will ever be built for sale. What is for sale is the technology.

No, Apple will NOT build cameras. What they will build is an iPhone with Nikon D810 image quality and software lenses—just click on the 14-24, 24-70 and 70-200 buttons. If you haven't read Ctein's article, you should: http://theonlinephotographer.typepad.com/the_online_photographer/2011/01/tablet-view-camera.html

Smart-phones are nothing but computers. Computers that become more powerful with every iteration. How long until they are ready for computational photography?

For me, the really interesting aspect of this is how much the processing leverages the 3D information provided by the imaging system. I was always surprised that Lytro didn't push this aspect of their cameras harder.

There are already applications in surveying and environment capture which use depth-sensing, but having it available at all times, and in a convenient device, will open up others. Google-glass like recognition and annotation of surroundings becomes much more reliable and (potentially) sophisticated, for example.

One example mentioned in the interviews: wholly synthetic background blur, unrelated to the optical characteristics of the lenses taking the image. Optimists will embrace the always-on beautiful bokeh. Luddites will see this as producing an image of how the world is supposed to look, rather than of what was actually seen. Cue yet more philosophical arguments about what is a photograph.

The only real technical downside I can see is diffraction. Small true apertures and high pixel densities will necessarily limit angular resolution. Not an issue for regular everyday photography, but almost certainly so for long-lens work.

It will also be interesting to see how the characteristics of the still images produced vary with the processing done. There are many wonderful computational imaging tricks available, but they can't all be used at once. Combining sub-images to reduce noise, for example, means you can't use the same data to increase resolution - or, at least, there is a tradeoff between the two benefits.

Still, a fascinating device, and almost certainly the way that consumer imaging is going to go in future.

Tech moves so quickly, it seems we're already witnessing the L17 superseding the L16... if you count the optics on the front, that is.

I think the biggest question is whether it will result in ever finer images or just more of the same old crap?

Hey, if it works well, I like it, spider eyes and all, though I wouldn't mind a grip. I was thinking that all those lenses and sensors could lead to reliability problems, but with few moving parts maybe not. If you've ever seen a cutaway of a modern zoom lens it's a wonder they work longer than a few weeks out in the field.

A dissenting view: it looks like a Rube Goldberg contraption that makes overly complex what is an inherently simple process.

Is this another misplaced April 1st post or is it just another step towards removing any notion of art from our photographs?

35-150mm at f/1.2: where do I queue!

Don't worry. It doesn't matter how smart the camera technology is, there will still be plenty of bad pictures. Start worrying when they automate the ability to 'see'.

This concept appears to be a collection of smartphone-type lenses and sensors, of various focal lengths, packaged in a thick smartphone package. We all know the image-making qualities of smartphone cameras, and they are good enough for making snapshot photos.

The question is: How much can you improve the image if you take 16 of them and process them to achieve some end result, and will that end result achieve, or surpass, what we are getting today with DSLR products?

Some things that you can do include: reducing the noise and improving the dynamic range of the image. Also, if the images are made at different focus points then an extended depth of field is possible. Other possibilities include stereo imaging and correction of lens flaws including pincushion and barrel distortions.

One thing is for sure: after the 16 images are captured there will be a lot of computationally intensive processing required, which is slow and will deplete your batteries in no time at all.

In the meantime there is quite a bit of this that you can do with the equipment that you already have. For example: the Nikon V1 will take 60 frames in 1 second, and if your subject is not moving, you can image-stack them to get lower noise and higher dynamic range.
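A minimal sketch of that stacking idea (pure NumPy on synthetic data rather than real raw frames): averaging N aligned frames cuts random noise by roughly the square root of N.

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned exposures to reduce random noise.

    Averaging N frames cuts random noise by roughly sqrt(N) -- one of
    the simplest computational-photography wins.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Toy demonstration: a flat gray scene plus Gaussian sensor noise.
rng = np.random.default_rng(0)
clean = np.full((100, 100), 128.0)
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(60)]

single_noise = np.std(frames[0] - clean)
stacked_noise = np.std(stack_frames(frames) - clean)
print(single_noise, stacked_noise)  # stacked noise is roughly single / sqrt(60)
```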

This product has potential but I think will require several iterations before the technical vision becomes reality.

Some folks obsess over a few spots on their single sensor camera. 16 sensors is going to drive them to drink:)

"Small true apertures and high pixel densities will necessarily limit angular resolution"

But it's the synthetic aperture that counts. I was going to say it's the virtual aperture, but that would probably be confusing to a lot of people.

What you are really getting is the diffraction limitation of an aperture the diameter of a circle bounding the array of lenses.
Think of it as an interferometer rather than a camera, and it all makes sense.

A 100mm synthetic aperture could equal a 50mm f/0.5 or a 200mm f/2.0.
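As back-of-the-envelope arithmetic on what a true 100mm synthetic aperture would buy (Rayleigh criterion; the 5mm module aperture and 550nm wavelength are assumed illustrative numbers, and whether an amplitude-only camera can actually reach this limit is another question):

```python
def rayleigh_limit(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution (Rayleigh criterion), in radians."""
    return 1.22 * wavelength_m / aperture_m

green = 550e-9  # mid-visible wavelength, meters

single_module = rayleigh_limit(green, 0.005)  # ~5 mm physical aperture (assumed)
synthetic = rayleigh_limit(green, 0.100)      # 100 mm synthetic aperture

print(single_module / synthetic)  # ~20: the synthetic aperture resolves 20x finer
```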

The biggest limitation on this is the air between the camera and the subject if you get more than 100 meters away on anything other than a windless, clear, sub-freezing day. Maybe these people have plans for twinkle removal, if they plan on using this tech for reading road signs from a moving vehicle in real-world weather.

Anyway, this describes what I'm talking about better than I can one-thumb-type on my iPhone:

https://en.m.wikipedia.org/wiki/Astronomical_interferometer

Seems a relatively limited prototype, but a fascinating technology.

It's not so much about photography, as enabling machine vision vastly more flexible than our own.

This is interesting technology for sure. Looks like Apple has the 'folded optics' technology as well under patent. My guess is we'll eventually have this technology inside cell phones and inside point and shoots. I will however continue to enjoy shooting a traditional camera no matter what new technology arises.

Brilliant idea. But needs an eye-level viewfinder.

@ hugh crawford: my 2014 i3 car already does the reading roadsigns stuff, at least speed limit signs, and does it rather well (camera-based, not GPS), also automatic cruise control (drops out when the seeing is bad). Emergency braking for pedestrians and cyclists is happily not yet tested seriously.

Hugh, the Light L16 camera does not measure phase. Aperture synthesis of the kind that has come into use in optical astronomy in the last 15 or so years requires the sub-images to be combined with their phase information intact.

Amplitude-only combination can achieve a great deal, but it cannot re-interfere the pattern formed by diffraction off the aperture, and so cannot improve on the diffraction blur from the individual sub-apertures in this camera. Superresolution and improved dynamic range do allow for lower noise deconvolution, but the limit is still fixed.

There are other differences between what this camera does and traditional aperture synthesis (even at optical frequencies, which isn't really 'traditional'), but the preservation of phase is the biggie.

Dear Pierre,

No, this has nothing to do with the Lytro technology. An entirely different approach.

I strongly recommend people read my column about the “iPad view camera” that Mike linked to. This is very much the approach that Light is using. If you're having trouble understanding how Light is doing it, this column will help you a lot.

~~~~

Dear Scott,

Your characterization of the founders' statements goes far beyond inaccurate and well into misleading territory. They did not claim what you say they did. Furthermore, their actual claims about their past history are backed up by the public record. For a 25-word summary of what their previous companies did, it's pretty accurate.

If it doesn't meet your standards for discourse, really, that's your problem. Your veiled implication that they are some kinds of con artists is entirely inappropriate.

~~~~

Dear Steve,

It will do intermediate focal lengths.

It does not correspond to any existing sensor size. Simply combining the areas of the sensors in the 16 internal cameras wouldn't be accurate, because it's merging the data in various and different ways depending on what the goal is. It can't do everything at once–– for example, it doesn't provide an HDR, 50 megapixel, 150 mm-focal-length photograph. What you can do is choose some subset of the features depending on what you're after, but you can't maximize all of them at the same time. The camera does contain something like 130 megapixels worth of cameras, so they have a lot of data to play with, but still there are limits. For instance, if I've read the information right, in maximum telephoto mode, you get “only” 13 megapixels worth of high-quality information.

People who call this a “DSLR killer” are indeed engaging in hype. What it does give you is a remarkable amount of versatility and quality in a pocketable point-and-shoot camera.

Whether people will pay $1600 for that is a whole other matter.

~~~~

Dear c.d.,

You could very well be right about Light's business strategy. This is how both of the founders got rich–– by their previous companies developing new technologies to the point where they were useful and then selling them to much larger companies that had the financial resources to properly commercialize them at mass market scale. They may be planning the same strategy again here.

On the other hand, since they both made a passel of money doing that already, this time they might be planning on hanging onto the company and hoping that the large-wallet, leading-edge customers will buy enough of the $1600 cameras that they can move on to further generations of equipment that are more affordable. That's also a common strategy with new electronics (think of flatscreen/large-screen/HDTV's). Time will tell.

I would note that neither strategy is guaranteed to succeed.

And, of course, there's always the possibility the camera will turn out to be total crap. We will know that after I get a review unit. Assuming they don't get bought up before they even have review units to offer. It's been known to happen.

~~~

Dear David,

It is not “an inherently simple process.” Read my column on the iPad view camera. The Rube Goldberg look at the innards is simply a packing issue; squeezing the maximum number of cameras with varying focal lengths into the minimum sized package. Since they aren't moving parts (with respect to each other), really it's no more Rube Goldberg than any other tightly packed electronics.


pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
======================================

@Pierre: Is this like the Lytro camera everybody heard about but nobody used?

The Lytro is a light field camera. That is a very different form of imaging.

The Light L16 uses more conventional camera modules, but many of them, each with a different focal length and different spacing. Some of those camera modules have quite a long focal length for a "phone-sized" enclosure. All have folded optics too, so there's not much room for a phone in there as well.

The onboard computation takes all the image data and calculates a depth map (3D info) and a single image with superresolution (i.e., more resolution from combining multiple images at different focal lengths). So you can "zoom in" to the image and maintain high resolution (and not just crop a single sensor image). All of this should be tied to a simple, regular camera-phone UI (one would hope, as we know how to design those). The end user shouldn't have to worry about there being 16 sensors.

The output image will probably contain the depth map info and the image info so you can do "fake DoF" ("teh bokey") at a later time by selective blurring based on the depth map.
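A toy sketch of that selective-blur idea (pure NumPy; the depth map, threshold, and box blur are made-up stand-ins for illustration, not Light's actual pipeline):

```python
import numpy as np

def box_blur(img, radius=3):
    """Naive box blur via shifted sums; crude, but fine for a sketch."""
    out = np.zeros_like(img, dtype=float)
    taps = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            taps += 1
    return out / taps

def fake_dof(image, depth, focus_depth, tolerance=0.1):
    """Keep pixels whose depth is near focus_depth sharp; blur the rest."""
    in_focus = np.abs(depth - focus_depth) < tolerance
    return np.where(in_focus, image, box_blur(image))

# Toy scene: random texture, left half "near" (depth 0.2), right half "far" (0.8).
rng = np.random.default_rng(1)
image = rng.uniform(0, 255, (100, 200))
depth = np.full((100, 200), 0.8)
depth[:, :100] = 0.2

result = fake_dof(image, depth, focus_depth=0.2)
# Left half is untouched; right half has been smoothed.
```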

You're going to see multiple camera modules (though perhaps not the Light idea) in other phones doing similar sorts of things. Apple is very interested in this area.

Like many others here I think that computational imaging is the future. I also predict (as with other forms of art) as technology changes artists will find new uses for it.

But will Light be the ones to make this change? I'm with Thom Hogan on this one. It's priced too high. The delivery date is too far out in the future. They aren't showing working models to buyers. It all feels like an innovative small startup that's just about to get out of its depth in trying to put this thing into production.

The comments to this entry are closed.
