
Tuesday, 18 October 2011

Comments


Re BTO: It looks like Yu Wei (?) and his Adobe colleagues have been taking care of the blur business...

Would have liked to see an example with motion blur, especially on different axes, or with camera shake and motion blur combined.

For me though, it's usually focus errors more than camera shake that ruin my pictures.

Good lord, that truly is impressive. Fast glass - who's gonna need that any more?

Oh and btw, are we paying too much for Photoshop or what? How much does a conference like that take to put on?? :)

>>I fail to see the need to have Dwight from "The Office" chirping along like a demented Ed McMahon

As I don't watch American TV over here in Germany I didn't recognize the guy - but demented he seems...

Adobe is collaborating with too many people (sometimes working on similar subjects).

I should have added this to my previous message.

This particular "Fast Motion Deblurring" demo is from SIGGRAPH 2009 paper from reasearchers in Korea.

http://cg.postech.ac.kr/research/fast_motion_deblurring/

http://cg.postech.ac.kr/research/fast_motion_deblurring/fastdeblur.pdf

But I think both SIGGRAPH papers give you some ideas where Adobe is going with this.

That's been talked about for a while... pretty cool to see it in an algorithm. Somehow, though, getting the image right in camera seems like the right answer? (And yes, it's not always going to be possible.)

Boy do I need that for my 30-year collection of blurry Kodachromes. Who is the guy in the red shoes and why was he there?

Larger images here, with explanations about when it won't work:
http://blogs.adobe.com/photoshopdotcom/2011/10/behind-all-the-buzz-deblur-sneak-peek.html

Yup, Rainn Wilson was a complete embarrassment. You can almost hear the eyes rolling and the teeth clenching in the audience. Too bad we had to wade through all the stupid hype to witness some impressive technology...

Looks like this will be the "cool feature" hook to get us all to upgrade again, just as "content aware fill" was last time around. And, just like "content aware fill", it will likely prove useful but not quite so magical as the initial demos would make it seem.

On the other hand, I can't help but wonder more and more these days if someday good technique won't become mostly irrelevant to photography, with every problem being addressed by algorithms, either in post or in the camera itself.

At that point I might just put down my cameras and learn how to paint.

Of course, I'm a bit of a hypocrite: if some day I'm editing my work and I run across a shot I just have to have, and it's ruined by motion blur, don't think for a moment that I'll resist using this tool. Sigh.

Back in 1976 in my graduate optics class we debated whether such a thing would ever be possible and decided you could never do this deblurring trick. Primitive computers filled large refrigerated rooms at the time, attended by small armies of technicians in lab coats.
Of course, TV shows have been showing license plates and faces reconstructed to crystal clarity from a few pixels for years. I know people who think that the FBI is keeping this technology from the public. I wonder how they make it to work and back...

Very cool, it's not right, but very cool.

Is BTO there to add appeal to pre-GenY viewers? That whole dialogue is so fake and staged it is sickening to a vintage baby-boomer.

W

I read where they faked the blur on one of the images for the purpose of the demonstration, but say that 2 of the 3 images were actually blurred from camera shake. I thought it looked too good to be true.

cfw

This is amazing. Also, thanks for the Adobe update.

WOW, this means I can shoot in low light at f/1.4 and ISO 100 with a 0.5 to 1 sec. shutter speed! Or wait, I think I can even ditch all my heavy f/1.4 lenses now!

Does this mean the doom of ultra fast primes?? I certainly hope not.

Hi Mike,

I saw the same video last night. Amazing, but check out the update from dpreview at http://www.dpreview.com/news/1110/11101813adobeclarifies.asp. A bit of shenanigans I must say.

Cheers,

Chris

Dear Mike,

Oh, sweet!

(and about friggin' time-- I've been wanting someone to build exactly this filter for at least 15 years)

Can't wait to see it rolled into Photoshop.

I'd even pay for it.

pax / Ctein

Adobe seems to be getting a bit of stick for using some artificial blur in one or more of the demo images, causing some people not to trust the results. Then again, it is supposed to be a sneak and not production-ready. I wonder how this combines with the change-focus-point-later (plenoptic?) camera? Now you don't need to even have a sharp picture at all, then make it sharp, then change where it's sharp, then maybe a little content-aware fill, then just paste in a better picture over the top of it...

"I'd even pay for it."

Ctein,
Since you're not a minor television celebrity, right?

Mike

@Kevin Purcell: The NRDC paper is extremely interesting (I'm annoyed I missed this session at SIGGRAPH this year), but as you point out it needs a sharp reference image of a similar scene to be applicable to deblurring. Blind deconvolution for removing camera shake with only a single image as input has been around for a while (with progressively better results as kernel detection improves), and the specific project that Adobe is building upon is probably this one: http://cg.postech.ac.kr/research/fast_motion_deblurring/
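To make the mechanics a little more concrete: blind methods like these alternate between estimating the blur kernel and running a non-blind deconvolution with the current kernel estimate. Below is a minimal sketch of that non-blind half in Python with numpy/scipy, using the classic Richardson-Lucy scheme. To be clear, this is a textbook illustration, not the POSTECH or Adobe algorithm, and the function name and defaults are invented.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    # blurred: 2-D image normalized to [0, 1]; psf: known blur kernel.
    estimate = np.full(blurred.shape, 0.5)   # flat grey starting guess
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        # Re-blur the current estimate and compare to the observation...
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / (reblurred + 1e-12)
        # ...then push the correction back through the mirrored kernel.
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return np.clip(estimate, 0.0, 1.0)

A blind deblurrer would wrap a loop around this, refining its guess at the psf between passes.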

Dear Kevin,

The NRDC paper is most interesting to me, and I thank you for the pointer.

I think it needs to be made clear to TOP readers that the Adobe and NRDC papers describe two different attacks on the problem. E.g., the NRDC method seems to require a reference image, but the Adobe one does not.

Even non-technical readers will get something from the Adobe paper-- it's written in ordinary English [grin] and it includes full res versions of examples from the video, so one may pixel peep in a most satisfying way.

pax / Ctein

WOW! CSI technology at last for the public. :)

I think that "faked" is a misleading word.

No, it was not the shake from the image used. But as they say, the shake information was taken from a genuinely shaken image and applied to the picture, and the plug-in had to calculate the amount of movement just as it would have if the processed photo itself had been shaken.

It's not as if they just used Gaussian blur from Photoshop.

Exciting! But what awful presentation.

So which is better:
- lens stabilization?
- sensor stabilization?
- post processing stabilization?

And would it have been very difficult for Adobe to prepare a real-world example for the demo? How lame...

I can't help but wonder what the 'settings files' that are loaded with each image contain...

Dear Mike,

Yeah, heheh. But I'm a software reviewer, so I don't have to pay for any of this *&*(! either, if I don't want to. Haven't paid street price for Photoshop since v4 (that's v4, not CS4), haven't paid at all since CS1.

But this, I'd pay for.

Personally, I like Rainn Wilson, but then I am so much a GenZ kinda guy...

~~~~~~~~

Dear Tyler,

As Struan pointed out, this will not work if the blur isn't uniform across the image. It can't, for example, correct for camera shake and subject movement at the same time. One or the other, but not both. That's one of the several problems with the photo Adobe showed that the software couldn't correct.
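(To illustrate that uniformity assumption: the underlying model treats the whole frame as one sharp image convolved with one kernel, as in the Python/numpy toy below. The scene and the kernel here are made up for demonstration.)

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
sharp = rng.random((256, 256))   # stand-in for a sharp frame

# One kernel (PSF) for the whole frame models camera shake:
# every pixel gets smeared along the same path. A subject moving
# through a static scene breaks this, because different regions
# would need different kernels -- which is why the algorithm can
# correct one or the other, but not both.
psf = np.zeros((15, 15))
psf[7, :] = 1.0                  # crude horizontal shake path
psf /= psf.sum()

blurred = fftconvolve(sharp, psf, mode='same')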

~~~~~~~~

Dear Kevin,

I've got a LOT of photos in my files that would be in my portfolio, except I didn't get them quite sharp. In fact, I'm spending a fair amount of time massaging one into shape because I have someone who wants a print of it. It'll be about half a day's pretty clever (if I say so myself) Photoshop hacking on my part to get to a semblance of what a plug-in like this would do better.

~~~~~~~~

Dear Beuler,

Lens and body stabilization are equally good (nitpickers, go away) and both will work better than this algorithm, in that they produce cleaner results. But sometimes, despite your best efforts, you DON'T get a sharp photo. Well, then this is a savior.

They didn't use a fake-- they used a laboratory test image. SOP, and entirely legit. The mistake was not just saying so, instead of turning it into an inside joke. The lay audience (you, et al.) didn't get it.

~~~~~~~~

Dear Bernard,

There are a bunch of parameters you need to tweak to make something like this work well. One is deciding how big the kernel should be (if you watch the video carefully you can even see a control for that). Too small and it misses some of the blur; too big and computation time becomes intractable and there's an increasing chance of conflating different edges. Another is a threshold setting that determines how edgy an edge has to be for the algorithm to pay attention to it. Otherwise the algorithm might chase after every pixel-to-pixel fluctuation in noise level.

A third, and important, one is a set of thresholds applied to the massaged FFT data to separate the significant blur from all the random stuff. Hard to explain in words, but by analogy think about what happens when you apply "Find Edges" or "Glowing Edges" in Photoshop. You not only get the real subject edges but any minor fluctuations in the image, so the result usually looks really noisy.
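(For the curious, here are toy versions of two of those knobs in Python/numpy. The names, defaults, and the Wiener-style frequency-domain formula are standard illustrations, not Adobe's actual controls.)

import numpy as np

def strong_edges(gray, frac=0.2):
    # Edge threshold: keep only gradients above a fraction of the
    # maximum, so kernel estimation isn't chasing per-pixel noise.
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return mag > frac * mag.max()

def wiener_deblur(blurred, psf, k=0.01):
    # Frequency-domain threshold: where the kernel's spectrum is
    # weak, the constant k damps the estimate instead of letting
    # noise be amplified without bound.
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    F = np.conj(H) * B / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))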

pax / Ctein
==========================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
==========================================

Ctein/MikeJ: see my follow-up post about the Korean work that the Adobe demo is based on (that should be added to the featured comment so as not to mislead people who only read "above the fold"). I wanted to show there was more than one way to do it, but hit the post button a bit too quick.

Before people go off on the "now I won't need IS or wide-aperture lenses" idea (or craft): yes, you will. :-)

TANSTAAFL.

Deconvolving an image with an appropriate PSF (i.e., deblurring) will increase the noise in the image, so you can never get to parity with a perfect photo, though you may try to deal with that noise in later NR steps. You can never increase the amount of information in the image, just change its form.

The same applies to defocus blur, which I think the method in the first paper I cited will deal with better.

I can imagine a future PS using multiple techniques to try to deal with all forms of "blurred" images.
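Kevin's noise point is easy to demonstrate in a few lines of Python/numpy. This deliberately crude inverse filter (made up for illustration; real deconvolvers regularize, but the trade-off remains) shows a small amount of sensor noise ballooning on the way back out:

import numpy as np

rng = np.random.default_rng(1)
sharp = rng.random((128, 128))
psf = np.ones((1, 9)) / 9.0          # toy 9-pixel motion blur

H = np.fft.fft2(psf, s=sharp.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))
noisy = blurred + rng.normal(0.0, 0.01, sharp.shape)

# Naive inverse filter: divide the spectrum by the kernel's.
restored = np.real(np.fft.ifft2(np.fft.fft2(noisy) / (H + 1e-6)))

print(np.std(noisy - blurred))    # noise going in: about 0.01
print(np.std(restored - sharp))   # error coming out: far larger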

