I don't care for getting information from videos or presentations...the production culture seems alienating to me. In this one, for instance, I fail to see the need to have Dwight from "The Office" chirping along like a demented Ed McMahon—and what's the premise for that brief snippet of ancient Bachman-Turner Overdrive? Were we supposed to reflect briefly back on blue-collar workers of the 1970s? But whatever: I think you'll agree that the substantial content of this makes it worth watching. You might want to hit full screen.
If you can't see the video here, here's a link to use.
Mike
(Thanks to Carsten Bockermann and others)
UPDATE: As several readers, including cfw and Christopher Lane, have pointed out, Adobe has admitted that one of the images used in this demo was faked. Dpreview has the details.
Original contents copyright 2011 by Michael C. Johnston and/or the bylined author. All Rights Reserved.
Featured Comment by Kevin Purcell: "More technical details are online (in written form :-) at http://blogs.adobe.com/photoshopdotcom/2011/10/behind-all-the-buzz-deblur-sneak-peek.html, and the paper that kicked this off seems to be this one from this year's SIGGRAPH: http://www.cs.huji.ac.il/~yoavhacohen/nrdc/. From the abstract:
This paper presents a new efficient method for recovering reliable local sets of dense correspondences between two images with some shared content. Our method is designed for pairs of images depicting similar regions acquired by different cameras and lenses, under non-rigid transformations, under different lighting, and over different backgrounds.
"Note the 'correspondance between two images' bit. You need a matching image for this magic to work...but that might be a sharp but poorly composed one in a putative further product. Note also that this works not just for getting the PSF for deblurring but can also be applied to tone (color) correction and trasferring a known mask to a new image (the latter is less interesting for still photographers, I think). Currently it's research work. Not product work (AFAICT) though I'm sure Adobe are working on that. The paper is in light version (3MB) http://www.cs.huji.ac.il/~yoavhacohen/nrdc/nrdc.pdf or best quality (47MB!) version http://www.cs.huji.ac.il/~yoavhacohen/nrdc/nrdc_siggraph11.pdf."
Featured Comment by struan: "Twenty years ago I very nearly did my PhD with a group which developed one of the first fairly robust ways to deblur images in a reasonable time without having to handhold the algorithm—important, because it is easy to push the deconvolution to give the highly biased answer you first thought of. As it is, I became more of an experimentalist, but still did a lot of image processing.
"'Faked' is too harsh. Synthetic blur is a standard procedure when you are testing algorithms. It makes the test more controlled, and is really no different from using light from a collimated test target when testing lenses.
"An expert can get amazing results out of these sorts of algorithms. The problems come when real-world pictures meet anumerate users looking for a single button to push. If you have a blur shape (kernal) which is substantially smaller than the whole image and which, crucially, is constant across the whole frame, then even automated systems can do a good job. I used programs which could impressively deblur things like number plates on speeding cars in about fifteen minutes of chugging on an IBM AT.
"For photographic applications the biggest obstacle is that there can be multiple sources of blur, and that they vary across the image. Motion blur with long lenses isn't too bad, even with short depth of field. However, with wide angles the splodge that results from moving the camera is different for objects close to the corners than for those in the middle. You can start trying to construct different kernals for the different parts of the image, but then your kernal is roughly the same size as the sub-image which leads to all sorts of problems with artifacts, false positives and good old signal-to-noise.
"I can see a use for this counteracting the effect of tiny dim apertures at the long end of a handheld superzoom, but how many of those users buy Photoshop? The technology really needs to be built into the self-print stations they now have in photo shops, or the web-interface of an online printer."
Re BTO: It looks like Yu Wei (?) and his Adobe colleagues have been taking care of the blur business...
Posted by: D.C. Wells | Tuesday, 18 October 2011 at 03:21 PM
I would have liked to see an example with motion blur, especially blur on different axes, or camera shake and motion blur combined.
For me though, it's usually focus errors more than camera shake that ruin my pictures.
Posted by: Tyler Provick | Tuesday, 18 October 2011 at 03:25 PM
Good lord, that truly is impressive. Fast glass - who's gonna need that any more?
Posted by: Patrick Dodds | Tuesday, 18 October 2011 at 03:47 PM
Oh and btw, are we paying too much for Photoshop or what? How much does a conference like that take to put on?? :)
Posted by: Patrick Dodds | Tuesday, 18 October 2011 at 03:48 PM
>>I fail to see the need to have Dwight from "The Office" chirping along like a demented Ed McMahon
As I don't watch American TV over here in Germany I didn't recognize the guy - but demented he seems...
Posted by: Carsten Bockermann | Tuesday, 18 October 2011 at 03:49 PM
Adobe is collaborating with too many people (sometimes working on similar subjects).
I should have added this to my previous message.
This particular "Fast Motion Deblurring" demo is from a SIGGRAPH 2009 paper by researchers in Korea.
http://cg.postech.ac.kr/research/fast_motion_deblurring/
http://cg.postech.ac.kr/research/fast_motion_deblurring/fastdeblur.pdf
But I think both SIGGRAPH papers give you some ideas where Adobe is going with this.
Posted by: Kevin Purcell | Tuesday, 18 October 2011 at 03:51 PM
That's been talked about for a while...pretty cool to see it in an algorithm. Somehow, though, getting the image right in camera seems like the right answer? (And yes, it's not always going to be possible.)
Posted by: marek | Tuesday, 18 October 2011 at 03:58 PM
Boy do I need that for my 30-year collection of blurry Kodachromes. Who is the guy in the red shoes and why was he there?
Posted by: Tom | Tuesday, 18 October 2011 at 04:05 PM
Larger images here:
http://blogs.adobe.com/photoshopdotcom/2011/10/behind-all-the-buzz-deblur-sneak-peek.html
with explanations about when it won't work.
Posted by: Mrten | Tuesday, 18 October 2011 at 04:06 PM
Yup, Rainn Wilson was a complete embarrassment. You can almost hear the eyes rolling and the teeth clenching in the audience. Too bad we had to wade through all the stupid hype to witness some impressive technology...
Posted by: David Johnson | Tuesday, 18 October 2011 at 04:06 PM
Looks like this will be the "cool feature" hook to get us all to upgrade again, just as "content aware fill" was last time around. And, just like "content aware fill", it will likely prove useful but not quite so magical as the initial demos would make it seem.
On the other hand, I can't help but wonder more and more these days if someday good technique won't become mostly irrelevant to photography, with every problem being addressed by algorithms, either in post or in the camera itself.
At that point I might just put down my cameras and learn how to paint.
Of course, I'm a bit of a hypocrite: if some day I'm editing my work and I run across a shot I just have to have, and it's ruined by motion blur, don't think for a moment that I'll resist using this tool. Sigh.
Posted by: Kevin | Tuesday, 18 October 2011 at 04:18 PM
Back in 1976 in my graduate optics class we debated whether such a thing would ever be possible and decided you could never do this deblurring trick. Primitive computers filled large refrigerated rooms at the time, attended by small armies of technicians in lab coats.
Of course, TV shows have been showing license plates and faces reconstructed to crystal clarity from a few pixels for years. I know people who think that the FBI is keeping this technology from the public. I wonder how they make it to work and back...
Posted by: Malcolm E. Leader | Tuesday, 18 October 2011 at 04:23 PM
Very cool, it's not right, but very cool.
Posted by: Wayne Pearson | Tuesday, 18 October 2011 at 04:32 PM
Is BTO there to add appeal to pre-GenY viewers? That whole dialogue is so fake and staged it is sickening to a vintage baby-boomer.
W
Posted by: Walter Glover | Tuesday, 18 October 2011 at 05:02 PM
I read that they faked the blur on one of the images for the purpose of the demonstration, but they say that 2 of the 3 images were actually blurred from camera shake. I thought it looked too good to be true.
cfw
Posted by: cfw | Tuesday, 18 October 2011 at 05:11 PM
this is amazing. also thanks for the adobe update.
Posted by: g carvajal | Tuesday, 18 October 2011 at 05:35 PM
WOW, this means I can shoot in low light at f1.4 and ISO100 with .5 to 1 sec shutter speed! Or wait, I think I can even ditch all my heavy 1.4 lenses now!
Does this mean the doom of ultra fast primes?? I certainly hope not.
Posted by: David.W | Tuesday, 18 October 2011 at 05:39 PM
Hi Mike,
I saw the same video last night. Amazing, but check out the update from dpreview at http://www.dpreview.com/news/1110/11101813adobeclarifies.asp. A bit of shenanigans I must say.
Cheers,
Chris
Posted by: Christopher Lane | Tuesday, 18 October 2011 at 05:47 PM
Dear Mike,
Oh, sweet!
(and about friggin' time-- I've been wanting someone to build exactly this filter for at least 15 years)
Can't wait to see it rolled into Photoshop.
I'd even pay for it.
pax / Ctein
Posted by: ctein | Tuesday, 18 October 2011 at 06:07 PM
Adobe seems to be getting a bit of stick for using some artificial blur in one or more of the demo images, causing some people not to trust the results. Then again, it is supposed to be a sneak and not production-ready. I wonder how this combines with the change-focus-point-later (plenoptic?) camera? Now you don't need to even have a sharp picture at all, then make it sharp, then change where it's sharp, then maybe a little content-aware fill, then just paste in a better picture over the top of it...
Posted by: Marshall | Tuesday, 18 October 2011 at 06:14 PM
"I'd even pay for it."
Ctein,
Since you're not a minor television celebrity, right?
Mike
Posted by: Mike Johnston | Tuesday, 18 October 2011 at 06:32 PM
@Kevin Purcell: The NRDC paper is extremely interesting (I'm annoyed I missed this session at SIGGRAPH this year), but as you point out it needs a sharp reference image of a similar scene to be applicable to deblurring. Blind deconvolution for removing camera shake with only a single image as input has been around for a while (with progressively better results as kernel detection improves), and the specific project that Adobe is building upon is probably this one: http://cg.postech.ac.kr/research/fast_motion_deblurring/
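For anyone wondering what "blind" looks like in code, here is a heavily simplified sketch of the oldest version of the idea: alternating Richardson-Lucy updates on the latent image and the kernel. It is not the POSTECH kernel-prediction scheme, and with no priors on the kernel it can happily converge to exactly the biased answer struan warned about:

    import numpy as np
    from scipy.signal import fftconvolve

    def blind_rl(blurred, ksize=15, iters=50, eps=1e-12):
        # Flat, uninformative starting guesses for image and PSF.
        img = np.full(blurred.shape, blurred.mean())
        psf = np.full((ksize, ksize), 1.0 / ksize**2)
        r = ksize // 2
        cy, cx = blurred.shape[0] // 2, blurred.shape[1] // 2
        for _ in range(iters):
            # Image step: standard Richardson-Lucy, PSF held fixed.
            ratio = blurred / (fftconvolve(img, psf, mode="same") + eps)
            img *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
            # Kernel step: the same update with the roles swapped,
            # keeping only the ksize x ksize window of lags around zero.
            ratio = blurred / (fftconvolve(img, psf, mode="same") + eps)
            corr = fftconvolve(ratio, img[::-1, ::-1], mode="same")
            psf *= corr[cy - r:cy + r + 1, cx - r:cx + r + 1]
            psf = np.clip(psf, 0.0, None)
            psf /= psf.sum() + eps
        return img, psf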
Posted by: expiring_frog | Tuesday, 18 October 2011 at 09:52 PM
Dear Kevin,
The NRDC paper is most interesting to me, and I thank you for the pointer.
I think it needs to be made clear to TOP readers that the Adobe and NRDC papers are describing two different attacks on the problem. E.g., the NRDC method seems to require a reference image but the Adobe one does not.
Even non-technical readers will get something from the Adobe paper-- it's written in ordinary English [grin] and it includes full res versions of examples from the video, so one may pixel peep in a most satisfying way.
pax / Ctein
Posted by: ctein | Tuesday, 18 October 2011 at 11:43 PM
WOW! CSI technology at last for the public. :)
Posted by: David Vatovec | Wednesday, 19 October 2011 at 12:59 AM
I think that "faked" is a misleading word.
No, it was not shake from the image used. But as they say, the shake was taken from a genuinely shaken image and applied to the picture, and the plugin had to calculate the amount of movement just as it would have if the processed photo itself had been shaken.
It's not as if they just used Gaussian blur from Photoshop.
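To put that in code: a camera-shake kernel is a thin, meandering trail, nothing like a Gaussian. Something along these lines, with a made-up trail standing in for the one they measured:

    import numpy as np
    from scipy.signal import fftconvolve

    # A hand-drawn stand-in for a measured camera-shake PSF:
    # a thin, irregular trail, not a symmetric blob.
    psf = np.zeros((21, 21))
    trail = [(4, 3), (6, 5), (9, 8), (11, 10), (14, 11), (16, 14), (17, 17)]
    for y, x in trail:
        psf[y, x] = 1.0
    psf /= psf.sum()

    rng = np.random.default_rng(0)
    sharp = rng.random((256, 256))                 # stand-in for the photo
    shaken = fftconvolve(sharp, psf, mode="same")  # the "applied shake"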
Posted by: erlik | Wednesday, 19 October 2011 at 02:37 AM
Exciting! But what an awful presentation.
Posted by: marten | Wednesday, 19 October 2011 at 03:22 AM
So which is better:
- lens stabilization?
- sensor stabilization?
- post processing stabilization?
And would it have been very difficult for Adobe to prepare a real-world example for the demo? How lame...
Posted by: beuler | Wednesday, 19 October 2011 at 03:34 AM
I can't help but wonder what the 'settings files' that are loaded with each image contain...
Posted by: Bernard Scharp | Wednesday, 19 October 2011 at 08:03 AM
Dear Mike,
Yeah, heheh. But I'm a software reviewer, so I don't have to pay for any of this *&*(! either, if I don't want to. Haven't paid street price for Photoshop since v4 (that's v4, not CS4), haven't paid at all since CS1.
But this, I'd pay for.
Personally, I like Rainn Wilson, but then I am so much a GenZ kinda guy...
~~~~~~~~
Dear Tyler,
As Struan pointed out, this will not work if the blur isn't uniform across the image. It can't, for example, correct for camera shake and subject movement at the same time. One or the other, but not both. That's one of the several problems with the photo that Adobe showed that the software couldn't correct.
~~~~~~~~
Dear Kevin,
I've got a LOT of photos in my files that would be in my portfolio, except I didn't get them quite sharp. In fact, I'm spending a fair amount of time massaging one into shape because I have someone who wants a print of it. It'll be about half a day's pretty clever (if I say so myself) Photoshop hacking on my part to get to a semblance of what a plug-in like this would do better.
~~~~~~~~
Dear Beuler,
Lens and body stabilization are equally good (nitpickers, go away) and both will work better than this algorithm, in that they produce cleaner results. But sometimes, despite your best efforts, you DON'T get a sharp photo. Well, then this is a savior.
They didn't use a fake-- they used a laboratory test image. SOP, and entirely legit. The mistake was not simply saying so, instead of turning it into an inside joke. The lay audience (you, et al.) didn't get it.
~~~~~~~~
Dear Bernard,
There's a bunch of parameters you need to tweak to make something like this work well. Examples: one is deciding how big the kernel should be (if you watch the video carefully you can even see a control for that). Too small and it misses some of the blur; too big and computation time becomes intractable and there's an increasing chance of conflating different edges. Another is a threshold setting that determines how edgy an edge has to be before the algorithm pays attention to it; otherwise it might chase after every pixel-to-pixel fluctuation in noise level.
A third, and important, one is the set of threshold settings applied to the massaged FFT data to separate the significant blur from all the random stuff. Hard to explain in words, but by analogy think about what happens when you apply "Find Edges" or "Glowing Edges" in Photoshop. You not only get the real subject edges, but any minor fluctuations in the image, so the result usually looks really noisy.
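For the curious, here's the crudest possible version of that edge-threshold idea (a toy, nothing like Adobe's actual code):

    import numpy as np

    def strong_edge_mask(img, frac=0.1):
        # Keep only gradients above a fraction of the maximum; anything
        # weaker would have the kernel estimator chasing noise.
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        return mag > frac * mag.max()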
pax / Ctein
==========================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
==========================================
Posted by: ctein | Wednesday, 19 October 2011 at 01:19 PM
Ctein/MikeJ: see my follow-up post about the Korean work that the Adobe demo is based on (that should be added to the featured comment so as not to mislead people who only read "above the fold"). I wanted to show there was "more than one way to do it" but hit the post button a bit too quickly.
Before people go off on the "now I won't need IS or wide-aperture lenses" line. Or craft. Yes, you will :-)
TANSTAAFL.
Deconvolving an image with an appropriate PSF (i.e., deblurring) will increase the noise in the image, so you can never get to parity with a perfect photo, though you may try to deal with that noise in later NR steps. You can never increase the amount of information in the image, just change its form.
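A one-dimensional toy shows the effect. Naive inverse filtering (which no product would ship, but it makes the point starkly) divides by the blur's spectrum, and that division amplifies the noise most exactly where the blur destroyed the most signal:

    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 8 * np.pi, 512))
    K = np.fft.fft(np.ones(9) / 9.0, n=512)     # 9-sample motion blur
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))
    noisy = blurred + rng.normal(scale=0.01, size=512)

    # Divide out the blur; 1/|K| is huge near K's zeros, so the tiny
    # added noise comes back far larger than it went in.
    restored = np.real(np.fft.ifft(np.fft.fft(noisy) / K))
    print(np.std(noisy - blurred), np.std(restored - signal))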
The same applies to defocus blur, which I think the method in the first paper I cited will deal with better.
I can imagine a future PS using multiple techniques to try to deal with all forms of "blurred" images.
Posted by: Kevin Purcell | Wednesday, 19 October 2011 at 02:59 PM