Written by Ctein
Please bear with me, Faithful Readers...this is a photography column. It'll just take me a bit to get us there.
I am irritated by the success that corporate/marketing droids have had in co-opting the term "A.I." That stands for "artificial intelligence," and has, in the past, commonly meant some kind of general intelligence that deals with the real world in all its perplexity. The ultimate goal may be (trans)human level, but it's gradable down—you could have parrot-level AI, or cat-level or even goldfish-level (currently, we've reached the some-but-not-all-insects level of AI).
Now it's been degraded to refer to any system that uses one of the deep learning networks (there are many different flavors of those) to learn how to do a sophisticated task. They're popping up all over, and they do amazing things. But Artificial Intelligence? Nah, I don't think so.
(What are those promoters going to do when we get genuine AI? Call it AIFRTTH—Artificial Intelligence, For Real This Time, Honestly? Maybe that's the first question we should ask real AI, if we ever get it.)
I think there's a better descriptor for what these programs do: Artificial Common Sense. I'm not being flippant. I think it's right and significant, and it lets us take these programs seriously. Believe me, we should be taking them seriously.
Common sense is something that we've thought of as being a fairly sophisticated function, a way of distilling down the otherwise unfathomable complexity and intricacy of the world into something we can manage almost intuitively. "Sophisticated" isn't the same as intelligent; dogs and cats exhibit a great deal of common sense, but down at their level of mentation.
At the human level, common sense isn't necessarily either smart or correct. Everyone here has likely had the experience of getting into an argument with somebody who, backed into a losing corner, utters those fateful words—"but, it's just common sense that...." What then comes out of their mouths is invariably wrong. Common sense is not, in fact, good at dealing with the esoteric. But it's pretty damned good at dealing with everyday life and figuring out what's important and what's ignorable.
At the forefront
Which brings me to photography software. Consider the sorts of manipulation and filter programs we use routinely, like noise reduction, enlargement, and sharpening routines. As humans, we look at a photograph and we can easily figure out what is noise and what is detail, because we just know what the world looks like. Sure, there are the edge cases, where the subject detail is getting so similar to the grain/noise in the photograph that we're not sure if the feature we're looking at is real or an artifact of the media. But by and large, we know the difference. When we sharpen or enlarge a photograph, we want the real detail improved and enhanced, not the grain and the artifacts. When we run noise reduction, we don't want it to degrade the real image detail and tonality. We know and can see the difference because, well, we just know. 'Cause we're smart, sophisticated real-world data processing machines.
Until recently, we could imagine that this kind of discrimination was also a sophisticated function, something that would require a pretty high level of real AI to achieve. Sure, we've got algorithms that do a fair job of brute-force faking it, but it's pretty obvious that they're faking it with frequency filters and threshold discriminators and the like. They don't really understand a photograph, not the way we do.
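For the code-minded, here's roughly the shape of that brute-force faking: an unsharp mask with a threshold. This is a minimal sketch of my own, with arbitrary numbers, not anybody's actual product code.

```python
# Boost whatever local contrast exceeds a cutoff and hope that's "detail"
# rather than grain. Radius, amount, and threshold are invented values.
import numpy as np
from scipy.ndimage import gaussian_filter

def threshold_unsharp(img, radius=2.0, amount=1.5, threshold=0.02):
    """img: grayscale image as a float array scaled 0..1."""
    low = gaussian_filter(img, sigma=radius)
    highpass = img - low                     # the "frequency filter" part
    mask = np.abs(highpass) > threshold      # the "threshold discriminator" part
    return np.clip(img + amount * highpass * mask, 0.0, 1.0)
```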
Now we've got software, based on some deeply learnt neural networks, that appears to be doing the real thing, making the same kind of common sense decisions about a photograph that we make when we look at it. These programs don't work by some programmer handing them a finished set of rules and equations. Instead, they throw problems and solutions at them and say, "Okay, now figure out how to get from A to B." The more cases the networks have to chew on, the cleverer the solutions they invent. After chewing on a few million comparison photographs, they've got some pretty clever ideas.
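And here, by contrast, is a toy version of the learn-from-examples idea, again my own sketch with invented numbers. Given pairs of blurred and sharp signals, a little five-tap kernel "figures out how to get from A to B" by nudging each tap in whatever direction shrinks the error. Real networks are vastly bigger, but the training loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(1)
sharp = rng.standard_normal((100, 64))                # "B": the answers
blur = np.array([0.25, 0.5, 0.25])
blurred = np.array([np.convolve(s, blur, mode="same") for s in sharp])  # "A"

def apply_kernel(signals, w):
    return np.array([np.convolve(s, w, mode="same") for s in signals])

w = np.zeros(5)
w[2] = 1.0                                 # start as a do-nothing kernel
lr, eps = 0.2, 1e-4
for step in range(800):
    base = np.mean((apply_kernel(blurred, w) - sharp) ** 2)
    grad = np.zeros_like(w)
    for i in range(len(w)):                # how does each tap affect the error?
        trial = w.copy()
        trial[i] += eps
        grad[i] = (np.mean((apply_kernel(blurred, trial) - sharp) ** 2) - base) / eps
    w -= lr * grad                         # nudge toward smaller error

# The taps drift toward a sharpening shape (big positive center, negative
# neighbors) without anyone typing that rule in; it falls out of the examples.
print(np.round(w, 3))
```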
At the forefront is Topaz Labs. For several years now they've been turning out, in my opinion, the best Photoshop filters that money can buy. Yeah, that's sometimes a matter of taste. For example, I've got at least a half-dozen different noise-reduction programs, because no two of them work the same way and each works better on some kinds of images and noise than the others.
In the past year, Topaz Labs have leapt ahead of everyone else by harnessing these buzzword-AI tools to develop standalone programs and plug-ins that are beyond good. They are unbelievably good. That's not hyperbole—I look at some of the results and I simply don't know how it's possible for a "dumb" program to achieve them.
This column I'm going to talk about Topaz Sharpen AI, a standalone program currently in version 1.4.0 and on sale for $59.99. (These programs are frequently updated, not to fix bugs or add features, but to incorporate the very latest cleverness that the neural net has figured out as it chews on ever more real-world cases.)
Sharpen AI is already scary clever. The photograph below (figure 1) is the very first photograph I threw at Sharpen AI. I did not cherry-pick it out of hundreds of trials. It sold me on the software.
Figure 1
(Note: Even if you click through to the enlarged versions of the figures, you're not seeing them at 100% and there may be compression artifacts—the limitations of the host publishing software. I've put a full-resolution version up on my website here with no compression artifacts, for any who want to pixel-peep.)
This is a photograph I'd normally throw away—I was on a moving boat and it was late in the day. It's the best of a pretty poor sequence, if I'm being honest. I figured it would be a good test case. The first program I tried was Photoshop's Shake Reduction filter. This algorithm is supposed to analyze the photograph, calculate what's causing the blur, and undo it. It's called deconvolution. The tricky part is coming up with exactly the right point spread function and applying it without generating artifacts. As you can see in figure 2, Shake Reduction still isn't very good at that. In fact, I've yet to find any but the simplest and most minimal cases of subject blur or camera shake where it produced an artistically-usable result.
Figure 2
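Deconvolution is easy enough to play with if you're curious. Below is a minimal sketch of mine using scikit-image's Richardson-Lucy routine, which is emphatically not Adobe's actual algorithm; the point spread function here is a made-up straight smear, and estimating the real one from a shaky photograph is precisely the hard part.

```python
# Blur a test image with a known point spread function (PSF), then ask
# Richardson-Lucy deconvolution to undo it. All the numbers are invented.
import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float, restoration

rng = np.random.default_rng(0)
image = img_as_float(data.camera())        # any grayscale test image will do

psf = np.zeros((9, 9))
psf[4, :] = 1.0                            # a straight 9-pixel horizontal smear
psf /= psf.sum()

blurred = convolve2d(image, psf, mode="same", boundary="symm")
blurred = np.clip(blurred + 0.005 * rng.standard_normal(blurred.shape), 0, 1)

restored = restoration.richardson_lucy(blurred, psf)   # iterative deconvolution
```

Even in this toy setup, hand the routine a PSF that's a little off and the ringing shows up immediately, which is the Shake Reduction problem in a nutshell.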
Then I handed figure 1 off to Sharpen AI, selected the "Stabilize" processing mode and let it rip at its auto-default settings. Two minutes later, I got back figure 3.
Whoooaaa...
Yup, two minutes. Sharpen AI is not only clever but a hard worker. On a Retina iMac with a 4GHz i7 processor, it takes that long using seven threads to process a 20-MP image. That is a heckuvva lot of gigaflops, on the order of 50,000 calculations per pixel. Not your everyday sharpening routine, nosiree. In fact, just updating the preview within the application can take as much as 30 seconds. Not surprisingly I have "auto update" turned off; otherwise every time I moved one of the adjustment sliders the program would want to take another 30 seconds to update the preview. (Almost always, Sharpen's automatic settings produce the best results, so I'm not sure why I bother. I guess I'm just a fiddler.)
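For the curious, the back-of-envelope arithmetic behind that claim looks like this (the 50,000-per-pixel figure is my own order-of-magnitude estimate; the rest just follows from it):

```python
# Rough arithmetic: a 20-MP image at ~50,000 calculations per pixel,
# finished in about two minutes.
pixels = 20e6
ops_per_pixel = 5e4                     # order-of-magnitude estimate only
seconds = 120

total_ops = pixels * ops_per_pixel      # ~1e12 operations
rate = total_ops / seconds              # ~8 billion operations per second
print(f"{total_ops:.0e} operations total, ~{rate / 1e9:.0f} billion per second")
```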
Once you've gotten past being awed at the overall improvement, take a close look at the thin twigs and branches along the lower sides and the bottom of the picture. Sharpen AI has brought them back into an amazing focus without any ringing artifacts and while leaving the background largely untouched.
If you look closely at the edges of the birds and the nest in the sky areas, you can see some small artifacts—thin vertical lines that extend a few pixels away from the subjects. These are much more evident on the screen than they would be in a print. In fact, I printed out the full image on 17x22" paper and they were completely invisible. The birds and the nest looked perfectly sharp with no evidence that any enhancements had been run on the original photograph.
Other modes
Sharpen AI has three processing modes—Sharpen, Stabilize, and Focus. You've seen what Stabilize can do. Sharpen does more ordinary sharpening and is recommended for photographs that are otherwise in-focus and un-blurred. The effect is modest, but it's clean. So far, I haven't found an out-of-focus photograph where Focus improves things. Unlike Sharpening, which is understated, Focus seems to go way overboard. Perhaps I haven't found the right image to apply it to, but so far it's been useless.
On every photograph I've tried, Stabilize does the best job, better than Sharpen even on photographs that are in focus and don't have any visible camera shake or subject movement. I haven't reached the point of applying it to every photograph, even the ones that don't look like they need any sharpening, but I'm starting to wonder if maybe I should try that.
Sharpen AI will process all kinds of original files—TIFF, JPEG, PSD, and even RAW. I don't recommend using it on RAW, though, if you like the rendition of your usual RAW converter. Me, I like what Adobe Camera RAW does and Sharpen's raw converter doesn't produce the same results, in terms of tone, color or geometric correction. I'm not saying it's bad, but it's different, and even though Sharpen will output a new raw file, the new one won't render the same as the original. My recommendation is to make your usual corrections in your preferred RAW converter, generate a 16-bit TIFF from that and put Sharpen to work on the TIFF.
In past years, I've deleted photographs that were artistically appealing but unacceptably blurry, knowing that I'd never be able to pull prints from them that met my technical and aesthetic standards. I'm thinking I may have made a mistake.
Topaz Sharpen AI—recommended without reservation.
Next column (in a couple of weeks) I'll talk about Topaz Labs' Gigapixel, which is absolutely, positively the best upsampling program ever. It does all those image-improvement miracles that other upsampling programs claim to do... and don't.
Ctein
Ctein, pronounced "kuh-TINE," rhymes with fine, is one of the most experienced and accomplished photo-writers alive. He was TOP's Technical Editor before leaving for a new career as a science fiction novelist. He has written two books of photo-tech, Digital Restoration from Start to Finish and Post-Exposure. This is his 343rd column for TOP; older columns can be found under the "Ctein" Category in the right-hand sidebar.
Original contents copyright 2019 by Ctein. All Rights Reserved. Links in this post may be to our affiliates; sales through affiliate links may benefit this site.
Please help support The Online Photographer through Patreon
(To see all the comments, click on the "Comments" link below.)
Featured Comments from:
Aashish Sharma: "I haven't even started reading the article, but I'm writing to let you know how delighted I am to see Ctein back. I have always enjoyed his writing on everything and missed it while he was away. An article by him is even more welcome as it is my birthday! Thanks for writing, Ctein. I'll now go back and read it."
Michael Elenko: "Thank you for this post. I've had this pano HDR I created eight years ago. It was in the mountains, and the mosquitoes were the worst, even getting in my eyeballs. So I could not provide enough stability for the brackets to line up right, plus dusk was coming on, so my shutter speed was too slow for effectiveness. But the shot was special, just too mushy for my standards and for the plethora of tools I tried. Until I used Sharpen AI at the settings recommended by Ctein. Wowie zowie, this finally looks right. Thank you, I've waited for this."
Joe Holmes: "I tried a demo copy of Sharpen AI on a couple of my images and immediately paid for the software. I find that it can work miracles on certain images, though there are times when it gets everything wrong, when it simply cannot handle an image and leaves crazy artifacts. And then there are times when it does a brilliant job on certain portions of an image but creates bizarre and ugly artifacts on other parts, like the surface of a river or clouds. In those cases I can make the best use of it by bringing a Sharpen AI image into Photoshop as a layer and applying a layer mask to apply the best parts on top of the original image. Even with that extra work, it's worth it, because the results are way better than what I can do with other tools."
Marcelo Guarini: "First, thanks for the initial part of this post, about artificial intelligence. I'm a professor of Electrical Engineering at a university and I am tired of repeating the same to my colleagues, especially those who work in the so-called field of artificial intelligence. I will show them your comments so they can see that I am not the only one with the same opinion. Thanks for the term 'artificial common sense.' Artificial intelligence is too high-flown a term for what deep learning is.
"I have been using Topaz Labs Gigapixel from its inception, and have been very surprised by its excellent results. As you say, 'positively the best upsampling program ever.' Almost every time I open the program to use it, there is an upgrade available. I have been a beholder of its evolution through cumulative training and refinements. Today its results are much better than at the beginning; you can easily see the improvements. It is just a very cool tool. I will try Sharpen AI and will follow your indications regarding the different modes and auto-update.
"Thanks much for your post, and happy to see your reviews again."
Soeren Engelbrecht: "'Since it is often hard to distinguish common sense from equally common nonsense, professional advice is useful.' —Leslie Lamport."
Tom R. Halfhill: "I'm equally uncomfortable with the buzzphrase 'artificial intelligence'—and I write about microprocessors for a living. (I'm a senior technology analyst for Microprocessor Report, the leading newsletter in this field since 1987.) I prefer 'machine learning,' which I think is more accurate and less grandiose, but the 'AI' label is unavoidable because it has become so widespread. Essentially, it's sophisticated pattern matching and sorting. But at a high level, sometimes we don't understand how these neural networks achieve their results. We turn them loose for training on a huge data set, let them chew on it, then evaluate their performance. It's almost a black box, although programmers can delve deeply into the structure to see what's happening. Of course, we don't fully understand how the human brain works, either, so even 'natural intelligence' is mysterious.
"The Topaz Sharpen AI tool is indeed impressive. What I need is a dust-and-scratch removal tool that's equally good. I've scanned tens of thousands of old family photos that need more fixing than I can handle. The automated tools I've tried often can't tell a dust spot from a pearl earring or a scratch from a crease in clothing."
Ken Lunders: "Whoa!! I downloaded Topaz Labs Sharpen AI for a 30-day trial. Well, the trial ended after less than an hour. I'm now $59.99 poorer, but 100% happy with the purchase. I'm delighted to see Ctein providing content here once again."
eliburakian: "Damn, I just spent $300 on the whole Topaz AI suite after reading this and trying out the software. Thanks a lot, now I have some explaining to do to the spouse."
Great article by Ctein, and I’m looking forward to the next one. It seems Mike and Ctein have moved past their previous differences, which is good news for TOP readers! :-)
Posted by: Richard Michalski | Monday, 23 September 2019 at 07:42 AM
Time passes and things change. I am very happy to see Ctein return as a guest contributor to The Online Photographer. Looking forward to his unique insights and perspective in future columns!
Posted by: John Berger | Monday, 23 September 2019 at 08:11 AM
Great to see Ctein back, and looking forward to his great columns.
Posted by: steven ralser | Monday, 23 September 2019 at 08:59 AM
Welcome back, Ctein.
It is maybe worth noting that the actual scientists involved in these deep learning network things seem to have quietly standardized on "Machine Learning" rather than A.I., though the marketing types have not budged.
Also worth noting: at 50,000 calculations per pixel, it is, I think, fair to characterize the results of these tools not as "photographs" as such, but rather as a computer-generated painting based on a photograph.
Whether this minor quibble has any impact in the real world remains unknown. Some people are hollering that the sky is falling, and the underlying essence of the very idea of "photograph" is being lost (on about every 3rd or 4th day I might be one of them), quite a few people don't see what the issue is, and some of us (me on the other days) are interested to see what happens.
Posted by: Andrew Molitor | Monday, 23 September 2019 at 09:28 AM
I bought the Topaz Suite several years ago, and they give you free updates for life. So I got some of the AI products as free upgrades. And they are that good. I only had a one-month trial version of Gigapixel (there was no enlarge module in the original suite), and it left Photoshop in the dust... well, results-wise, not speed-wise, because this thing is processor-intensive. A 4GB GPU is recommended, but mine has only 1GB.
Posted by: toto | Monday, 23 September 2019 at 09:49 AM
1) How nice to have a new article from Ctein. I read a paragraph and recognized the voice before noting the byline.
2) I am in complete agreement with the AI buzzword silliness.
3) That stabilize plug-in is astounding.
Dave
Posted by: David Fultz | Monday, 23 September 2019 at 10:09 AM
Great to see Ctein back on TOP (no pun intended) as I've always enjoyed his writings. This post is very timely for me as I've just started looking into AI sharpening and up-rezzing software.
Posted by: PaulW | Monday, 23 September 2019 at 10:18 AM
I call it “computational photography,” and while it may not really be intelligent, it has a metaphorical mind of its own: even the designers of the neural network software typically are unable to explain precisely what the network is doing as it modifies the source image to produce the target.
Posted by: Chris Kern | Monday, 23 September 2019 at 10:35 AM
Welcome back Ctein!
Posted by: John Igel | Monday, 23 September 2019 at 10:36 AM
"Is it live, or is it Memorex?"
Posted by: Speed | Monday, 23 September 2019 at 10:43 AM
Will have to check it out, as I've learned to respect Ctein's opinion in these matters. His previous advocacy in TOP of using printer-managed colors instead of photo-paper profiles in Photoshop changed my print results from iffy to near-perfect every time. Thanks, Ctein!
Posted by: Richard Barbour | Monday, 23 September 2019 at 10:54 AM
I've successfully saved a couple of photos with the focus sharpening mode. It works well if you slightly missed focus. I believe what it is doing is telling the algorithm to deliberately sharpen areas of the photo it would ignore otherwise (normally it doesn't seem to apply excessive sharpening to OOF areas).
But I agree that stabilize does a better job most of the time if you like somewhat aggressive sharpening.
However I'm even more impressed with Topaz's AI NR software. It's really magical.
Posted by: Kelly Graham | Monday, 23 September 2019 at 12:18 PM
At last, someone else who thinks calling it Artificial Intelligence is Genuine Stupidity!
Posted by: Andrew | Monday, 23 September 2019 at 12:46 PM
I agree wholeheartedly with Ctein’s assessment. I’ve used the Topaz blur reduction on old scans from 35mm and medium format negatives and slides and it works miracles on those as well. I’ve now incorporated this function into my post processing of almost all my photos unless I absolutely know they are as sharp as possible. I just restored an old and faded scanned print from an Instamatic and it was astounding.
Posted by: William Cook | Monday, 23 September 2019 at 12:47 PM
Thanks for this. I'll buy on this demo/recommendation. That said, I trialed and now have purchased their Gigapixel AI, which I have wanted for myself but now actually need for a project I am doing for work. It's a mixed bag, frankly. Yes, it works, but it also does some very strange things with sharpening---halos galore. You have to feed it a somewhat under sharpened file, and also turn down output sharpening in LR to low. Too soon to make a final judgement, but so far it's tricky to work with. And the AI bit? I wish there were more manual controls....
Posted by: tex andrews | Monday, 23 September 2019 at 12:53 PM
I have always hung onto digital images that weren't hopelessly flawed thinking that the technology and/or my skills might catch up. I'm glad that I have been a pack rat.
Posted by: James Bullard | Monday, 23 September 2019 at 01:44 PM
Topaz Gigapixel is an extremely good upsizing program for daytime images. It has a lot of trouble with dusk or night shots, however: massive numbers of artifacts there. And it has a really stupid size limit of 22K pixels output. According to their support, allowing larger files would take too long. This is one of the dumbest excuses I think I've ever heard from a software company.
Posted by: Robert Harshman | Monday, 23 September 2019 at 02:31 PM
One fervent wish granted, Ctein has written again on TOP.
Posted by: Jim Freeman | Monday, 23 September 2019 at 04:04 PM
Wow.
Posted by: Robert Roaldi | Monday, 23 September 2019 at 04:14 PM
My father, a genuine "good ole boy" of the South used to say "The only thing about common sense is that it isn't."
When I first heard the term used in a meeting at MIT's Media Lab decades ago, I cynically said I’d believe in “artificial intelligence” when someone proved to me that “natural (human) intelligence” existed.
Unfortunately, what gets called "artificial intelligence" or "AI" today is more hype than real intelligence. Apply an algorithm to something that is complicated to understand and call it "AI" for marketing purposes. High level AI uses iterative processes to "learn" and apply that to future situations, but that often is not successful.
Ford's AI partner for autonomous vehicles, one of the top focuses of AI today, said recently that AI could handle only about 80% of the driving situations - keep a car in lane, make turns, etc. But random events like bikers, pedestrians, etc. were beyond its capability today.
That's obvious from Tesla's AI which keeps running into stationary vehicles on roads - it cannot react fast enough - or Uber's that ran over a pedestrian pushing a bicycle.
I have a section on AI in my book "Delusional Management" that starts with:
Isaac Asimov, renowned science fiction author, in the 1950 collection of short stories called “I, Robot” described the three laws of robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
and
Quite a few high profile techies have expressed concern about AI, including Stephen Hawking, Elon Musk and dozens of AI researchers. Steve Wozniak was one of them but flip-flopped, seemingly because he came to the conclusion that AI was still incompetent. Perhaps that was premature.
and
The other thing that scares me about AI is how it depends on who is programming it or how it learns on its own. MIT researchers built an AI they called Norman (norman-ai.mit.edu), named for Norman Bates in Hitchcock’s Psycho. Norman was programmed to perform image captioning, a popular method of generating a description of an image. They trained Norman on a Reddit forum dedicated to documenting the reality of death.
Then they compared Norman to a standard image captioning neural network. MIT described Norman as “AI-Powered Psychopath.” Perhaps the best description of what Norman learned was the title of an article on theverge.com: “MIT fed Norman data from Reddit, and now it only thinks about murder.”
The story about Norman is worrisome. If the techies in Silicon Valley developing AI are ethically challenged, maybe from spending too much time playing video games where the goal is to kill or destroy, can they be trusted with developing AI? Do they know Asimov's 3 laws? Or care?
I think we should be afraid of AI. Really afraid. Especially if engineers train them on Internet news. But then again, can AI be any worse than humans?
BTW, welcome back, Ctein! Missed you.
Posted by: JimH | Monday, 23 September 2019 at 04:15 PM
Thanks, Ctein. Welcome back!!!
Posted by: McD | Monday, 23 September 2019 at 04:41 PM
Ctein,
Great recommendation.
I just tried Gigapixel last month and look forward to your recommendations.
I was working on an image from a now-ancient Canon 1Ds from 2007 that I had last printed huge on canvas ten years ago, using the canvas to hide that I was doing an over-enlargement. A client now in 2019 wanted the same size, 54”x36”, but on paper. Gigapixel saved the day. It was shot from a two-man ultralight, so it’s not like I could go back and reshoot with a modern camera. So Gigapixel was like a time machine for me.
Posted by: Jack | Monday, 23 September 2019 at 04:42 PM
So great to have a column by Ctein back on TOP again, even if it is going to cost me.... I already know the image I need this software for! I work in an NHS hospital here in London and AI is all the rage in our sector too... But generally it really means clever predictive analytics. It is clever though....
Posted by: Jonathan Schick | Monday, 23 September 2019 at 05:32 PM
Thanks Ctein; I bought it ! Amazing !!
Posted by: Kenneth Voigt | Monday, 23 September 2019 at 06:02 PM
Thanks for that.
I've just begun exploring Gigapixel AI, and I'll be interested in your review of that program and, particularly, in how that software can interact in landscape photographs with the Sharpen AI program that you reviewed here.
Posted by: brian | Monday, 23 September 2019 at 07:08 PM
Agree with earlier posts, great to see Ctein back on TOP.
Posted by: Mike Potter | Monday, 23 September 2019 at 07:12 PM
I’m delighted to see Ctein writing for TOP again. Thanks for a very useful column.
Posted by: Bill Tyler | Monday, 23 September 2019 at 07:19 PM
Also happy to see Ctein back on TOP
Posted by: Edward Taylor | Monday, 23 September 2019 at 07:33 PM
So questions: Does the software correct "focus" ie: missed focal plane? Or shake/movement/slow shutter speed? Or both? How does it handle varying DOF -- f1.8 vs f16? And does this mean I can now use non-stabilized, old (cheap) MF adapted lenses and correct my old-eyesight focus/steadiness errors? Great article, and sounds like a great leap in processing software.
Posted by: Ken Hulick | Monday, 23 September 2019 at 07:36 PM
Welcome back, Ctein! There has been a Post-it note on your book reading “Ask Ctein if there is a Chinese in Nixon joke that got edited out.”
Is that image cropped showing the left side of the original image, and if so did you use the Topaz software before or after you cropped?
The reason I ask is that to me it looks like a crop, and I was wondering if the Topaz software is trained to know what “good images” look like or if it’s trained to know what “unwanted artifacts” look like, or both.
Making the assumption that the optical system’s axis passes through the center of the image would be a head start for a non-AI tool, but if the training set of images were cropped, then I guess the neural net could learn to work on cropped images. My experience with non-AI image correction has been that it works best on unmanipulated images.
I’ve been playing with some AI image enhancing software and I notice that it does a really good job of guessing what something should look like until it abruptly gives up. The pine needles in your image, for example. This is particularly apparent when the image includes signs or packaging with text that is too small or blurred to be resolved. The software makes everything look really great except for the blocks of text, which are either still blurred or rendered into a sharp random pattern.
I am concerned when this gets included by default in new cameras and software.
You could get some interesting errors if for instance all Porsches in the training set had California license plates, and consequently the software put nice sharp California plates on all blurry images of Porsches. This is going to put the final nail in the photos = truth coffin.
I expect the new camouflage will look like camera artifacts. If I paint my car to look like a piece of lint surrounded by jpeg artifacts will AI software retouch it out of the image? Would “Common sense” dictate that if it looks like dust and noise it is dust and noise, or that if it’s the size of a car it’s probably not lint?
Tesla seems to have had a problem a while back with its software deciding that on the highway all cars are moving, and if something isn’t acting like a car by not moving, then it is not a car, maybe a shadow or something, and the Tesla just ignores it. When the not-moving thing that might be a shadow turns out to be a fire truck, as happened a year ago when a Tesla ran into a fire truck parked in the middle of a freeway lane, it is problematic. Slamming on the brakes for shadows can be problematic too.
Posted by: hugh crawford | Monday, 23 September 2019 at 08:04 PM
This makes me happy.
Posted by: Rick Popham | Monday, 23 September 2019 at 08:20 PM
Topaz offers a 30 day trial. It's worth seeing for yourself. I was curious to see how it compared to my own sharpening approach in Lightroom. I gave Topaz an unsharpened file and processed it using "sharpen" with the settings determined by the "auto" button. The results are mixed -- in some areas, perhaps a bit better, in others clearly worse. Sometimes it kills detail that Lightroom preserved. Other times it brought out details better than Lightroom could. It handles sensor artifacts in my Fuji GFX 50R files a bit better than Lightroom. But it also occasionally creates false colour where there is none in the Lightroom version (normally one or two pixels in width, along linear features). Interestingly, it quite happily makes areas that are slightly out of focus look like they are in focus (or less out of focus). If you actually wanted those areas to be slightly out of focus, then you're out of luck because it's not selective. In terms of the "sharpen" setting, I prefer the results I can get with Lightroom.
The "stabilize" tool that Ctein describs is another level entirely. Supplied with the same unsharpened Fuji GFX 50R file from Lightroom I used to test "sharpen", Topaz createed details that simply weren't there. It's quite remarkable in rather unsettling way. Parts of the picture that were out of focus now look in focus at 100%. Parts that were in focus are much sharper and more detailed than sharpening alone (with Topaz or Lightroom) could produce. Where previously there was only a hint of blurry detail or texture, now there is detail and texture. If you selected a specific aperture because you wanted to gently blur the foreground or background, Topaz isn't having any of it; your gentle blur is now resolved into "details".
Is this good or bad? It is a tool in the tool kit -- and potentially an incredibly useful one. However, if you use it, you have to be willing to accept that the file created using Topaz's "stabilize" setting is not what the camera recorded. Or perhaps more accurately, it's even less what the camera recorded than the pictures you were creating before you started using it!
Welcome to computational photography.
Posted by: Rob de Loe | Monday, 23 September 2019 at 08:53 PM
The best definition I heard of A.I. that is used in marketing is statistical inference based on very large data sets.
Posted by: Frank B | Monday, 23 September 2019 at 09:45 PM
AMAZING! Ctein is a treasure. It's great to see him back and the software is amazing too.
Posted by: Andrew Chalsma | Monday, 23 September 2019 at 10:30 PM
It's a good thing Cartier-Bresson considered sharpness a bourgeois concept, as he's probably a blur spinning in his grave like that.
Posted by: TC | Tuesday, 24 September 2019 at 12:04 AM
I tried it, and it did not work for me at all. It was very slow on my old AirMac, and the results were still blurry. Maybe I should try it again, following Ctein's procedure.
Posted by: Rube39 | Tuesday, 24 September 2019 at 04:36 AM
Great article. I had issues with the "vertical lines" artifacts, but they only occur when using the GPU. Try the CPU setting and the verticals are gone. Hope that helps. Pete
Posted by: Mark Hutch | Tuesday, 24 September 2019 at 05:19 AM
Ctein
good to see you returning to TOP. Welcome back.
And as for this bit o' software, good grief!
Roger
Posted by: Roger Bradbury | Tuesday, 24 September 2019 at 05:21 AM
Just as a data point: processing a 20 Mb MFT file, output as a 16-bit TIFF, using Stabilize took around 70 sec. on a six-year-old Win 10 Pro desktop.
i7-3820 cpu / 16Gb ram / Nvidia 1060 (6Gb vram)
CPU was at 50% and the GPU at 100%.
Posted by: Ian Seward | Tuesday, 24 September 2019 at 06:43 AM
Two fantastic surprises here; one is the return of Ctein with a guest post, the other being coverage of a software line I had recently been researching. Of all the 3rd party image enhancement options available, Topaz seems to be the most intriguing. Looking forward to the Gigapixel post. As a m43 user, I'll take all the help I can get.
Posted by: Keith | Tuesday, 24 September 2019 at 08:07 AM
Welcome back Ctein.
I'm going to sell all my expensive lenses and use cheaper ones....why not. LOL.
Seriously this sounds like a really useful tool to have.
Can I ask if Ctein or anyone here has a really good B/W conversion plugin?
I used to have one which was brilliant, but it was never updated; it gave options like mimicking Pan/Tri-X/etc. Now all I see are options for fairly useless weird conversions, e.g. vintage/antique etc.
LONG LIVE the Black and White print.
Posted by: louis mccullagh | Tuesday, 24 September 2019 at 09:17 AM
Maybe they just send the file to some sweatshop in India where they use a human to do the actual processing with PhotoShop. 8^)
Posted by: KeithB | Tuesday, 24 September 2019 at 01:33 PM
That's a fuzzy photograph, but not irreplaceable. How about trying again with what was learned from the failure? Am I alone in thinking that's what photography is about?
[Illustrations are just examples. It wouldn't make very much sense to test this application with a sharp photograph, after all! --Mike]
Posted by: Andrew J. | Tuesday, 24 September 2019 at 01:46 PM
A question to Ctein. Going in the opposite direction to upsizing, is there a preferred method to downsizing? Turning a file from a 24MP sensor down to a 12MP file that uses all the info from all those tiny receptors and turns it into a file that has the quality of a sensor with fewer but much larger receptors? "Fewer but better pixels". This may be old hat, or foolish, or obvious --- forgive my ignorance.
Posted by: Martin D | Tuesday, 24 September 2019 at 01:47 PM
I'm afraid AI is not the only term that has been misappropriated. How about drones? Drones just fly off in a more or less predetermined direction. Unmanned Air Vehicles (UAV) are what is now called a drone. Price point is the general range that a product's price puts it in, but now it's used to mean just price.
Our tilting at windmills won't do any good.
Posted by: Greg | Tuesday, 24 September 2019 at 04:07 PM
Welcome back!
I'm a fan of Topaz' recent "AI" apps, and use them regularly.
"Sharpen does more ordinary sharpening and is recommended for photographs that are otherwise in-focus and un-blurred. The effect is modest, but it's clean."
My experience, as well, no better than Focus Magic, in at least some cases.
"So far, I haven't found an out-of-focus photograph where Focus improves things. Unlike Sharpening, which is understated, Focus seems to go way overboard. Perhaps I haven't found the right image to apply it to, but so far it's been useless."
I have found cases where it is quite good. Here's a sample, @ 100%:
And after Topaz Sharpen AI Focus, setting 40:
YMMV \;~)>
Posted by: Moose | Tuesday, 24 September 2019 at 04:44 PM
Pretty slick.
I wonder how long until Topaz-styled technology is automatic in a smartphone as the technology progresses with "computational photography?"
And I wonder, does the computational ability of software and computers take some of the fun out of capturing images?
Posted by: SteveW | Tuesday, 24 September 2019 at 05:33 PM
Dear Joe,
That's the way I use the various Topaz AI tools, and I should've explained my workflow in the review. I pull the raw file into Photoshop, making whatever conversion adjustments I find appropriate for the photograph. I save the photograph as a flat TIF, pull it into the Topaz tool, generate a resultant TIF and layer that on top of the photograph in Photoshop.
Now I can pixel peep at my leisure. If the resultant TIF is free of problems and better overall than the starting one, I flatten the result and move forward from there. If not, I add a layer mask and paint out the bits that are a problem and then flatten and save.
Important note — As a rule, don't do significant noise reduction or sharpening before applying the AI tools. For the most part, it appears they've been trained on un-manipulated images, so they can be confused by artificialities introduced by other software.
That doesn't mean they couldn't be trained on manipulated and artifact-laden images — in a future column I will review Topaz JPEG To RAW AI, which is just plain spooky. But so far as I can tell, Sharpen AI isn't one of those programs.
At least, not yet.
~~~~
Dear Tom,
AI Clear, which is a component of DeNoise AI (to be reviewed in the future) and of Topaz Studio, was Topaz's first product in this line, and it attempts to do what you want: both cleanup and detail preservation and enhancement at the same time. It is far from perfect (and ofttimes it fails entirely), but using it in a masked layer, as described above, frequently produces much better results, a lot more quickly, than you could get with simpler tools.
- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com
-- Digital Restorations. http://photo-repair.com
======================================
Posted by: Ctein | Tuesday, 24 September 2019 at 06:16 PM
Dear Andrew, and others...
With Mike's permission, I am somewhat arbitrarily shutting off any discussion about whether these constitute "photographs" or "paintings" or "illustrations." I respectfully request that people not continue in this vein.
I have two reasons for doing that — one is that I find such arguments to indeed be quibbling and to be annoying, so I don't want to deal with them. The more significant one is that they are factually incorrect in this case. The huge number of calculations being done by Topaz's AI programs, ranging from tens of thousands to millions per pixel, are not because it is massively repainting the image. It is because it is analyzing a huge amount of data, a very large number of pixels, to figure out "context."
Let me give a human analog example. Imagine someone has made a photograph of birds sitting on telephone wires against a bare sky (because no one has ever done that before). They hand it to you to clean up and sharpen, but they don't tell you what the photograph is of.
Unfortunately, you are artificially constrained to look at the picture through a 3 x 3 pixel window. The window happens to overlap one of the telephone lines. You see a string of pixels, possibly connected, possibly not, that are faintly darker than the surrounding ones. But with at most three or four pixels in that "line," you can't really tell whether that's a random congruence of a few noisy pixels or real detail.
(This is the problem most simple noise/sharpening filters face. How do you improve one without making the other worse?)
Make the window bigger. You might have a better guess if you looked at 5 x 5 pixels. If you look at a dozen by a dozen, you can be pretty sure that you're seeing a real line (that needs to be sharpened) as opposed to random noise (that needs to be suppressed).
That involves looking at over 100 pixels and mentally comparing all of them to each other and correlating what is where. Thousands of unconscious mental calculations all to decide whether one particular pixel should be made lighter or darker.
That's what these AI programs have to do.
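If you would rather see that in numbers than in words, here is a crude little simulation, with every value invented purely for illustration: a faint one-pixel "wire" buried in noise, examined through windows of different widths. The wider the window, the more confidently the wire separates from the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
sky = 0.8 + 0.1 * rng.standard_normal((64, 64))   # noisy, featureless sky
sky[32, :] -= 0.1                                  # a faint "telephone wire"

for width in (3, 12, 48):
    window = sky[:, :width]
    line = window[32].mean()                       # average along the suspect row
    background = np.delete(window, 32, axis=0).mean()
    sigma = 0.1 / np.sqrt(width)                   # noise left after averaging the row
    print(f"{width:2d}-pixel window: contrast {background - line:.3f}, "
          f"about {(background - line) / sigma:.1f} sigma")
```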
In a related matter, I would argue that it is not really correct to call this "computational photography" — that term is normally reserved for building images out of data that isn't itself a visual image. It's a different discipline and will ultimately lead to some very weird camera and optical designs that will work much better than what we have now. But this is a deep analysis of an isolated visual image.
- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com
-- Digital Restorations. http://photo-repair.com
======================================
Posted by: Ctein | Tuesday, 24 September 2019 at 06:19 PM
Regarding AI or "no AI", I don't care what they call it, as long as it works.
I'm with Ctein on Topaz' products, I've been using a number of them for some years now and have always been very impressed with them.
I just got an email from Topaz on AI Sharpen, and as I frequently photograph very fast moving subjects (most of whom are traveling well over 100 mph) that have fair risk for not being "tack", I'll be snagging Topaz Lab's Sharpen AI.
"Stabilize" looks to be right up my pit lane.
Posted by: Stephen Scharf | Tuesday, 24 September 2019 at 06:28 PM
Dear Tex (and others),
If you significantly alter the photograph before handing it off to these AI programs (such as with substantial sharpening or noise reduction using other tools), the results are unpredictable and frequently bad. Sharpen and DeNoise should be used pretty early in your workflow; they expect to see images whose fine structure is relatively un-manipulated. You have to get to know these programs very well before you can figure out what is safe or not safe to do before invoking them.
In a related matter, people are asking what happens if you combine the various tools, e.g., running Sharpen before (or after) using DeNoise or Gigapixel. Sometimes you get better results than using just one program. Sometimes you get worse. Sometimes it's just different.
It's really easy to get lost in the weeds with all the possible combinations of programs. I'm happiest when one of them proves sufficient and I don't have to wander down that debris-strewn garden path.
~~~~
Dear Ken,
Yes.
- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com
-- Digital Restorations. http://photo-repair.com
======================================
Posted by: Ctein | Tuesday, 24 September 2019 at 06:30 PM
Dear TC,
Paying attention to what any photographer says about the merits (or lack thereof) of various sorts of photographic quality is a bourgeois concept.
~~~~
Dear Hugh,
No, there was no "Chinese in Nixon" joke. It would have been a good throwaway line, probably by Crow. We should've thought of it.
Yes, the image is highly cropped down to the salient detail (it's obviously not a 20 megapixel image). The center point in the original frame was above the flying Osprey's head, but not quite as far up as the tip of the wing. The crop you are seeing goes all the way down to the bottom of the original frame.
I ran Sharpen AI on a cropped image that cut out sky above the left pine tree and part of the right pine tree.
I'm not sure what you think is wrong with the way Sharpen AI handled the pine trees. It looks pretty natural to me.
(Note to boke fanatics — NONE of these programs are going to preserve what you like. That's not their point.)
~~~~
Dear Louis,
I do so little black-and-white conversion that I have no idea what is good and what isn't. Not my thing.
~~~~
Dear Andrew,
Along with what Mike said...
(1) I am not David Attenborough (if only!) and...
(2) I was on a boat with my Family, ten other people who I dearly love and who all love me, so I'm not going to tell the one piloting the boat (not me) to stop it for as long as it takes me to get the photograph I want, BECAUSE I WANT THEM TO CONTINUE TO FEEL THAT WAY ABOUT ME!
(3) Wild ospreys do not perform on command. At least not for me (see #1). [grin]
- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com
-- Digital Restorations. http://photo-repair.com
======================================
Posted by: Ctein | Tuesday, 24 September 2019 at 06:48 PM
In 1985 Minolta came out with their Maxxum/Dynax/Alpha AF SLRs (7000, etc.). Then in 1988 they came out with their 2nd generation and added an 'i' to the names, which meant 'intelligent' (7000i, etc.). Then in 1991 they came out with the 3rd generation and changed it to 'xi' for "extra intelligent" (7xi, etc.). Then in 1995 they came out with their 4th generation and changed it to 'si' for "super intelligent" (700si, etc.). By the time the 5th generation came out in 1998, I guess they decided they had taken the intelligent thing as far as they could, so they dropped it all and called the next models the 9 and 7. :-)
Posted by: HR | Tuesday, 24 September 2019 at 07:02 PM
Ctein is back! This makes me happy. Thanks!
Posted by: Radiopaque | Tuesday, 24 September 2019 at 08:51 PM
Let's not forget to buy Ctein's recommendations using Mike's links :-).
Ctein's back! Hurrah!
Posted by: Sarge | Tuesday, 24 September 2019 at 10:09 PM
I don't understand the work flow. All of the Topaz AI thingies but Gigapixel work well as PS plug-ins.
I just create a copy layer, run the AI plug-in, and have a layer with the effect, to flatten or mask, then flatten.
This skips writing and reading a TIFF, with exactly the same result.
Posted by: Moose | Tuesday, 24 September 2019 at 11:10 PM
A tip for Topaz upgrades problem: Be sure to verify your operating system version and other upgrade requirements are adequate before attempting to install a free upgrade.
Some time ago Topaz offered me a Gigapixel upgrade. The install seemed to complete OK, but the program wouldn't run. No error message. Topaz support staff were unable to identify the problem. Eventually, I noticed the upgrade required a more recent operating system version.
Posted by: auntipode | Wednesday, 25 September 2019 at 12:48 AM
Just to clarify my previous comment: no personal criticism of Ctein intended. Actually I am very pleased with the information provided, which I wouldn't have known about if not for Ctein's post.
Just saying that with this kind of program (thinking also of false bokeh, as in PS and some smartphones), we might become lazy about applying true photographic techniques; not always possible, yes, I agree.
That may sound a bit old-fashioned to some but to each his own, we all like different things.
Posted by: Andrew J. | Wednesday, 25 September 2019 at 08:29 AM
Sorry, getting here late. But I did want to add a cent to Ctein's comments regarding Topaz Labs' "Sharpen AI" product. I've long ago stopped using image processing utilities...except for a few of Topaz's products, some of which have proven to be very handy. Their "Detail" utility has been particularly useful.
I began using "Sharpen AI" when it was first introduced and it has proven to be quite useful. I concur with Ctein's general observation that its "Stabilize" mode is the most powerful and unique. I've been looking for a term that best describes the results that Stabilize produces. It's not "sharpness" or "contrast", per se. The best description I can conjure is edge coherence.
Here, for example, is a small detail from a street image I captured last week from a slowly-moving cab in Manhattan (is there any other kind?) with a Sony RX100-VII camera.
Pre-Stabilized
Stabilized
See what I mean? The neon sign's definition is more crisp and coherent, not sharper per se. Ditto the fence.
Good images, particularly good prints of good images, are best served by restraint, subtlety, and a strong concept of where to apply seasoning. That's exactly what Stabilize enables you to do. As Joe Holmes noted much earlier, using this in a Photoshop mask structure where you can spatially control its application can be extremely powerful on an image where you want subliminal attention hot-spots. Viewers will rarely notice they're being had!
But use it only occasionally and carefully...like a food seasoning!
(Welcome back, Ctein.)
Posted by: Kenneth Tanaka | Wednesday, 25 September 2019 at 12:25 PM
Mike,
"Post-Exposure" can be downloaded free as a PDF on Ctein's site
http://ctein.com/booksmpl.htm
Posted by: Kristian Wannebo | Wednesday, 25 September 2019 at 12:41 PM
Mike/Ctein I hope you guys will get some serious commission from all the business Topaz are obviously getting out of this???
Posted by: Ger Lawlor | Wednesday, 25 September 2019 at 04:17 PM
Dear Moose,
That butterfly wing is a lovely example of the improvements that Focus mode can make. Thank you for posting it.
Out of vague curiosity, did you also try Stabilize mode? If so, how did it compare?
And, on your later post, you are absolutely right about my workflow being unnecessarily complicated. I will plead ignorance. I started out with GigaPixel, which does operate solely as a standalone. As I got the later programs, it never occurred to me to look to see if they were also integrated as filters in Photoshop.
Oops.
Well, you just speeded up MY workflow.
(Of course, because all these programs have standalone capabilities, you can use them with the image processing program of your choice. Or even none at all, if you are sufficiently happy with the results.)
~~~~
Dear Andrew,
I fear your "clarification" did not make me more sympathetic to your position. To the contrary.
There are people who consider working harder to be inherently more virtuous. I am not one of them. I can think of only two reasons why people make photographs — to achieve some professional goal or for the sheer enjoyment of it. In neither case does "laziness" enter into the equation; it is an inappropriate moral evaluation.
I am quite happy to do things the "lazier" way if it gets me the results I want. In fact, I seek out those lazy ways.
Your argument is more than merely old-fashioned, it is positively ancient — it is the same complaint that was raised when light meters first appeared, when auto exposure first appeared, when autofocus appeared, and so on. It has been thoroughly answered and refuted in decades past. It does not need to be revisited. Please don't try to argue the point further.
Please don't misunderstand — I would never gainsay anyone who wishes to be "old-fashioned" about their photographic practices. They do what works for them and gives them enjoyment. But, in turn, they should not gainsay those of us whose boat is floated differently.
- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com
-- Digital Restorations. http://photo-repair.com
======================================
Posted by: Ctein | Wednesday, 25 September 2019 at 05:26 PM
Dear Ger,
Heh, no sales commission for me, but I get all the free software I want from Topaz Labs. The perks of being a reviewer.
~~~~
Dear Ken,
That's a great example of what Sharpen AI can do. But — at the risk of descending into po-tay-to po-tah-to — I would say that result definitely is sharper. A great deal of what makes it sharper is what you nicely described as "edge coherence." (I'm going to steal that.) But there's more...
There is genuinely better and sharper fine detail if one looks closely. For example, the "W 55th" in the background on the left side. In the original, I can't read the "th," but the letters are very clear after Sharpening. Similarly, at the bottom center there are closely spaced diagonal chain links that are separated and resolved in the Sharpened image that aren't in the original.
There's an especially interesting bit of detail extraction in the fine crosshatched fencing. In the original, it fades in and out in bands due to aliasing effects. In the Sharpened image, the aliasing has disappeared and the fence pattern restored.
- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com
-- Digital Restorations. http://photo-repair.com
======================================
Posted by: Ctein | Thursday, 26 September 2019 at 01:24 AM
"That butterfly wing is a lovely example of the improvements that Focus mode can make. Thank you for posting it.
Out of vague curiosity, did you also try Stabilize mode? If so, how did it compare?"
Out of my own curiosity, I did go back and try it. It does give a slightly better result than Focus Mode. Makes sense, as shots of moving critters at really long FLs are likely to have motion blur subtle enough to be mistaken for other causes.
That's what it looks like on shots from this camera and the 1" sensor ZS200. What seems like just softness proves to be slight displacement during exposure, and correctable using Sharpen AI.
I won't post it, due to the results of the Law of Should Have Been Expected Complications. I grabbed something I'd just done. It happened to be from a tiny, 1/2.3" sensor Panny camera, and PS tends to generate small artifacts in the Raw conversion of Panny files (and possibly Oly).
Stabilize Mode makes them even more obvious. OTOH, of course, the DxO module for the ZS80 isn't due until next month. I haven't yet tried it on the smaller sensors; it does avoid the artifacts on MFT Panny bodies.
I'm working up another example, partly done already, but we're on the road 'til Nov 2, and about to leave a longer term visit for some shorter ones, so it won't be instantaneous.
Posted by: Moose | Thursday, 26 September 2019 at 10:22 PM
Here's a vote for Topaz' customer service. In 2010 I bought InFocus from them, a predecessor program which never really worked and just got abandoned. To my surprise, it was eligible for an update to Sharpen AI although this had to be done manually by their support staff. Looking forward to trying it out.
Posted by: Ed | Friday, 27 September 2019 at 04:37 AM