
Monday, 23 September 2019

Comments


Great article by Ctein and I’m looking forward to the next one. It seems Mike and Ctein have moved on past their previous differences which is good news for TOP readers! :-)

Time passes and things change. I am very happy to see Ctein return as a guest contributor to The Online Photographer. Looking forward to his unique insights and perspective in future columns!

Great to see Ctein back, and looking forward to his great columns.

Welcome back, Ctein.

It is maybe worth noting that the actual scientists involved in these deep learning network things seem to have quietly standardized on "machine learning" rather than A.I., though the marketing types have not budged.

Also worth noting: at 50,000 calculations per pixel, it is, I think, fair to characterize the results of these tools not as "photographs" as such, but rather as computer-generated paintings based on a photograph.

Whether this minor quibble has any impact in the real world remains unknown. Some people are hollering that the sky is falling, and the underlying essence of the very idea of "photograph" is being lost (on about every 3rd or 4th day I might be one of them), quite a few people don't see what the issue is, and some of us (me on the other days) are interested to see what happens.

I bought the Topaz Suite several years ago, and they give you free updates for life, so I got some of the AI products as free upgrades. And they are that good. I only had a one-month trial version of Gigapixel (there was no enlarge module in the original suite), but it left Photoshop in the dust... well, results-wise, not speed-wise, because this thing is processor-intensive. A 4GB GPU is recommended, but mine has only 1GB.

1) How nice to have a new article from Ctein. I read a paragraph and recognized the voice before noting the byline.

2) I am in complete agreement with the AI buzzword silliness.

3) That stabilize plug-in is astounding.

Dave

Great to see Ctein back on TOP (no pun intended) as I've always enjoyed his writings. This post is very timely for me as I've just started looking into AI sharpening and up-rezzing software.

I call it “computational photography,” and while it may not really be intelligent, it has a metaphorical mind of its own: even the designers of the neural network software typically are unable to explain precisely what the network is doing as it modifies the source image to produce the target.

Welcome back Ctein!

"Is it live, or is it Memorex?"

Will have to check it out, as I've learned to respect Ctein's opinion in these matters. His previous advocacy in TOP of using printer-managed colors instead of photo-paper profiles in Photoshop changed my print results from iffy to near-perfect every time. Thanks, Ctein!

I've successfully saved a couple of photos with the focus sharpening mode. It works well if you slightly missed focus. I believe what it is doing is telling the algorithm to deliberately sharpen areas of the photo it would ignore otherwise (normally it doesn't seem to apply excessive sharpening to OOF areas).

But I agree that stabilize does a better job most of the time if you like somewhat aggressive sharpening.

However I'm even more impressed with Topaz's AI NR software. It's really magical.

At last, someone else who thinks calling it Artificial Intelligence is Genuine Stupidity!

I agree wholeheartedly with Ctein’s assessment. I’ve used the Topaz blur reduction on old scans from 35mm and medium format negatives and slides and it works miracles on those as well. I’ve now incorporated this function into my post processing of almost all my photos unless I absolutely know they are as sharp as possible. I just restored an old and faded scanned print from an Instamatic and it was astounding.

Thanks for this. I'll buy on this demo/recommendation. That said, I trialed and have now purchased their Gigapixel AI, which I have wanted for myself but now actually need for a project I am doing for work. It's a mixed bag, frankly. Yes, it works, but it also does some very strange things with sharpening: halos galore. You have to feed it a somewhat under-sharpened file, and also turn down output sharpening in LR to low. Too soon to make a final judgement, but so far it's tricky to work with. And the AI bit? I wish there were more manual controls....

I have always hung onto digital images that weren't hopelessly flawed thinking that the technology and/or my skills might catch up. I'm glad that I have been a pack rat.

Topaz Gigapixel is an extremely good upsizing program for daytime images. It does, however, have a lot of trouble with dusk or night shots - massive numbers of artifacts there. And it has a really stupid size limit of 22K pixels output. According to their support, allowing larger files would take too long. This is one of the dumbest excuses I think I've ever heard from a software company.

One fervent wish granted, Ctein has written again on TOP.

Wow.

My father, a genuine "good ole boy" of the South used to say "The only thing about common sense is that it isn't."

When I first heard the term used in a meeting at MIT's Media Lab decades ago, I cynically said I’d believe in “artificial intelligence” when someone proved to me that “natural (human) intelligence” existed.

Unfortunately, what gets called "artificial intelligence" or "AI" today is more hype than real intelligence. Apply an algorithm to something that is complicated to understand and call it "AI" for marketing purposes. High level AI uses iterative processes to "learn" and apply that to future situations, but that often is not successful.

Ford's AI partner for autonomous vehicles, one of the top focuses of AI today, said recently that AI could handle only about 80% of the driving situations - keep a car in lane, make turns, etc. But random events like bikers, pedestrians, etc. were beyond its capability today.

That's obvious from Tesla's AI which keeps running into stationary vehicles on roads - it cannot react fast enough - or Uber's that ran over a pedestrian pushing a bicycle.

I have a section on AI in my book "Delusional Management" that starts with:

Isaac Asimov, renowned science fiction author, in the 1950 collection of short stories called “I, Robot” described the three laws of robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

and

Quite a few high profile techies have expressed concern about AI, including Stephen Hawking, Elon Musk and dozens of AI researchers. Steve Wozniak was one of them but flip-flopped, seemingly because he came to the conclusion that AI was still incompetent. Perhaps that was premature.

and

The other thing that scares me about AI is how it depends on who is programming it or how it learns on its own. MIT researchers built an AI they called Norman (norman-ai.mit.edu), named for Norman Bates in Hitchcock’s Psycho. Norman was programmed to perform image captioning, a popular method of generating a description of an image. They trained Norman on a Reddit forum dedicated to documenting the reality of death.

Then they compared Norman to a standard image captioning neural network. MIT described Norman as “AI-Powered Psychopath.” Perhaps the best description of what Norman learned was the title of an article on theverge.com: “MIT fed Norman data from Reddit, and now it only thinks about murder.”

The story about Norman is worrisome. If the techies in Silicon Valley developing AI are ethically challenged, maybe from spending too much time playing video games where the goal is to kill or destroy, can they be trusted with developing AI? Do they know Asimov's 3 laws? Or care?

I think we should be afraid of AI. Really afraid. Especially if engineers train them on Internet news. But then again, can AI be any worse than humans?

BTW, welcome back, Ctein! Missed you.

Thanks, Ctein. Welcome back!!!

Ctein,
Great recommendation.
I just tried Gigapixel last month and look forward to your recommendations.
I was working on an image from a now ancient Canon 1Ds from 2007 that I had last printed huge on canvas ten years ago using the canvas to hide that I was doing an over enlargement. A client now in 2019 wanted the same size 54”x36” but on paper. Gigapixel saved the day. It was shot from a two man ultralight, so it’s not like I could go back and reshoot with a modern camera. So Gigapixel was like a time machine for me.

So great to have a column by Ctein back on TOP again, even if it is going to cost me.... I already know the image I need this software for! I work in an NHS hospital here in London and AI is all the rage in our sector too... But it generally just means clever predictive analytics. It is clever though....

Thanks Ctein; I bought it ! Amazing !!

Thanks for that.

I've just begun exploring Gigapixel AI, and I'll be interested in your review of that program and, particularly, in how that software can interact in landscape photographs with the Sharpen AI program that you reviewed here.

Agree with earlier posts, great to see Ctein back on TOP.

I’m delighted to see Ctein writing for TOP again. Thanks for a very useful column.

Also happy to see Ctein back on TOP

So questions: Does the software correct "focus" ie: missed focal plane? Or shake/movement/slow shutter speed? Or both? How does it handle varying DOF -- f1.8 vs f16? And does this mean I can now use non-stabilized, old (cheap) MF adapted lenses and correct my old-eyesight focus/steadiness errors? Great article, and sounds like a great leap in processing software.

Welcome back Ctein! There has been a Post-it note on your book reading “Ask Ctein if there is a Chinese in Nixon joke that got edited out.”
Is that image cropped showing the left side of the original image, and if so did you use the Topaz software before or after you cropped?
The reason I ask is that to me it looks like a crop, and I was wondering if the Topaz software is trained to know what “good images” look like, or what “unwanted artifacts” look like, or both.
Making the assumption that the optical system’s axis passes through the center of the image would be a head start for a non-AI tool, but if the training set of images were cropped, then I guess the neural net could learn to work on cropped images. My experience with non-AI image correction has been that it works best on unmanipulated images.

I’ve been playing with some AI image enhancing software and I notice that it does a really good job of guessing what something should look like until it abruptly gives up. The pine needles in your image, for example. This is particularly apparent when the image includes signs or packaging with text that is too small or blurred to be resolved. The software makes everything look really great except for the blocks of text, which are either still blurred or rendered into a sharp random pattern.
I am concerned when this gets included by default in new cameras and software.

You could get some interesting errors if, for instance, all Porsches in the training set had California license plates, and consequently the software put nice sharp California plates on all blurry images of Porsches. This is going to put the final nail in the photos = truth coffin.

I expect the new camouflage will look like camera artifacts. If I paint my car to look like a piece of lint surrounded by JPEG artifacts, will AI software retouch it out of the image? Would “common sense” dictate that if it looks like dust and noise it is dust and noise, or that if it’s the size of a car it’s probably not lint?
Tesla seems to have had a problem a while back with its software deciding that on the highway all cars are moving, so if something isn’t moving it isn’t acting like a car, and therefore it isn’t a car - maybe a shadow or something - and the Tesla just ignores it. When the not-moving thing that might be a shadow turns out to be a fire truck, as happened a year ago when a Tesla ran into a fire truck parked in the middle of a freeway lane, that is problematic. Slamming on the brakes for shadows can be problematic too.

This makes me happy.

Topaz offers a 30 day trial. It's worth seeing for yourself. I was curious to see how it compared to my own sharpening approach in Lightroom. I gave Topaz an unsharpened file and processed it using "sharpen" with the settings determined by the "auto" button. The results are mixed -- in some areas, perhaps a bit better, in others clearly worse. Sometimes it kills detail that Lightroom preserved. Other times it brought out details better than Lightroom could. It handles sensor artifacts in my Fuji GFX 50R files a bit better than Lightroom. But it also occasionally creates false colour where there is none in the Lightroom version (normally one or two pixels in width, along linear features). Interestingly, it quite happily makes areas that are slightly out of focus look like they are in focus (or less out of focus). If you actually wanted those areas to be slightly out of focus, then you're out of luck because it's not selective. In terms of the "sharpen" setting, I prefer the results I can get with Lightroom.

The "stabilize" tool that Ctein describs is another level entirely. Supplied with the same unsharpened Fuji GFX 50R file from Lightroom I used to test "sharpen", Topaz createed details that simply weren't there. It's quite remarkable in rather unsettling way. Parts of the picture that were out of focus now look in focus at 100%. Parts that were in focus are much sharper and more detailed than sharpening alone (with Topaz or Lightroom) could produce. Where previously there was only a hint of blurry detail or texture, now there is detail and texture. If you selected a specific aperture because you wanted to gently blur the foreground or background, Topaz isn't having any of it; your gentle blur is now resolved into "details".

Is this good or bad? It is a tool in the tool kit -- and potentially an incredibly useful one. However, if you use it, you have to be willing to accept that the file created using Topaz's "stabilize" setting is not what the camera recorded. Or perhaps more accurately, it's even less what the camera recorded than the pictures you were creating before you started using it!

Welcome to computational photography.

The best definition I've heard of the A.I. that is used in marketing: statistical inference based on very large data sets.

AMAZING! Ctein is a treasure. It's great to see him back and the software is amazing too.

It's a good thing Cartier-Bresson considered sharpness a bourgeois concept, as he's probably spinning in his grave fast enough to be a blur right about now.

I tried it, and it did not work for me at all. It was very slow on my old AirMac, and the results were still blurry. Maybe I should try it again, following Ctein's procedure.

Great article. I had issues with the "vertical lines" artifacts but they only occur when using GPU. Try the CPU setting and verticals have gone. Hope that helps. Pete

Ctein

good to see you returning to TOP. Welcome back.
And as for this bit o' software, good grief!

Roger

Just as a data point: processing a 20 MB MFT file, output as a 16-bit TIF, took around 70 seconds using Stabilize on a six-year-old Win 10 Pro desktop.

i7-3820 CPU / 16 GB RAM / Nvidia 1060 (6 GB VRAM)

CPU was at 50% and the GPU at 100%.

Two fantastic surprises here; one is the return of Ctein with a guest post, the other being coverage of a software line I had recently been researching. Of all the 3rd party image enhancement options available, Topaz seems to be the most intriguing. Looking forward to the Gigapixel post. As a m43 user, I'll take all the help I can get.

Welcome back Ctein.
I'm going to sell all my expensive lenses and use cheaper ones....why not. LOL.
Seriously this sounds like a really useful tool to have.
Can I ask if Ctein or anyone here has a really good B/W conversion plugin?
I used to have one which was brilliant but it was never updated; it gave options like mimicking Pan/Tri-X/etc. Now all I see are options for fairly useless weird conversions, e.g. vintage/antique etc.
Long live the black-and-white print.

Maybe they just send the file to some sweatshop in India where they use a human to do the actual processing with PhotoShop. 8^)

That's a fuzzy photograph, but not irreplaceable. How about trying again with what was learned from the failure? Am I alone in thinking that's what photography is about?

[Illustrations are just examples. It wouldn't make very much sense to test this application with a sharp photograph, after all! --Mike]

A question to Ctein. Going in the opposite direction from upsizing, is there a preferred method for downsizing? Turning a file from a 24MP sensor into a 12MP file that uses all the info from all those tiny receptors, so the result has the quality of a sensor with fewer but much larger receptors? "Fewer but better pixels." This may be old hat, or foolish, or obvious --- forgive my ignorance.
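Just to illustrate the kind of thing I mean (a crude back-of-the-envelope sketch in Python, my own toy example, not a claim about how any real resampler works): averaging each little block of input pixels into one output pixel.

    import numpy as np

    def block_average_downsample(img, factor=2):
        # Average each factor-by-factor block of input pixels into one
        # output pixel (single-channel image shown for simplicity).
        # Averaging n pixels knocks random noise down by roughly sqrt(n),
        # which is the crudest version of "fewer but better pixels."
        # Note: factor=2 keeps one quarter of the original pixel count.
        h, w = img.shape
        h, w = h - h % factor, w - w % factor
        blocks = img[:h, :w].astype(float).reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))

I realize proper resampling filters are far more sophisticated than this, which is exactly why I'm asking what the preferred method is.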

I'm afraid AI is not the only term that has been misappropriated. How about drones? Drones just fly off in a more or less predetermined direction. Unmanned Air Vehicles (UAV) are what is now called a drone. Price point is the general range that a product's price puts it in, but now it's used to mean just price.

Our tilting at windmills won't do any good.

Welcome back!

I'm a fan of Topaz' recent "AI" apps, and use them regularly.

"Sharpen does more ordinary sharpening and is recommended for photographs that are otherwise in-focus and un-blurred. The effect is modest, but it's clean."

My experience, as well, no better than Focus Magic, in at least some cases.

So far, I haven't found an out-of-focus photograph where Focus improves things. Unlike Sharpening, which is understated, Focus seems to go way overboard. Perhaps I haven't found the right image to apply it to, but so far it's been useless.

I have found cases where it is quite good. Here's a sample, @ 100%:

And after Topaz Sharpen AI Focus, setting 40:

YMMV \;~)>

Pretty slick.

I wonder how long it will be until Topaz-style technology is automatic in smartphones as "computational photography" progresses?

And I wonder, does the computational ability of software and computers take some of the fun out of capturing images?

Dear Joe,

That's the way I use the various Topaz AI tools, and I should've explained my workflow in the review. I pull the raw file into Photoshop, making whatever conversion adjustments I find appropriate for the photograph. I save the photograph as a flat TIF, pull it into the Topaz tool, generate a resultant TIF and layer that on top of the photograph in Photoshop.

Now I can pixel peep at my leisure. If the resultant TIF is free of problems and better overall than the starting one, I flatten the result and move forward from there. If not, I add a layer mask and paint out the bits that are a problem and then flatten and save.

Important note — As a rule, don't do significant noise reduction or sharpening before applying the AI tools. For the most part, it appears they've been trained on un-manipulated images, so they can be confused by artificialities introduced by other software.

That doesn't mean they couldn't be trained on manipulated and artifact-laden images — in a future column I will review Topaz JPEG To RAW AI, which is just plain spooky. But so far as I can tell, Sharpen AI isn't one of those programs.

At least, not yet.

~~~~

Dear Tom,

AI Clear, which is a component of DeNoise AI (to be reviewed in the future) and of Topaz Studio, was Topaz's first product in this line, and it attempts to do what you want: both cleanup and detail preservation and enhancement at the same time. It is far from perfect (and ofttimes it fails entirely), but using it in a masked layer, as described above, frequently produces much better results a lot more quickly than you could get with simpler tools.


- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com 
-- Digital Restorations. http://photo-repair.com 
======================================

Dear Andrew, and others...

With Mike's permission, I am somewhat arbitrarily shutting off any discussion about whether these constitute "photographs" or "paintings" or "illustrations." I respectfully request that people not continue in this vein.

I have two reasons for doing that — one is that I find such arguments to indeed be quibbling and to be annoying, so I don't want to deal with them. The more significant one is that they are factually incorrect in this case. The huge number of calculations being done by Topaz's AI programs, ranging from tens of thousands to millions per pixel, are not because it is massively repainting the image. It is because it is analyzing a huge amount of data, a very large number of pixels, to figure out "context."

Let me give a human analog example. Imagine someone has made a photograph of birds sitting on telephone wires against a bare sky (because no one has ever done that before). They hand it to you to clean up and sharpen, but they don't tell you what the photograph is of.

Unfortunately, you are artificially constrained to look at the picture through a 3 x 3 pixel window. The window happens to overlap one of the telephone lines. You see a string of pixels, which may or may not be connected, that are faintly darker than the surrounding ones. But with at most three or four pixels in that "line," you can't really tell whether it's a random congruence of a few noisy pixels or real detail.

(This is the problem most simple noise/sharpening filters face. How do you improve one without making the other worse?)

Make the window bigger. You might have a better guess if you looked at 5 x 5 pixels. If you look at a dozen by a dozen, you can be pretty sure that you're seeing a real line (that needs to be sharpened) as opposed to random noise (that needs to be suppressed).

That involves looking at over 100 pixels and mentally comparing all of them to each other and correlating what is where. Thousands of unconscious mental calculations all to decide whether one particular pixel should be made lighter or darker.

That's what these AI programs have to do.
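If it helps to make the analogy concrete, here is a toy sketch in Python (purely illustrative, my own invention for this comment, and nothing like Topaz's actual code) of the kind of decision a bigger window lets you make:

    import numpy as np

    def sharpen_or_suppress(img, y, x, radius=6, k=2.0):
        # Toy example only -- not Topaz's algorithm. Look at a square
        # window around pixel (y, x) and ask: does this pixel's deviation
        # from the local average line up with a run of similarly deviating
        # neighbors along its row (a real "telephone wire"), or is it an
        # isolated outlier (noise)?
        y0, y1 = max(0, y - radius), min(img.shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(img.shape[1], x + radius + 1)
        window = img[y0:y1, x0:x1].astype(float)
        local_mean = window.mean()
        local_noise = window.std() + 1e-6

        pixel_dev = abs(float(img[y, x]) - local_mean)
        row_dev = np.abs(img[y, x0:x1].astype(float) - local_mean).mean()

        if pixel_dev > k * local_noise and row_dev > local_noise:
            return "sharpen"    # correlated deviation: probably real detail
        return "suppress"       # isolated deviation: probably noise

Even this crude 13 x 13 window means examining better than 150 pixels to decide the fate of a single one; scale that context up to what a trained network actually looks at, and the calculation counts stop sounding so outlandish.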

In a related matter, I would argue that it is not really correct to call this "computational photography" — that term is normally reserved for building images out of data that isn't itself a visual image. That's a different discipline, and it will ultimately lead to some very weird camera and optical designs that work much better than what we have now. But this is a deep analysis of an isolated visual image.


- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com 
-- Digital Restorations. http://photo-repair.com 
======================================

Regarding AI or "no AI", I don't care what they call it, as long as it works.

I'm with Ctein on Topaz' products, I've been using a number of them for some years now and have always been very impressed with them.

I just got an email from Topaz on AI Sharpen, and as I frequently photograph very fast-moving subjects (most of whom are traveling well over 100 mph) that have a fair risk of not being "tack" sharp, I'll be snagging Topaz Labs' Sharpen AI.

"Stabilize" looks to be right up my pit lane.

Dear Tex (and others),

If you significantly alter the photograph before handing it off to these AI programs (such as with substantial sharpening or noise reduction using other tools), the results are unpredictable and frequently bad. Sharpen and DeNoise should be used pretty early in your workflow; they expect to see images whose fine structure is relatively un-manipulated. You have to get to know these programs very well before you can figure out what is safe or not safe to do before invoking them.

In a related matter, people are asking what happens if you combine the various tools, e.g., running Sharpen before (or after) using DeNoise or Gigapixel. Sometimes you get better results than using just one program. Sometimes you get worse. Sometimes it's just different.

It's really easy to get lost in the weeds with all the possible combinations of programs. I'm happiest when one of them proves sufficient and I don't have to wander down that debris-strewn garden path.

~~~~

Dear Ken,

Yes.


- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com 
-- Digital Restorations. http://photo-repair.com 
======================================

Dear TC,

Paying attention to what any photographer says about the merits (or lack thereof) of various sorts of photographic quality is a bourgeois concept.

~~~~

Dear Hugh,

No, there was no "Chinese in Nixon" joke. It would have been a good throwaway line, probably by Crow. We should've thought of it.

Yes, the image is highly cropped down to the salient detail (it's obviously not a 20 megapixel image). The center point in the original frame was above the flying Osprey's head, but not quite as far up as the tip of the wing. The crop you are seeing goes all the way down to the bottom of the original frame.

I ran Sharpen AI on a cropped image that cut out sky above the left pine tree and part of the right pine tree.

I'm not sure what you think is wrong with the way Sharpen AI handled the pine trees. It looks pretty natural to me.

(Note to boke fanatics — NONE of these programs are going to preserve what you like. That's not their point.)

~~~~

Dear Louis,

I do so little black-and-white conversion that I have no idea what is good and what isn't. Not my thing.

~~~~

Dear Andrew,

Along with what Mike said...

(1) I am not David Attenborough (if only!) and...

(2) I was on a boat with my Family, ten other people who I dearly love and who all love me, so I'm not going to tell the one piloting the boat (not me) to stop it for as long as it takes me to get the photograph I want, BECAUSE I WANT THEM TO CONTINUE TO FEEL THAT WAY ABOUT ME!

(3) Wild ospreys do not perform on command. At least not for me (see #1). [grin]

- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com 
-- Digital Restorations. http://photo-repair.com 
======================================

In 1985 Minolta came out with their Maxxum/Dynax/Alpha AF SLRs (7000, etc.). Then in 1988 they came out with their 2nd generation and added an 'i' to the names, which meant 'intelligent' (7000i, etc.). Then in 1991 they came out with the 3rd generation and changed it to 'xi' for "extra intelligent" (7xi, etc.). Then in 1995 they came out with their 4th generation and changed it to 'si' for "super intelligent" (700si, etc.). By the time the 5th generation came out in 1998, I guess they decided they had taken the intelligent thing as far as they could, so they dropped it all and called the next models the 9 and 7. :-)

Ctein is back! This makes me happy. Thanks!

Let's not forget to buy Ctein's recommendations using Mike's links :-).

Ctein's back! Hurrah!

I don't understand the work flow. All of the Topaz AI thingies but Gigapixel work well as PS plug-ins.

I just create a copy layer, run the AI plug-in, and have a layer with the effect, to flatten or mask, then flatten.

This skips writing and reading a TIFF, with exactly the same result.

A tip for Topaz upgrade problems: be sure to verify that your operating system version and other upgrade requirements are adequate before attempting to install a free upgrade.

Some time ago Topaz offered me a Gigapixel upgrade. The install seemed to complete OK, but the program wouldn't run. No error message. Topaz support staff were unable to identify the problem. Eventually, I noticed the upgrade required a more recent operating system version.

Just to clarify my previous comment: no personal criticism of Ctein intended. Actually I am very pleased with the information provided, which I wouldn't have known about if not for Ctein's post.
I'm just saying that with this kind of program (thinking also of false bokeh, as in PS and some smartphones), we might become lazy at applying true photographic techniques; not always possible, yes, I agree.
That may sound a bit old-fashioned to some but to each his own, we all like different things.

Sorry, getting here late. But I did want to add a cent to Ctein's comments regarding Topaz Labs' "Sharpen AI" product. I've long ago stopped using image processing utilities...except for a few of Topaz's products, some of which have proven to be very handy. Their "Detail" utility has been particularly useful.

I began using "Sharpen AI" when it was first introduced and it has proven to be quite useful. I concur with Ctein's general observation that its "Stabilize" mode is the most powerful and unique. I've been looking for a term that best describes the results that Stabilize produces. It's not "sharpness" or "contrast", per se. The best description I can conjure is edge coherence.

Here, for example, is a small detail from a street image I captured last week from a slowly-moving cab in Manhattan (is there any other kind?) with a Sony RX100-VII camera.


Pre-Stabilized


Stabilized

See what I mean? The neon sign's definition is more crisp and coherent, not sharper per se. Ditto the fence.

Good images, particularly good prints of good images, are best served by restraint, subtlety, and a strong concept of where to apply seasoning. That's exactly what Stabilize enables you to do. As Joe Holmes noted much earlier, using this in a Photoshop mask structure where you can spatially control its application can be extremely powerful on an image where you want subliminal attention hot-spots. Viewers will rarely notice they're being had!

But use it only occasionally and carefully...like a food seasoning!

(Welcome back, Ctein.)

Mike,
"Post-Exposure" can be downloaded free as a PDF on Ctein's site
http://ctein.com/booksmpl.htm

Mike/Ctein I hope you guys will get some serious commission from all the business Topaz are obviously getting out of this???

Dear Moose,

That butterfly wing is a lovely example of the improvements that Focus mode can make. Thank you for posting it.

Out of vague curiosity, did you also try Stabilize mode? If so, how did it compare?

And, on your later post, you are absolutely right about my workflow being unnecessarily complicated. I will plead ignorance. I started out with GigaPixel, which does operate solely as a standalone. As I got the later programs, it never occurred to me to look to see if they were also integrated as filters in Photoshop.

Oops.

Well, you just speeded up MY workflow.

(Of course, because all these programs have standalone capabilities, you can use them with the image processing program of your choice. Or even none at all, if you are sufficiently happy with the results.)

~~~~

Dear Andrew,

I fear your "clarification" did not make me more sympathetic to your position. To the contrary.

There are people who consider working harder to be inherently more virtuous. I am not one of them. I can think of only two reasons why people make photographs — to achieve some professional goal or for the sheer enjoyment of it. In neither case does "laziness" enter into the equation; it is an inappropriate moral evaluation.

I am quite happy to do things the "lazier" way if it gets me the results I want. In fact, I seek out those lazy ways.

Your argument is more than merely old-fashioned, it is positively ancient — it is the same complaint that was raised when light meters first appeared, when auto exposure first appeared, when autofocus appeared, and so on. It has been thoroughly answered and refuted in decades past. It does not need to be revisited. Please don't try to argue the point further.

Please don't misunderstand — I would never gainsay anyone who wishes to be "old-fashioned" about their photographic practices. They do what works for them and gives them enjoyment. But, in turn, they should not gainsay those of us whose boat is floated differently.


- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com 
-- Digital Restorations. http://photo-repair.com 
======================================

Dear Ger,

Heh, no sales commission for me, but I get all the free software I want from Topaz Labs. The perks of being a reviewer.

~~~~

Dear Ken,

That's a great example of what Sharpen AI can do. But — at the risk of descending into po-tay-to po-tah-to — I would say that result definitely is sharper. A great deal of what makes it sharper is what you nicely described as "edge coherence." (I'm going to steal that.) But there's more...

There is genuinely better and sharper fine detail if one looks closely. For example, the "W 55th" in the background on the left side: in the original, I can't read the "th," but the letters are very clear after sharpening. Similarly, at the bottom center there are closely spaced diagonal chain links that are separated and resolved in the sharpened image but not in the original.

There's an especially interesting bit of detail extraction in the fine crosshatched fencing. In the original, it fades in and out in bands due to aliasing effects. In the Sharpened image, the aliasing has disappeared and the fence pattern restored.

- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com 
-- Digital Restorations. http://photo-repair.com 
======================================

"That butterfly wing is a lovely example of the improvements that Focus mode can make. Thank you for posting it.

Out of vague curiosity, did you also try Stabilize mode? If so, how did it compare?"

Out of my own curiosity, I did go back and try it. It does give a slightly better result than Focus mode. Makes sense, as shots of moving critters at really long FLs are likely to have motion blur subtle enough to be mistaken for other causes.

That's what it looks like on shots from this camera and the 1" sensor ZS200. What seems like just softness proves to be slight displacement during exposure, and correctable using Sharpen AI.

I won't post it, due to the results of the Law of Should Have Been Expected Complications. I grabbed something I'd just done. It happened to be from a tiny, 1/2.3" sensor Panny camera, and PS tends to generate small artifacts in the Raw conversion of Panny files (and possibly Oly).

Stabilize Mode makes them even more obvious. OTOH, of course, the DxO module for the ZS80 isn't due until next month. I haven't yet tried it on the smaller sensors; it does avoid the artifacts on MFT Panny bodies.

I'm working up another example, partly done already, but we're on the road 'til Nov 2, and about to leave a longer term visit for some shorter ones, so it won't be instantaneous.

Here's a vote for Topaz' customer service. In 2010 I bought InFocus from them, a predecessor program which never really worked and just got abandoned. To my surprise, it was eligible for an update to Sharpen AI although this had to be done manually by their support staff. Looking forward to trying it out.
