By Ctein
In my last column on this topic I talked about how we see tones. This time it's about detail.
For a start, let's consider the canonical 8–10 line pairs per millimeter (lp/mm) that folks like to toss out as the limit of human visual resolution. That number is simultaneously misleadingly high and misleadingly low.
To begin with, those experiments were run on young viewers with good eyes, under optimal viewing conditions and illuminance, at a standard close focusing distance of about half a meter. Change any of those conditions and the numbers go down. Poor lighting in particular makes a big difference; so does an inability to focus that close. Under many conditions that are normal for viewing photographs, you really can't see more than five line pairs per millimeter. (Conversely, higher visual acuity or closer focusing ability can push those numbers up.)
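To get a feel for how much viewing distance alone moves that number, here's a back-of-the-envelope sketch in Python. The acuity figure of 60 cycles per degree is purely illustrative (real values vary with the viewer, the light, and the target's contrast); what matters is the scaling, in which doubling the distance halves the resolvable lp/mm.

```python
import math

def resolvable_lp_per_mm(distance_m, cycles_per_degree=60.0):
    """Line pairs per millimeter resolvable on a print viewed from
    distance_m, for an assumed angular acuity in cycles per degree.
    The default acuity is an illustrative guess, not a constant."""
    # Width on the print of one degree of visual angle, in mm:
    mm_per_degree = distance_m * 1000.0 * math.tan(math.radians(1.0))
    return cycles_per_degree / mm_per_degree

for d in (0.25, 0.5, 1.0, 2.0):
    print(f"{d:>4} m : {resolvable_lp_per_mm(d):4.1f} lp/mm")
```

With a generous acuity figure the absolute numbers climb toward the canonical 8–10 lp/mm at close range, but the fall-off with distance is the same either way.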
But...
There's a big difference between our ability to resolve closely spaced line pairs and our ability to detect edges. You can easily see a distant phone line that is several times narrower than the resolution limit, or stars that are thousands of times smaller. Add in "vernier acuity": that's what lets you use a caliper or micrometer and tell when the index lines are out of alignment by even a small fraction of the engraved line width. We're very good at picking up edge qualities.
Take two resolution bar targets, one with sharp-edged bars and the other with very fuzzy-edged bars. You can resolve about the same amount of fine detail in both, but even at the limit of visual resolution most people will readily pick out the former target as "sharper" than the latter. Even though they're not seeing any more fine detail, they are responding to the transitions at the edges—acutance, not resolution.
That means our eyes are sensitive to edge qualities at spatial frequencies about three times finer than what they can actually resolve, and experiments bear that out. Take two matched 8x10 prints, one resolving 10 lp/mm and the other 30 lp/mm, and ask folks to pick out the sharper one. Invariably they can, although if pressed they'll be unable to point to any specific detail that differs.
Visual resolution versus acuity. These two sets of radial bars (you might want to click on the image first to open up a larger version, and pardon the jaggies) have very different edge characteristics. Human vision picks up on that, even at the limits of resolution. Step across the room from your monitor and look at this image. You should be able to resolve about the same level of detail in both targets (the point at which the bars merge into a gray central disk), but the target on the right should appear a little crisper even near the finest level of visible detail. That's your eye picking up edge characteristics well beyond the resolution limit.
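If you'd like to generate such a pair of targets yourself, here's a minimal sketch using NumPy and Pillow (the image size, bar count, and blur radius are arbitrary choices): it draws a radial-bar target, then makes a fuzzed copy whose layout of detail is identical but whose edge transitions are degraded.

```python
import numpy as np
from PIL import Image, ImageFilter

def radial_bar_target(size=512, n_bars=36):
    """Radial-bar ("Siemens star") target: bars alternate black and
    white around the center, so spatial frequency rises toward it."""
    y, x = np.mgrid[-size // 2 : size // 2, -size // 2 : size // 2]
    angle = np.arctan2(y, x)
    bars = np.where(np.sin(n_bars * angle) > 0, 255, 0).astype(np.uint8)
    return Image.fromarray(bars, mode="L")

sharp = radial_bar_target()
# Same layout of detail, degraded edge transitions:
fuzzy = sharp.filter(ImageFilter.GaussianBlur(radius=2))
sharp.save("target_sharp.png")
fuzzy.save("target_fuzzy.png")
```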
This is why a print from a sheet film negative looks sharper than one made from a well-made 35mm negative, even though that 8–10 lp/mm number says the big negative shouldn't have a marked advantage. Similarly, I have tested inkjet printers that can reproduce in excess of 15 lp/mm and ones that reproduce 8–10. The naive number says there shouldn't be a noticeable difference in side-by-side comparison prints. Experiment proves otherwise.
On the other hand, small differences in readily visible detail (coarser than 8 lp/mm) are not detectable even in side-by-side comparisons. You can't see a 10% improvement in resolution; around 15% is the plausible limit. The difference won't become particularly noticeable until you hit 25–30%.
This has bearing on the pixel horsepower race in digital cameras. There's little point in trading in your camera for one that has 25% more pixels unless the overall image quality is better for other reasons (tonality, color, dynamic range, better optics or signal processing, or proven, substantially better resolution). You would be hard-pressed to see a sharpness difference from the raw pixel count alone. If you're not making at least a one-third to one-half jump in the number of pixels, save your money.
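A quick sanity check on that arithmetic, under the idealized assumption that pixel count translates directly into linear resolution (in practice it doesn't quite, as the comments below discuss):

```python
# How much extra linear resolution does a pixel-count bump buy?
# Idealized: assumes optics and processing are otherwise equal.
for extra_pixels in (0.10, 0.25, 0.50, 1.00):
    linear_gain = (1.0 + extra_pixels) ** 0.5 - 1.0
    print(f"{extra_pixels:4.0%} more pixels -> {linear_gain:5.1%} more linear resolution")
```

A 25% pixel bump buys only about a 12% linear improvement, below the roughly 15% visibility threshold above; a 50% bump buys about 22%, which is where differences start to become noticeable.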
_____________________
Ctein,
Given the importance of edges - which is probably rooted in visual processing and the brain's edge detection systems - I wonder if sharpening is more important than pure resolution in the perception of sharpness in a print?
Posted by: Ed Richards | Monday, 14 July 2008 at 01:00 PM
Ctein, There was a small paperback book published by Scientific American, from 1964 to '74 (ISBN 0-7167-0505-2, or 0504-4 [pbk]), that every photographer should read:
"Image, Object, and Illusion." It really gets into how we see things and why we see them the way we do. Some of the material may be dated, but it gets right to the point of many of our misconceptions about how we see things, or don't.
Posted by: Carl Leonardi | Monday, 14 July 2008 at 02:46 PM
"I wonder if sharpening is more important than pure resolution in the perception of sharpness in a print?"
I think it is. I remember, in the late seventies in my old photo club, a guy suddenly showing prints, about 8x10 from 35mm, which were just... wonderfully sharp. They just had a special quality. I tried for months to emulate him, in vain, until I found out that he used Agfa Rodinal film developer. Rodinal gives big grain, but it really enhances acutance. And my prints got the same "sharpness" quality after I started using it.
Posted by: Eolake | Monday, 14 July 2008 at 04:08 PM
How far do pixel counts have to increase in order to make a definite improvement in perceived quality (ignoring acutance / edge issues)?
Secondly, what is this increase given in linear pixels (rather than in overall total pixel counts)?
Thirdly, I assume that we ignore IQ/pixel-density issues as well (i.e., stick with a common sensor size)? This, I assume, would also be a factor in perceived image quality.
Posted by: Les Richardson | Monday, 14 July 2008 at 07:04 PM
Ed, Eolake: exactly. Our vision system doesn't really see edges or surfaces; it sees _transitions_. And the quality and clarity of those transitions go a long way towards creating our perception of sharpness and detail (there are some complications relating to the statistical distribution over smaller patches, but that's somewhat secondary).
In fact, you can trade resolution and acutance to some degree. If you have an image that's high resolution but somewhat fuzzy, you can deliberately downscale it with a less-than-great scaling algorithm and end up with a somewhat badly scaled, lower-resolution image that nonetheless looks sharper and more detailed (at least until you start comparing details side by side, of course). For similar reasons, adding a bit of luminance noise to an image will increase the perceived sharpness and detail.
Posted by: Janne | Monday, 14 July 2008 at 10:03 PM
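What Janne describes can be sketched in a few lines of Python with Pillow and NumPy. This is a rough illustration, not a recipe: the filenames, the downscale factor, and the noise level are placeholder choices, and nearest-neighbor resampling stands in for the "less-than-great scaling algorithm."

```python
import numpy as np
from PIL import Image

# Trick 1: downscale with nearest-neighbor resampling, which keeps
# hard pixel transitions instead of smoothing them away.
img = Image.open("fuzzy_original.png").convert("L")   # placeholder file
crunchy = img.resize((img.width // 2, img.height // 2), Image.NEAREST)

# Trick 2: add a touch of luminance noise (a sigma of ~4 out of 255
# levels is a guess; tune to taste).
arr = np.asarray(crunchy, dtype=np.float32)
noisy = np.clip(arr + np.random.normal(0.0, 4.0, arr.shape), 0, 255)
Image.fromarray(noisy.astype(np.uint8), mode="L").save("crunchy_noisy.png")
```

Whether the result actually reads as sharper is for your own eyes to judge, ideally at normal viewing distance rather than pixel-peeping.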
Dear Ed and Eolake,
Yes, acutance trumps resolution and detail. That's why so many folks are fond of upscaling programs like Genuine Fractals or Blow Up. They all sacrifice some small measure of real detail in exchange for markedly sharper edges.
A VERY crude rule of thumb is that a photograph with really clean, sharp edges will overall look about as good as one with twice the resolution but really lousy edges.
Of course, producing such a photo is another matter. This is a problem that will eventually be solved computationally, but getting those clean, sharp edges without introducing visible artifacts is probably going to require throwing teraflops at it. On the other hand, we now have teraflop GPUs, so maybe we're just waiting for someone to write good enough algorithms.
~ pax \ Ctein
[ please excuse any word salad. MacSpeech in training! ]
======================================
-- Ctein's online Gallery http://ctein.com
-- Digital restorations http://photo-repair.com
======================================
Posted by: Ctein | Monday, 14 July 2008 at 10:13 PM
Dear Les,
That is an unanswerable question, because there is no agreed-upon qualitative meaning for "image quality," let alone a quantitative one.
"Image quality" is just a polysyllabic way of saying "I like the way it looks."
~ pax \ Ctein
[ please excuse any word salad. MacSpeech in training! ]
======================================
-- Ctein's online Gallery http://ctein.com
-- Digital restorations http://photo-repair.com
======================================
Posted by: Ctein | Monday, 14 July 2008 at 10:17 PM
Ctein, thanks for another great article. I always like the ones where you break things down to the biological level and smack us around the head with some physics and big words.
Maybe I'm a masochist. Or a nerd. Or both. Who cares, keep writing 'em!
Peace back,
M.
Posted by: Miserere | Tuesday, 15 July 2008 at 12:44 AM
Anyone who's spent any time digitally printing images of marginally acceptable focus will know first-hand that the ability to sharpen optimally is almost more important than the ability to focus optimally.
Posted by: Alec Myers | Tuesday, 15 July 2008 at 07:22 AM
By "resolution" do you mean linear resolution? I suspect you do. If you do then, if it's not worth upgrading for less than around 25% increase in resolution, this means it's not worth upgrading for anything with less than about 50-60% more pixels. (1.25^2 = 1.5625). So, probably it is not worth upgrading without a factor of 1.5 to 2 increase in pixels.
Posted by: Tim Bradshaw | Tuesday, 15 July 2008 at 09:27 AM
Thanks Ctein,
Now I'll be seeing those test patterns superimposed over all the cute girls I'm scoping on this wonderful summer day. A nice '60s hairdo on 'em, to boot, and I'm rocking.
Posted by: David | Tuesday, 15 July 2008 at 10:13 AM
Dear David,
If you're seeing those test patterns superimposed on the photos on your CRT monitor, then I'm afraid it's because you left the image up on the screen too long and you burned it into the phosphors. Our corporate attorneys inform you that TOP is not responsible for consequential damages [s].
If, on the other hand, you are using an LCD monitor, then I recommend cutting back on the use of recreational pharmaceuticals shortly before or during photographic editing sessions. [g]
If neither of these conditions applies, then I'm afraid the answer is that you just get high on life. There is no cure for that.
[VBG]
~ pax \ Ctein
[ please excuse any word salad. MacSpeech in training! ]
======================================
-- Ctein's online Gallery http://ctein.com
-- Digital restorations http://photo-repair.com
======================================
Posted by: Ctein | Tuesday, 15 July 2008 at 01:33 PM
Dear Tim,
You've got that right! No knowledgeable writer talks about anything but linear resolution; it doesn't even need to be specified. "Areal resolution" doesn't have any meaning except as a partial measure of the total amount of information in an image file; it does not correlate in any way with visual characteristics. Anyone who talks about it as if it's meaningful to photography is probably the kind of person who went around saying that 645 film format was "three times bigger" than 35mm film format. Take anything they say on the subject with a very large grain of salt.
Okay, climbing down from the soapbox...
Your rule of thumb for pixel counts is the same as mine. I don't even pay attention if it isn't at least a 50% increase. A good example of this was brought up by one of our members (I'm afraid I can't remember who), who commented that they had moved from a 16 megapixel to a 21 (?) megapixel camera and could see an improvement in sharpness, but it was slight. That's just what you'd expect from 'theory.'
Practice more often deviates from theory. No two cameras process pixels exactly the same way; some of them extract a lot more detail from a given number of pixels than others do. For example, comparing the flock of 10 megapixel cameras with 8 megapixel ones, on average the improvement in fine detail was insignificant. But, if you compared a specific 8 megapixel camera to a specific 10 megapixel camera, you might see anything from no improvement in sharpness to a readily perceivable improvement in sharpness. As a class, though, 10 megapixel cameras were not anything that 8 megapixel camera owners should have gotten excited about (if resolution and sharpness were overriding considerations).
~ pax \ Ctein
[ please excuse any word salad. MacSpeech in training! ]
======================================
-- Ctein's online Gallery http://ctein.com
-- Digital restorations http://photo-repair.com
======================================
Posted by: Ctein | Tuesday, 15 July 2008 at 01:50 PM
In practical terms, my sharpening workflow for scanned 4x5 black and white negatives is to sharpen at 100% in Photoshop to the level where the image is as sharp as I can get it without artifacts anywhere. (Which might include some selective desharpening if there is a mix of high and low frequency detail.)
I then use Qimage to process the image for printing. It does a pretty good job of sharpening to the right level for the output size. This deals both with the different sharpening levels needed for different print sizes and with the differential sharpening of edges versus texture. Not perfect, but better than I can do in PS, and most folks who see my prints seem impressed. If you look at the final images in PS, especially those for smaller prints like 8x10, they look horribly oversharpened, but printed they look great. This might be the biggest problem with sharpening: what looks good in a print is not what looks good on screen; at the least, it looks different on screen than on paper.
Posted by: Ed Richards | Tuesday, 15 July 2008 at 02:35 PM
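For anyone who wants to experiment with output sharpening outside of Qimage, here's a loose analogue of Ed's second step in Python with Pillow. The print resolution, print size, filenames, and unsharp-mask settings are starting-point guesses, not Qimage's actual recipe:

```python
from PIL import Image
from PIL.ImageFilter import UnsharpMask

PRINT_PPI = 360                        # assumed printer input resolution
PRINT_INCHES = (10, 8)                 # target print size

img = Image.open("scan_4x5.tif")       # placeholder filename
target = (PRINT_INCHES[0] * PRINT_PPI, PRINT_INCHES[1] * PRINT_PPI)
img = img.resize(target, Image.LANCZOS)

# Output sharpening is applied after resampling and judged by the
# print; as Ed notes, the right amount looks crunchy on screen.
sharpened = img.filter(UnsharpMask(radius=1.2, percent=120, threshold=3))
sharpened.save("for_print_8x10.tif")
```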