By Ctein
We don't see things the way our cameras do. The qualities of human vision can be summarized by saying that we're lousy at absolutes, pretty good at relatives, and brilliant at differences.
Consider tonality. Although human vision has an extremely long "dynamic range," on the order of 10,000,000,000:1, we don't see a lot of tonal steps. That's something we can measure. Set up two equally illuminated squares next to each other. Start turning up (or down) the brightness of one of those squares and make note of the point at which you can see the demarcation line between them. That's the smallest tonal difference you can see at that brightness level.
You can continue stepping your way up (or down) the brightness scale until you hit pitch-blackness or blinding nova-white. Count up all the steps, and that's the total number of distinct tones you can see. For the average human eye, it's about 650. In the middle of that brightness scale we can distinguish tonal differences of a mere 1%, but towards the extremes it can take a half-stop difference in brightness for us to see it.
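To put rough numbers on this, here's a toy simulation of that step-counting experiment. The parameters are illustrative assumptions, not measured data: a 10,000,000,000:1 luminance range, a 1% just-noticeable difference mid-range, degrading smoothly to about 41% (a half stop) at the extremes.

```python
import math

def weber_fraction(log_l, lo=-5.0, hi=5.0):
    """Smallest visible fractional brightness change at log10 luminance log_l.

    Illustrative model: ~1% in the middle of the range, rising toward
    ~41% (a half stop, 2**0.5 - 1) at either extreme.
    """
    mid = (lo + hi) / 2.0
    t = abs(log_l - mid) / ((hi - lo) / 2.0)   # 0 mid-range, 1 at the extremes
    return 0.01 + (2 ** 0.5 - 1 - 0.01) * t ** 2

# Walk up the brightness scale in just-noticeable steps and count them.
luminance, top, steps = 1e-5, 1e5, 0
while luminance < top:
    luminance *= 1.0 + weber_fraction(math.log10(luminance))
    steps += 1

print(steps)   # a few hundred with these toy parameters
```

Depending on exactly how you model the fall-off at the extremes, the count lands in the same few-hundred territory as the 650 figure above. This has some interesting consequences.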
First, this explains why we have trouble seeing shadow detail in prints under dim lighting but see it so clearly in sunlight; the lower the luminance, the poorer our tonal discrimination. Conversely, in sunlight it's hard to see the subtle tonal differences in the highlights.
Second, there is an optimum illuminance for viewing a print, one that maximizes the number of distinct tonal steps we can see in it. For a typical high-quality photographic or inkjet print with a brightness range around 200:1, that's around 200–300 foot candles, and the maximum number of tonal steps we can see in that print is 250–300.
Third, this is why 8-bit printer output looks as good as it does: it's close to our visual capabilities. It's not quite there, because the 256 gray levels in the print don't line up with the visual discrimination steps. 16-bit output would produce visibly, though not dramatically, better tonality. The sketch below puts rough numbers on that.
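Here is a minimal sketch of those last two points, assuming a 200:1 print range, a constant 2% just-noticeable difference, and a plain gamma 2.2 encoding; all three are simplifications of the real, brightness-dependent behavior described above.

```python
import math

# Distinguishable steps in a 200:1 brightness range at a constant 2% JND:
jnd = 0.02
steps = math.log(200) / math.log(1 + jnd)
print(round(steps))   # ~268, in the 250-300 range quoted above

# Relative brightness of each 8-bit code value under a gamma 2.2 encoding,
# clipped to the print's 200:1 range (darkest tone = 1/200 of the lightest):
levels = [max((v / 255) ** 2.2, 1 / 200) for v in range(256)]

# Adjacent code values whose brightness ratio exceeds the JND are candidates
# for a visible step; with these assumptions they pile up in the shadows:
coarse = sum(1 for a, b in zip(levels, levels[1:]) if b / a - 1 > jnd)
print(coarse)   # dozens of shadow transitions exceed the 2% threshold
```

So 256 levels is roughly the right count for a print, but because those levels don't sit where the eye's steps sit, some transitions are coarser than what we can distinguish; that's the gap 16-bit output would close.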
Under many circumstances, we see far fewer tonal steps than that. As I said, we're good at differences; the human nervous system is excellent at performing differential analysis. We can pick up on that sharp boundary between two regions of slightly different brightness. Blur out the boundary and our abilities drop dramatically. Anyone who has ever tried to uniformly illuminate a backdrop by eye knows what I'm talking about. I'm certain there is nobody here who can do that to even 1/4 stop, a 20% variation. Most folks can't do it to even half a stop.
Many will see this as a uniform light gray field, but on a calibrated gamma 2.2 monitor there's about a half-stop difference in brightness from the center to the edge.
Here the same brightness variation is in four well-defined steps. Now it's pretty easy to see that the brightness isn't uniform. Our vision is good at evaluating brightness differences but not absolutes.
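For anyone who wants to reproduce figures like these, here's a sketch of how they could be generated. It assumes numpy and Pillow; the image size, the mid-gray starting point, and the radial geometry are my own illustrative choices.

```python
import numpy as np
from PIL import Image

size = 512
y, x = np.mgrid[0:size, 0:size]
r = np.hypot(x - size / 2, y - size / 2)
r = np.clip(r / (size / 2), 0.0, 1.0)        # 0 at the center, 1 at the edge

# Half a stop from center to edge in linear brightness: edge = center / sqrt(2)
linear = 0.5 * 2.0 ** (-0.5 * r)

def encode(lin):
    """Gamma 2.2 encode linear brightness to 8-bit values for display."""
    return (255 * lin ** (1 / 2.2)).astype(np.uint8)

Image.fromarray(encode(linear)).save("gradient_smooth.png")

# The same fall-off quantized into four well-defined bands; the sharp
# boundaries make the brightness differences easy to see.
banded = np.clip(np.floor(r * 4) / 3, 0.0, 1.0)
Image.fromarray(encode(0.5 * 2.0 ** (-0.5 * banded))).save("gradient_stepped.png")
```

Viewed on a calibrated monitor, the smooth version reads as nearly uniform while the banded version plainly doesn't, which is the whole point of the demonstration.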
How we see fine detail is another matter; in that regard, we're better than most people realize. That's a topic for another day.
____________________
Ctein
The more you get into visual perception, the more you realize that cameras (digital or film) do not capture what we see, despite what most people think.
Posted by: bob wong | Friday, 02 May 2008 at 03:14 PM
Ctein has reminded me of one of my favorite book purchases of late (and definitely one of my kid's favorites), "Optical Allusions" by Jay Hosler. It is a fun look at the evolution of the eye and has all sorts of neat eye facts in it. It's also really funny, just like his other books.
Posted by: Will Sadler | Friday, 02 May 2008 at 04:24 PM
"Second, there is an optimum illuminance for viewing a print that will maximize the number of distinct tonal steps we can see in it. For a typical high quality photographic or inkjet print with a brightness range around 200:1 that's around 200–300 foot candles, and the maximum number of tonal steps we can see in that print is 250–300."
And just to keep things interesting, the museum standard for display of works on paper (including photographs) is around 5 foot candles...
Posted by: Greg Heins | Friday, 02 May 2008 at 04:39 PM
Dear Mike,
Re: your footnote. Absolutely right. In fact, the best way for people to judge the effect would be to hold up their hands and make a little frame out of them to look through so that they're only seeing the image and not the border. If you do that, it's really hard to see any difference from left to right in the smooth gradient.
Dear Bob,
Preaching to the choir! Sometimes I talk about "plausible photography" as opposed to "realistic photography." In other words, does it pass the believability test?
I've often described my life's artistic work as an effort to make prints that show people what I *saw*. Fantastically difficult and not remotely the same as making technically accurate photographs. Sometimes it involves real cheats to produce something that nonetheless looks visually correct.
~ pax \ Ctein
[ please excuse any word salad. MacSpeech in training! ]
======================================
-- Ctein's online Gallery http://ctein.com
-- Digital restorations http://photo-repair.com
======================================
Posted by: Ctein | Friday, 02 May 2008 at 04:42 PM
Cameras do not capture what we see, indeed! That is why, in my book, straight photography needs editing and post-processing, just as it once needed hours in the darkroom, to get a bit closer to what we saw (not to mention what we felt).
Posted by: Nicolas | Friday, 02 May 2008 at 04:46 PM
Dear Ctein, thanks for feeding the inner geek in me. I used to think--like most folk--that photography was a way to record a scene in a visual medium. When I first read about the differences between human sight and how film and CCDs record light, I immediately realised I had been wrong all along. Since then, I no longer worry about trying to "capture reality"; I just try to recreate what I see, or more often than not, what I would *like* to see!
Reality is overrated, anyway.
M.
Posted by: Miserere | Friday, 02 May 2008 at 04:56 PM
"Sometimes it involves real cheats to produce something that nonetheless looks visually correct."
Cf. the ancient Greeks' various tricks with architecture, if you're lucky enough to have seen one of those wonderful documentaries about building the Parthenon or something similar. To name just two, columns in the middle of a row were placed just slightly further apart to enforce the illusion of perfect regularity, and the base of the temple was bowed slightly upwards to counteract the effect of "visual weight." Neither was literally correct, but both made the results LOOK more correct.
Personally, I believe that I can sometimes tell, in galleries or museums, when a B&W darkroom printer used a viewing light that was too strong or too weak to evaluate his or her intermediate prints while printing. Shadows that seem too dark and that "infect" higher values show the use of a viewing light that was too strong, and weak, unsupported blacks that don't reach Dmax can mean a viewing light that was too dim. Sadly, what Greg says about museum viewing light is true: it's based on solid conservation principles and is done to protect the artwork, but it somewhat distorts proper viewing conditions for modern prints. When I was doing custom exhibition printing for museums, it led me to use a viewing light that was slightly too weak.
The "worst" viewing light I ever saw was probably for a very early (1839) Hippolyte Bayard direct positive print that was shown at the National Gallery's "Art of Fixing a Shadow" show in the Spring and Summer of 1989. The print was in a very dimly-lit room covered by a weighted piece of black velvet. I can't remember if there was a time limit for how long you were allowed to lift the velvet and look at the print, but I think there was. It was obviously neither a good picture nor a good print, but it had an obvious "unique object" property that made it special to see. I remember it vividly, despite the dim light!
Mike J.
Posted by: Mike Johnston | Friday, 02 May 2008 at 05:05 PM
Actually, once you are aware of what you are supposed to see in the first picture, the gradient is clearly visible (OK, maybe not clearly, but visible nonetheless). At first, though, I just skimmed across what I perceived as a blank white space.
As for photos showing what was actually seen, I've had some completely opposite experiences. I shot a concert in a dimly lit space, so I had to use the flash (something I usually avoid), and now when I remember that show, I remember it as it is in my photos: brightly lit.
Posted by: Ante | Friday, 02 May 2008 at 05:09 PM
One of the problems with striving for what you saw is that it is, as Ante points out, subject to visual memory, which is a wonderful trickster.
The painter J.S. Sargent, superb recorder of tone and light and in my estimation the finest "painter" of all time (though he didn't always have much to say), seems to have painted from the shade, looking out at the light. William Merritt Chase was right out there in the sun, and his paintings are... flat. Something to do with what Mike J. was talking about.
Bron
Posted by: Bron Janulis | Friday, 02 May 2008 at 06:30 PM
I just started reading Goethe's "Theory of Colours" (Eastlake translation). I feel like I am mainlining information, and it makes my head swim when I go out shooting. I like that.
Posted by: Nigel Robinson | Friday, 02 May 2008 at 07:26 PM
As Bron and Ante allude to, the even bigger difference between vision and cameras is that our perceived images are partially or largely reconstructions, not just recordings. The signals we get from our eyes are pretty bad, with lots of noisy or plain missing areas. So the brain does what it does best: it finds patterns and fills the image in according to its expectations of what it "should" see. If we should see something that isn't there, we add it. If something unexpected and incongruent is better explained away as a glitch, then *poof*, it's gone. Our brain is manipulating our every waking moment in a way that would give a press photography ethics committee a collective brain haemorrhage.
You don't expect a gradient in the first picture, so you don't see it. You see the second picture, return to the first, and now you suddenly see it. The problem is: did you in fact "see" the gradient at the beginning and the brain overrode that info, deciding that it must be flat; or is it really too subtle an effect for your vision system to pick up, with your brain now making up a gradient to fulfil its changed expectations? Most likely the first for most people, in this case, but make the gradient more subtle (or use a worn-out, low-contrast monitor) and you'll get to the second.
Every waking moment we lie to ourselves about thirty times a second.
Posted by: Janne | Friday, 02 May 2008 at 08:31 PM
I thought I was seeing the gradient right at first; it looked like significant "vignetting" on the right edge. But on further consideration I'm pretty sure I was seeing the difference relative to the background.
Of course, I'm just *assuming* that the background color is in fact uniform, I haven't checked that either.
Posted by: David Dyer-Bennet | Saturday, 03 May 2008 at 11:12 AM
If there's one thing that I've learnt today, it's how badly my monitor needs calibrating.
Posted by: Sean | Sunday, 04 May 2008 at 08:47 AM
Note for those who may be confused by some of the preceding comments: the original illos used a left-to-right gradient that made it easy to see the difference in tone against the surround. I replaced those figures with illos that have a symmetric gradient to eliminate that unintentional visual cue.
pax / Ctein
Posted by: Ctein | Wednesday, 07 May 2008 at 04:04 AM