By Richard Howe
I've recently made a whole bunch of uncontrolled and statistically invalid experiments (but fun and, to me, anyway, informative) to try to get a handle on these issues of fidelity, "true to lifeness," for myself. The following condenses and simplifies what I actually did by quite a bit, and is more methodical in the telling than I actually was in the doing, but since what I want to share is mostly the results, it's enough (I hope) just to recount the outlines of the process. After all, it's not a lab report, and when I was making the experiments it never occurred to me that I might be communicating the results to anyone but a few photographer friends.
I photographed the stuff on my work table just to the left of my monitor, with the camera about where my head is when I'm looking at the screen, at a focal length that more or less matched my eyes' field of view, in interior (indirect) daylight (but with a good gray card and calibrated white and black), and varied both depth of field and, especially, the overall resolution (do my 16.7 MP really matter? Are eight enough? etc.).
I then brought these up in Photoshop, and looked at the images life size as seen on the 30" monitor in front of me, and then, just turning my head and eyes a little bit, looked at the reality they were supposed to represent. (By the way, I covered one eye for this, so as to exclude the contribution of stereo vision to the sense of reality.) And then, of course, I tried to adjust the images in Photoshop to make them, or at least one of them, match the reality as closely as possible.
I then made two 13x19" prints (Epson 2200 on Epson Enhanced Matte paper and on the new Exhibition Fiber paper) of the "best" one, i.e., most realistic (to me), and propped them up vertically (but only one at a time) to the right of the monitor.
Now I could look a little to my left and see "reality," look straight ahead and see the monitor version, and look a little to my right and see the print.
Here's what I found out:
It's impossible (at least for me) to exactly match the image, either on the monitor or as a print, to the reality. No surprise there.
It is possible to get close enough to make the effort interesting. This actually was a surprise.
Resolution/sharpness mattered, of course, but a 1-pixel-radius Gaussian blur didn't make enough difference to matter (remember, these were pretty hi-res images to begin with).
A very modest artificial selective focus achieved using a circular layer mask and a slight blurring of one of the layers, so that the center was perceptibly sharper than the periphery, helped, as long as the images were more or less life size. At smaller sizes, this didn't matter at all.
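The masked-layer trick amounts to blending a sharp layer with a slightly blurred copy of itself through a radial mask. Here is a minimal sketch of that general idea in Python with numpy; it is my illustration of the technique, not the actual Photoshop procedure (the box filter stands in for a small Gaussian blur, and the mask geometry is arbitrary):

```python
import numpy as np

def radial_mask(h, w, inner=0.4, outer=1.0):
    """Mask that is 1.0 at the centre, falling to 0.0 toward the corners."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot((y - cy) / cy, (x - cx) / cx) / np.sqrt(2)  # 0 at centre, 1 at corner
    return np.clip((outer - r) / (outer - inner), 0.0, 1.0)

def box_blur(img, k=3):
    """Crude stand-in for a small-radius Gaussian blur (separable box filter)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, out)

def selective_focus(img):
    """Blend the sharp image with its blurred copy through a radial mask,
    so the centre stays sharp and the periphery softens."""
    m = radial_mask(*img.shape)
    return m * img + (1 - m) * box_blur(img)
```

The effect is deliberately subtle: where the mask is 1.0 the original pixels pass through untouched, and the blur only takes over toward the edges of the frame.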
Color mattered more than anything else (if everything else was okay to begin with, of course). But here it got more interesting.
Matching the luminosities mattered a lot. This meant, among other things, substantially reducing the brightness of my monitor (and recalibrating accordingly) as well as getting more light onto the print (this introduced some white balance problems, but not enough to matter that much, or so it seemed).
Saturation was complicated, and the most difficult parameter of all: it was difficult (actually impossible) to match the subjective impression of the highly saturated colors of the reality without overcooking the less saturated colors—is there a "saturation dynamic range" akin to the luminosity dynamic range?
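One way to make that "saturation dynamic range" question concrete: instead of a flat saturation boost, apply a curve that leaves muted colors nearly alone and pushes already-saturated ones harder. This is a hypothetical sketch using Python's standard colorsys module, not anything from the experiments above; the quadratic curve and the constant k are invented for illustration:

```python
import colorsys

def boost_saturation(rgb, k=0.5):
    """Nonlinear saturation curve: s' = s + k * s**2, clipped to 1.0.
    A muted colour (small s) barely moves; a vivid one (large s) moves a lot."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s2 = min(1.0, s + k * s * s)
    return colorsys.hsv_to_rgb(h, s2, v)
```

A near-gray such as (0.5, 0.45, 0.45) gains only about half a percent of saturation under this curve, while a vivid red like (1.0, 0.2, 0.2) is driven to full saturation.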
White balance mattered a lot, possibly more than anything else, and how closely the white balances of the images matched reality had a large effect on how well the whole range of saturations could be matched. Matching hues, especially in the highly saturated range and in the near-true-gray range, contributed more than matching hues in between. Incidentally, adjusting the color balance so that the gray card was truly neutral (a = 0, b = 0 in Lab color space) by no means produced the most realistic-looking results.
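For reference, the gray-card check can be done numerically: convert sRGB to Lab and look at the a and b channels, which are both zero for a true neutral. The sketch below uses the standard sRGB → XYZ → Lab math with a D65 white point (the matrix coefficients are the usual 4-decimal rounded values, so a nominal gray comes out within a few hundredths of neutral rather than exactly 0, 0):

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIE Lab (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # undo the sRGB transfer function to get linear light
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear sRGB -> XYZ (D65)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])  # normalise by the D65 white
    # XYZ -> Lab
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b
```

A mid-gray card rendered as RGB (0.5, 0.5, 0.5) should come out around L = 53 with a and b essentially zero; a visibly non-zero a or b means the card isn't neutral in the capture.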
I couldn't tell any difference between the two papers. This was the biggest surprise, especially since I'm something of a nut about papers.
Although I work in a thoroughly color calibrated environment, what worked for the screen and what worked for the prints were so different that they couldn't be exchanged. A print from the version that looked best (most realistic) on the monitor looked horribly bland as a print, and the print version looked horribly overcooked on the screen. The prints required "turning up the volume" substantially on everything, especially saturation, sharpening, and "local contrast"; the screen versions (even with the monitor dimmed way down) had to be much more subdued.
What surprised me most of all was that I really liked what I judged to be the most realistic images, on screen or as prints. I'd always assumed that I had to at least slightly (if carefully) overcook an image to compensate for the disparity vis-a-vis reality.
The lessons learned (mine, anyway):
1. My expectations and ideas about what "looks realistic" have more of an effect on how I judge the fidelity of a photograph than anything else, including its "true" fidelity or lack of it.
2. Getting a reasonably high fidelity result means balancing many trade-offs among all the available parameters, and can only be achieved if the reality is right there in front of me for immediate comparison (or maybe I just have a lousy visual memory, though there is some evidence that I'm okay on that score).
3. If the intent is to convey one's response to what one has seen, rather than to make the truest possible image of it, then we're in the realm of art, or one of its realms, anyway, and all bets are off.