From time to time on the Internet, including several times in the comments to Ctein's post yesterday, I hear people wish that digital SLRs had an automatic "expose to the right" mode that would put the histogram right up to the right edge, pushing the exposure as high as it can go without saturating any highlight pixels. Theoretically, for any given scene—even those that exceed the dynamic range of your sensor—this would give you the most to work with in software, maximizing shadow detail and contrast and minimizing noise.
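For concreteness, here's a rough sketch of the calculation such a mode would have to perform, assuming a hypothetical 12-bit sensor and made-up names; it is an illustration, not any camera's actual algorithm. Note that the `clip_fraction` parameter, which decides how many pixels are allowed to blow out, is exactly the judgment call at issue below.

```python
import numpy as np

def ettr_shift(raw, full_scale=4095, clip_fraction=0.001):
    """Suggest an exposure shift, in stops, that would push the
    brightest non-specular values to the right edge of the histogram.

    raw: array of linear sensor values (hypothetical; real raw data
    would need black-level and per-channel handling first).
    clip_fraction: the fraction of pixels we allow to clip, so a few
    isolated specular points don't dominate the calculation.
    """
    # Value below which (1 - clip_fraction) of the pixels fall
    bright = np.quantile(raw, 1.0 - clip_fraction)
    if bright <= 0:
        return 0.0
    # Stops of unused headroom between that value and saturation
    return float(np.log2(full_scale / bright))

# A dim scene: meaningful values top out near 512 on a 12-bit scale
scene = np.random.default_rng(0).integers(0, 512, size=100_000)
print(round(ettr_shift(scene), 1))  # → 3.0 (about three stops of headroom)
```

With `clip_fraction=0`, a single streetlight pixel would drag the whole exposure down; with it set too high, real highlight detail clips. That tension is the subject of this post.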
Sounds good, right? For many scenes it would be. Maybe it would even be useful, overall.
The reason it wouldn't work all the time is that there are plenty of occasions when you want your highlights to overexpose. In the kind of shooting I do, I encounter them often: A dusk scene sprinkled with pinpoint streetlights and house lights; a lonely winding road with a single car's headlights coming at you in the middle distance; backlit bright afternoon summer sunlight sparkling on windswept water.
These are examples of the many scenes that have in them what are called "specular highlights," a generalized way of referring to accent highlights that are off the scale—the kind that in traditional black-and-white photography would be purposefully rendered as "paper white." If a camera's meter could detect these and put them within the scale of recordable values (tones)—"expose them to the right"—then the rest of the exposure might be so far wrong as to not be salvageable. The most common example is probably just the sun in the picture, assuming your lens is good enough not to flare out.
(Ansel Adams once performed an unusual virtuoso trick with the sun. Utilizing partial solarization, a peculiar property of film whereby extreme excess exposure not only triggers a failure of the reciprocity law but also results in less negative density, he created a picture (in Portfolio V, 1970) called "The Black Sun, Tungsten Hills, California," in which the disk of the sun is rendered near-black and the rest of the picture is exposed normally and not solarized. I suppose that's not really pertinent here, but it's a nice anecdote. Incidentally, Portfolio V was the first of Adams's portfolios he printed large, on 16x20 paper, and for which he used older negatives, and he committed to never printing any of those negatives again. Despite this limitation, a few of them are now among the most valuable Adamses.)
A picture of Adams you've never seen before: Ansel Adams, Photo Booth Self-Portrait, c. 1930, from the collection of the Archives of American Art, in the Katherine Kuh papers. If he could have obtained the negative from the machine, he probably would have tried to print it better!
Last night I snapped the "record shot" shown above of our house all dolled up for Halloween (handheld at 1/5th of a second and dead sharp, O ye who doubt the usefulness of IS—but I digress. Again). In this shot, would you really want the camera to "place" the luminances of the electric pumpkin next to the steps "to the right," so that none of its pixels were saturated? If it had, I would have lost an unacceptable amount of shadow information off the left side of the histogram—far more than I actually did lose. The picture would have been useless. There are many such situations in real-life shooting that would require operator intervention if such a feature existed. There's a reason why "averaging" metering is so effective.
Incidentally—I digress yet again—we had a nice Halloween, with about 250 trick-or-treaters. Generally, we have an exceptionally attractive cohort of rug rats in these precincts, and the little chuppers are very polite. For a while I had to stand out front, because a fair percentage of four- and five-year-olds find our house sufficiently scary that they won't come up to the door. The strangest things I saw were a) a teenager elaborately dressed as a toilet, and b) three middle-aged Hispanic women dressed in black plastic bags over their clothing actually asking for candy. I don't know—recent immigrants who aren't quite clued in to the subtleties of the holiday yet? Beats me. Apart from such anomalies it was a very social evening, a chance to see neighbors seldom seen, with family bands roaming up and down the streets and lots of animated conversation.
But back to our topic. Cameras do an awful lot for us these days—including, now, providing a near-instant "Polaroid" so you can check your composition and focus, and (on some cameras) tricolor histograms so you can check exposure in every color channel. Sci-fi stuff by the standards of, say, Ansel's lifetime. But no matter how much our cameras do for us, there's never—I mean never—going to be a mechanical-electronic means of getting every exposure just the way you want it, if only because "perfect exposure" is forever going to be partially a matter of taste and individual intention, at least occasionally. For better or for worse, the best way of creating the best exposure includes the application of experience and judgment as well as the use of the measurement device between your ears along with all your other measurement devices...however sophisticated the latter might be.
Featured Comment by Thom Hogan: Obviously, 'it wouldn't work all the time.' Current metering systems don't 'work all the time' either.
But would Mike object to an Ansel Adams Zone System spot meter that gave you a precise indication of where other values were if you were to assign one reading as Zone X or I? Somehow, I think not.
Because this is a highly visible site, I worry about seemingly outright rejection of an idea, as it often causes the camera makers to reject more than the basic idea. Let me see if I can explain.
One problem with digital is that it is indeed linear, and so much of the lower half of the tonal range is recorded in very few bits. Besides producing less capable tonal ramps, this also means a lower signal-to-noise ratio in the low exposure zones.
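This point is easy to make concrete: in a linear raw file, the top stop of the range claims half of all code values, and every stop below it gets half as many. A quick sketch, assuming a hypothetical 12-bit sensor (real cameras vary):

```python
# Levels available per stop in a 12-bit linear raw file.
# The top stop spans half of all code values; each stop below
# gets half as many, so deep shadows are described by only a
# handful of levels (and are proportionally noisier).
bits = 12
levels = 2 ** bits  # 4096 code values total
for stop in range(1, 7):
    top = levels // (2 ** (stop - 1))      # upper bound of this stop
    bottom = levels // (2 ** stop)         # lower bound of this stop
    print(f"stop {stop}: {bottom}..{top - 1} -> {top - bottom} levels")
```

The first line printed is `stop 1: 2048..4095 -> 2048 levels`; by the sixth stop down, only 64 levels remain, which is why exposing to the right in principle buys you cleaner, better-graduated shadows.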
The other problem is this: we don't know what our cameras are telling us. That's particularly true if you shoot raw, as the histogram and highlights display isn't based upon the actual data, but an interpretation of the data (the embedded JPEG, demosaiced with the camera settings). Worse still, not a single manufacturer that I know of has revealed at what point their highlights display triggers. Essentially, the camera makers are assuming we're not very smart and they're trying to protect us from ourselves.
We really need several bits of information:
1. How much of the scene has photosite saturation (well overflow) in it.
2. Where that saturation lives in the scene.
3. What channel(s) that saturation lives in.
4. How much of the scene has photosite underutilization*.
5. Where that underutilization lives in the scene.
6. What channel(s) that underutilization lives in.
7. What the underutilization assumption is (and this is one that should be user changeable; some people have more aversion to noise than others).
8. Histograms on well data in the range between saturation and underutilization.
*We don't have a good name for this. I use "underutilization" to mean the point at which the signal-to-noise ratio drops below a certain threshold, essentially the point below which additional detail can no longer be distinguished from noise.
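As a rough sketch of what items 1 through 7 would amount to in practice (the function name and camera constants here are hypothetical, and real raw data would need black-level subtraction and demosaic-free channel handling first), a camera or raw tool could compute something like:

```python
import numpy as np

def exposure_report(raw_rgb, full_scale=4095, noise_floor=16):
    """Per-channel saturation and 'underutilization' statistics.

    raw_rgb: H x W x 3 array of linear values. full_scale and
    noise_floor are hypothetical camera constants; noise_floor is
    the user-changeable assumption of item 7 above -- values at or
    below it are treated as noise-dominated.
    """
    report = {}
    for i, name in enumerate(("R", "G", "B")):
        ch = raw_rgb[..., i]
        saturated = ch >= full_scale    # items 1 and 3: how much, which channel
        underused = ch <= noise_floor   # items 4 and 6
        report[name] = {
            "saturated_pct": 100.0 * saturated.mean(),
            "underused_pct": 100.0 * underused.mean(),
            # items 2 and 5: where -- a coarse map a camera could overlay
            "saturated_rows": np.flatnonzero(saturated.any(axis=1)),
        }
    return report

# One saturated red pixel in an otherwise black 4x4 frame
frame = np.zeros((4, 4, 3), dtype=int)
frame[0, 0, 0] = 4095
print(exposure_report(frame)["R"]["saturated_pct"])  # → 6.25
```

A camera would present these numbers as overlays ("zebras" for each channel) rather than arrays, and item 8 would then be a histogram restricted to the range between `noise_floor` and `full_scale`.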
If Mike had all that information, he'd actually be more capable of interpreting whether the blowout his camera is telling him he has is really what he wants and whether the rest of the exposure is working. But we're not likely to get that information if camera makers think that anything relating to expose-to-the-right (ETTR) is going to be rejected by a significant portion of the shooting public.
In the picture he shows as an example, we've got a lot of the frame rendered essentially with no detail. He very well may want it to be that way (after all, Halloween is supposed to be spooky, and low/no detail is supportive of spooky), but let's say there was a face in one of the windows. Does he have it in his exposure in a way that he can work with it in post processing? You wouldn't know with the current camera exposure helpers. And, yes, an Auto ETTR would worsen that problem. More on that in a moment.
The point is that most of us who talk about anything related to ETTR really are asking for more, and more accurate, information about the exposure. Since in digital we can review the exposure immediately, it makes good sense to make sure that the information we have is optimal in that respect. It currently isn't.
The idea of an automatic expose to the right exposure mode is akin to Program exposure mode or (ahem) Scene exposure modes. It would thus obviously be targeted at novices who don't know what they're doing or don't want to think about anything more than where to point the camera (and quite frankly, it would need an automatic post processing companion to bring the exposure back to visually appealing).
Mike is right in that every time we cede control to an automatic control, we give up some of the decision-making that is integral to optimizing the making of a photograph. Sometimes we do that because we're not sure we can make those decisions fast enough (autofocus comes to mind), but most pros think seriously about as many of those decisions as they can for every picture they take.
Still, right now we have to guess at the accuracy of the information we're being given to evaluate our decision-making. I'd rather not guess. Give me the right data, presented correctly. If that means that we also get an Auto ETTR mode that I'll never use but might work for someone else, that's fine with me. I can ignore those (ahem) Scene exposure modes as long as I have P/A/S/M, for example, and ETTR is something along those lines: I can ignore Auto ETTR as long as I have the info that underlies it so that I can manually control it.
So let's not try to talk the camera makers out of exploring ETTR. Let's instead convince them that there are consumer and pro aspects of ETTR and we both want an optimal set of controls for it.