OK, here I go tilting at windmills again. It's just that this one bugs me. Well, they all bug me. Well, but this time I'm really justified being bugged.
Yeah, but I always think that.
Never mind. I've got the donkey, I've got the lance, and I'm riding with both. For truth, justice, and Dulcinea!
Ninety-nine percent (conservatively) of the folks talking and writing about digital photography are misusing the term "dynamic range." They should be talking about exposure range. The two are not the same thing; they can differ by several stops. Both are entirely acceptable terms; in fact, exposure range was well established in traditional photography before some people with a little (as in "a dangerous thing") technical knowledge decided it would be cooler to call it dynamic range. They were wrong. Dynamic range is a preexisting term in electronics and it means something different. Replacing an accurate, understood, established, and entirely adequate term with an incorrect and inaccurate one does not strike me as a service to the craft.
So, let the lecture begin. There may be a quiz tomorrow.
Dynamic range is the range of signal that a sensor can record. As a thought experiment, let's imagine an ideal, noiseless monochrome sensor that converts each incoming photon to a photoelectron, and where each pixel can hold 1,000 photoelectrons. Then its dynamic range is 1000:1, because the smallest signal it can record is one electron and the largest is 1,000. That's roughly 2 to the 10th power, or 10 stops (or 60 dB, if you're into that kind of thing). You'll find that information in the technical spec sheet from the sensor manufacturer.
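For the arithmetically inclined, here's that conversion as a tiny sketch. The numbers are the thought-experiment sensor's, not any real camera's; the decibel figure uses the amplitude (20·log10) convention common in imaging:

```python
import math

full_well = 1000   # electrons a pixel can hold (the thought-experiment sensor)
noise_floor = 1    # smallest recordable signal: a single photoelectron

ratio = full_well / noise_floor       # 1000:1
stops = math.log2(ratio)              # photographic stops
decibels = 20 * math.log10(ratio)     # dB, amplitude convention

print(f"{ratio:.0f}:1 = {stops:.1f} stops = {decibels:.0f} dB")
```

Strictly, log2(1000) is about 9.97 stops; "10 stops" is the sensible rounding.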
That does not mean the exposure range, the range of brightnesses that can be captured in a single exposure, is 10 stops. It's likely to be considerably different, for a number of reasons. Real-world noise intrinsic to the sensor-plus-camera system may reduce the exposure range below the sensor's dynamic range. Photon counting statistics, on the other hand, increase the exposure range beyond the dynamic range. We'll get to that next week (so hold your questions, please).
So, intrinsic noise. What are we talking about? Real sensors and cameras don't have zero noise, although they get surprisingly close under the right conditions. Some of the noise is constant, some depends on how many photons the sensor collects, some depends on temperature, and some on how long the exposure is. It's complicated.
What impact does this noise have? One effect is that it lets you record smaller differences in brightness than an "ideal" analysis says you should, e.g., intermediate levels between one-photon-per-pixel and two-photon-per-pixel illuminances. I covered that at length in the column "Noise is Your Friend."
Seemingly paradoxically, noise also reduces the number of grey levels you can meaningfully distinguish. This paper by Emil Martinec of the University of Chicago analyzes that most beautifully. The text may not be for the mathematically faint-of-heart, but the illustrations are understandable.
(An aside: If you're thinking all this further trashes the arguments of the "expose-to-the-right" crowd, as if I hadn't done that sufficiently already, you'd be right.)
Noise also has another effect that interests us more at the moment: it reduces the contrast in the shadows, and that can affect usable exposure range. In our ideal, frictionless world, black is black and the recorded signal is directly proportional to the number of photons detected. The top row of this illustration shows that:
Adding a small amount of system noise to the record, though, disproportionately affects the smallest signal levels, that is, the darkest tones. In rows two and three of the illustration above, I've added small but increasing amounts of noise to the signal. The eight-photon-per-pixel zone doesn't change very much, but the less-exposed zones get progressively lighter, with the greatest difference visible at the black end of the scale. While the noise hasn't directly changed the exposure range of the camera at the dark end of the scale, it has drastically lowered the contrast there.
Add enough noise and you get something like the fourth row, where you've lost any meaningful distinction between pure black and a one photon per pixel exposure. In principle the difference is measurable, and the more pixels you have (the finer the grain) the easier it will be to make out that difference. But practically speaking, there isn't much to work with.
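You can see the same effect in a toy model. This is my own sketch, not the illustration's actual data: assume Gaussian system noise, clip negative readings to zero (a sensor can't report a negative count), and compare the average recorded levels of a black zone, a one-photon zone, and an eight-photon zone as the noise grows:

```python
import random

random.seed(0)

def mean_recorded_level(photons_per_pixel, noise_sigma, n_pixels=100_000):
    """Average recorded signal across a zone of uniformly exposed pixels,
    with Gaussian system noise added and negative readings clipped to zero."""
    total = 0.0
    for _ in range(n_pixels):
        reading = photons_per_pixel + random.gauss(0, noise_sigma)
        total += max(reading, 0.0)  # clipping: no negative counts
    return total / n_pixels

# One line per "row" of the illustration: no noise, a little, more.
for sigma in (0.0, 0.5, 1.0):
    black, one, eight = (mean_recorded_level(p, sigma) for p in (0, 1, 8))
    print(f"sigma={sigma}: black={black:.2f}  1-photon={one:.2f}  8-photon={eight:.2f}")
```

Because the clipping only bites near zero, the noise lifts the black zone the most, nudges the one-photon zone a little, and leaves the eight-photon zone essentially alone, which is exactly the shadow-contrast squeeze described above.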
(Side note: deciding what constitutes the darkest usable tonal difference is one of the vexing parts of measuring exposure ranges. It's not like at the white end of the scale, where there is a clear boundary when you saturate the sensor. At the dark end of the scale, the researcher has to make a judgment call right at the beginning about what tonal difference is considered meaningful.)
At low ISOs, with short exposures and moderate temperatures, system noise doesn't have a very big impact on exposure range. Go to long exposures and high ISOs and it's easy to see the effects. When you open such Raw photographs in Adobe Camera Raw, it may even tell you that there is no black clipping at all, even though you can look at the photograph and see there is no subject detail in the shadows at all. The overall noise level is high enough that you don't have any true black pixels, even among pixels that received no light whatsoever.
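Here's a toy sketch of why that happens; all the numbers are hypothetical, not from any particular camera. Raw files typically record noise riding on a fixed black-level offset, so even pixels that saw no light at all scatter above zero rather than clipping to it:

```python
import random

random.seed(1)

n_pixels = 100_000
bias = 512          # hypothetical black-level offset in raw counts
noise_sigma = 2.0   # hypothetical high-ISO read noise, in counts

# Readings from pixels that received no light at all: pure noise on the bias.
readings = [round(bias + random.gauss(0, noise_sigma)) for _ in range(n_pixels)]

print("lowest unexposed-pixel reading:", min(readings))
print("pixels at true black:", sum(1 for r in readings if r == 0))
```

With the noise floor several sigma above zero, not a single pixel lands at true black, so a raw converter's clipping indicator quite honestly reports no black clipping while the shadows hold nothing but noise.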
No simple answer
In the real world, it gets much more complicated than that. Sensors aren't 100% efficient, and on-sensor noise can actually improve photon capture efficiency under certain circumstances. Color muddles this substantially, which is why I limited these thought experiments to a monochrome sensor. Moreover, sensors don't have to be purely linear in their behavior, where the stored charge is directly proportional to the amount of light. Currently, I think (I could be wrong) that all ordinary cameras do have roughly linear sensors, but there are a number of specialized cameras that don't, and NASA and other astronomical researchers have designed specialized sensors with considerably more than a 20-stop range for their work (and they're not achieving that by storing several million photoelectrons per pixel).
So, what's the simple answer? Well, that's the problem! There isn't one. The electronic characteristics of the sensor and the camera can have significant effects on the actual exposure range realized in the real world, and the kind and magnitude of those effects depend on the circumstances of the exposures. There are many, many engineering rules of thumb that get used for estimating the relationship between dynamic range and exposure range, but there's no universal rule like, "multiply the dynamic range by 0.95 to get the exposure range."
Things get even more interesting when we include the way light works in the real world instead of in some people's imaginations. The real exposure range of the sensor turns out to be significantly greater than its dynamic range. That will be the topic of next week's column.
Don Ctein de la Daly City mounts his faithful donkey and goes at the windmills on Wednesdays at TOP.
Note: Links in this post may be to our affiliates; sales through affiliate links may benefit this site. More...
Original contents copyright 2012 by Michael C. Johnston and/or the bylined author. All Rights Reserved.