It seems like there's a lot of confusion out there about exactly what "dynamic range" is. It's just a not-very-good term for how much subject luminance range a given film or sensor can capture at one go.
It gets back to looking at the world. For any given framing of the world in a camera viewfinder—call it a "view"—the subject within the frame will have a certain range of luminances. Let's say your view contains both a tree trunk in shadow as its darkest object and a patch of sunlit grass as its brightest object. Photographers tend to measure this—what Phil Davis called the subject brightness range (SBR)—in stops (as you know, the term refers to a halving or doubling of the amount of light or the exposure). If the subject—your chosen view of the world—has a measured seven stops of difference (two to the seventh, or 1:128) between these darkest and lightest subject areas, then the SBR for that view is 7 stops.
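That stops arithmetic is just a base-2 logarithm of the luminance ratio. A minimal sketch in Python (the luminance values here are made-up numbers for illustration):

```python
import math

def sbr_stops(darkest, brightest):
    """Subject brightness range in stops: one stop is a doubling of
    light, so the SBR is the base-2 log of the luminance ratio."""
    return math.log2(brightest / darkest)

# A 1:128 ratio between the shadowed tree trunk and the sunlit
# grass is two to the seventh, i.e. a 7-stop SBR:
print(sbr_stops(1, 128))  # → 7.0
```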
An imager (film or sensor) typically has an engineered-in maximum capacity to record a range of brightnesses. That's what's known as its "dynamic range." (I'm going to go ahead and abbreviate dynamic range as "DR" here, but note that in sensitometry, DR means density range.) The important part of D(ynamic)R is not its absolute specification—although more is generally better and less is generally worse—but how well it matches up to the subject you're trying to record.
Different scenes or subjects can have different brightness ranges. If you put a gray cat in front of a gray concrete wall in open shade, you might have an SBR of only two or three stops. Any currently available film or digital sensor will have no trouble recording that entire range of brightnesses. And note that this doesn't depend on the actual tone of the subject: white cat on white background, black cat on black background, doesn't really matter: each might have an SBR of two or three stops. What we're looking at is the range of relative brightnesses within the view. If you have a high-contrast subject—say, bright sunlit snow and an open barn door showing its dark, unlit interior in the same view—the SBR might be 10, 12, or 14 stops or even more.
The popular way of graphing SBR and DR these days is with the histogram. (Traditionally, it was with curves, originally called "H&D" curves, for Hurter and Driffield, who first devised the method of plotting density against exposure.) In a histogram, an SBR that fits within the DR will show all of the tabulated frequencies contained entirely within the display. As you know, this provides you a bit of leeway in exposure—you can move the "lump" of the histogram a little more to the right or a little more to the left and still have all the information there, enabling easy adjustment in post processing.
Here's a very low-contrast view (i.e., a scene with a low ratio of subject luminances) and its histogram. You can see that all the values of the scene "fit" within the DR of the camera sensor. In fact, I have room on both the left and the right, so I could have given more exposure or less exposure and still captured all the information in the scene. (That's called "exposure latitude.")
Now, typically, as long as the subject brightness range is equal to or less than the dynamic range of the sensor, you don't have problems. It's when the SBR exceeds the DR that you run into problems. That's when you've got a histogram that jams up against both the left and the right sides.
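That "jammed against both sides" condition can be spotted directly from histogram data. A minimal sketch, assuming a 256-bin luminance histogram; the 0.1% pile-up threshold is an arbitrary choice for illustration, not a standard:

```python
def clipped_ends(histogram, threshold=0.001):
    """Report (shadows_clipped, highlights_clipped) for a luminance
    histogram whose first bin is pure black and last bin pure white.
    A pile-up in an end bin larger than `threshold` of all pixels
    suggests the scene's SBR exceeded the sensor's DR at that end."""
    total = sum(histogram)
    return (histogram[0] / total > threshold,
            histogram[-1] / total > threshold)

# A histogram jammed against both edges signals SBR > DR:
jammed = [5000] + [10] * 254 + [4000]
print(clipped_ends(jammed))  # → (True, True)
```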
Here's an extremely high contrast view (a scene with a high ratio of subject luminances). This greatly exceeds the capture device's dynamic range, meaning that you can't get adequate detail in the shadows and the highlights simultaneously.
Those are both very extreme cases, of course.
In this pedestrian but more typical situation, the camera is confronting a scene with an SBR that it simply doesn't have enough dynamic range to cope with. So the auto-exposure has done what good AE is supposed to do: it splits the difference and hopes for the best! In this case, it's given a visually realistic value to the shadowed area of the building exterior because that's what takes up most of the central part of the frame. But the highlights are much too light and you can't see much inside the open doorway at all.
You have several ways of dealing with the issue when you're on the scene making the photograph: you can expose more, to get more shadow detail, like so:
This gives you a decent amount of shadow detail through the open doorway, but at a rather heavy cost—you lose half the picture to overexposure! Even the shadowed half of the exterior is much too light.
Or you can expose less to get more highlight detail—that is, expose the brightly-lit parts of the picture properly—and let the shadows fall where they may, like so:
In this case, not only do you lose any detail inside the open doorway, but the shadowed area of the building is much too dark—far darker than the subjective impression the eye might have of open shade.
So in this case, with this scene and this camera in this particular mode (ISO 200 JPEG), you really don't have enough dynamic range to record even just the shadowed and sunlit portions of the outside of the building—never mind the interior of the garage.
Don't get me wrong—dynamic range isn't just a problem with digital. Photographers have been struggling with these issues since Talbot pointed his camera at the sunlit wall of his manor house. It's just that digital is more frustrating because it has less DR than many commonly available films did—and many films had too little, too.
Any technical property can be exploited...
I should mention here, too, that like any other technical property of photography, low DR can be accommodated aesthetically and used to good artistic effect.
Magnum photographer Alex Webb, for one, often photographs in bright tropical locales with transparency (a.k.a. slide or chrome) film that probably has no more than five stops of range. But his pictures don't typically have dynamic range problems per se. Or, I should say, if they do have such problems as a disinterested technical matter, they don't as an artistic matter. Many chrome shooters traditionally had no choice but to let all the low values go to solid black—so they learned to accommodate to this and use the black shapes as graphic elements. An example is the shot below by David Alan Harvey, from the book Divided Soul.
This shows even less DR than the last (lowest) garage shot. But the photographer has anticipated this—he knows what his materials are going to do—and he's used it to good effect.
At the other extreme, virtuoso black-and-white large-format photographers such as Ray McSavaney can create gorgeously nuanced prints from subjects with SBRs of as much as 20 stops. Don't try that at home, grasshoppahs.
Output
A big source of confusion is the range of the display media, whether it's printing paper or a monitor or anything else. You'll constantly come across people saying that since a certain range is all you can display, then that's all the DR you can have, or can use, or whatever. Not so. Any subject brightness range can potentially be represented accurately and proportionately within a given display range—as long as you captured the brightness levels of the subject correctly relative to each other in the first place.
Where the display's range of values comes into play is in its representation of relative values in smaller areas within the image, also called "local" contrast. On a piece of paper—let's just deal with that for the moment, for simplicity's sake—you can record a range of tones from the "Dmax" (maximum black) to "paper white"—and no more. Everything has to be represented within that scale. As a general rule, paper has a much more limited range of brightnesses than most scenes do.

And if your scene had, say, four or five more stops of range than your film or sensor can record, what's your "capture system" going to do with that extra information? Basically, it "dumps" successively more information into nearly featureless black or nearly featureless white (as demonstrated in the garage shots above). But what that does to the information that's left—the information that it does record, typically at only one end or in the middle of the scale—is that it increases its contrast, so that it looks more vivid.

Conversely, when you take a scene with a high SBR, record it with a device with a suitably high DR, and compress it into your Dmax-to-paper-white display range, you can show all the tones relative to each other with accuracy, but your local contrast will diminish.
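That compression trade-off can be put in numbers. If tones are remapped along a straight line in log (stop) space, the slope of that line is the local-contrast multiplier. A sketch, with made-up stop counts for scene and paper:

```python
def contrast_scale(scene_stops, display_stops):
    """Slope of a straight-line map from scene stops to display
    stops. Below 1.0, every local tonal difference is reproduced at
    reduced contrast; above 1.0, local contrast increases."""
    return display_stops / scene_stops

# Squeezing a 12-stop scene into roughly 6 stops of paper range
# halves the contrast of every local tonal step:
print(contrast_scale(12, 6))  # → 0.5
# A flat 3-stop scene printed on the same paper gains contrast:
print(contrast_scale(3, 6))   # → 2.0
```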
In order to "fit" high-SBR scenes into the image file, you have to have a capture device with high dynamic range. But the greater the SBR you capture, the lower the local contrast will be in any given part of the image once you try to match it to the limited range of the display. Again, that's because you're only recording relative values. And it's why some people can look at an illustration like Fuji's F200 high-DR-mode simulation (shown below again for reference) and actually prefer the left-hand version; it's because it has higher local contrast, even though it has less information in both the highlights and the shadows. (Look at the lit side of the frontmost whole column facing us, for example. See how the contrast of the bricks or stones is higher in the picture on the left that has the lower DR?)
So why, then, if people like higher contrast in the midtones, do photographers want devices with greater dynamic range?
The answer comes down to two things. The first is options. Creative options. Having more information in the file to start with simply gives you more creative and interpretive options for the end result. In the Fuji example, if you start with a file that looks like the one on the right, you can create the picture on the left. But if you start with a file that looks like the one on the left, you can't create the picture on the right. (You can try, using things like Shadow/Highlight in Photoshop and the Recovery slider in ACR or Lightroom. But, as most of us know all too well, those are of limited value if your SBR has exceeded the DR of your camera; you can't recover information that isn't in the file to begin with.)
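One standard way to create the contrastier rendering from the longer-range file is an S-shaped tone curve: it raises midtone contrast while compressing, rather than clipping, the ends of the scale. A minimal sketch; this particular sine-based curve is just one illustrative choice, not any product's actual algorithm:

```python
import math

def s_curve(x, strength=0.5):
    """S-shaped tone curve on [0, 1]: slope above 1 in the midtones
    (more local contrast), slope below 1 near the endpoints, so
    highlights and shadows are compressed rather than clipped."""
    return x - strength * math.sin(2 * math.pi * x) / (2 * math.pi)

# The endpoints stay pinned while the midtones are pushed apart:
print(round(s_curve(0.0), 3), round(s_curve(0.4), 3),
      round(s_curve(0.6), 3), round(s_curve(1.0), 3))
```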
The second reason is that for those who love photography because of its power to show what the world looks like, adequate DR is a critical tool in the service of realism.
State of the art
Many DSLRs have good but not great dynamic range, and so far the improvements have tended to be slow and incremental. The Sony A900 has limited high-ISO ability, but the very best dynamic range of any digital camera I've used yet. But even the mighty A900 is not perfect. (Although I think it could easily handle the garage test shot scene above without convoluted post-processing antics. At least the exterior.)
The Fuji F200 EXR sensor is interesting simply because Fuji seems to be the only company actively addressing the DR problem with sensor architecture (or at least, it's the only one telling us about it). The company made an earlier try with the Super CCD SR sensor in the S3 and S5, with marked but mixed success. The EXR sensor is another avenue of approach to the same goal. The F200, as the first EXR-sensor camera, will be the first implementation of Fuji's latest ideas on the subject, and as such will be interesting to investigate.
Remember, though, with a camera that has high dynamic range, you don't necessarily have to accept the full-range result—just as you don't have to "overcook" an HDR image that merges two separate exposures until it looks completely unrealistic. When you don't have adequate dynamic range to begin with, though, you're out of luck. From a single file, you can always get less; you just can't always get more.
Mike
Oren Grad contributed to this article and Ctein and Carl Weese read early drafts and helped with useful suggestions. Thanks to all three.
Postscript: There are a fairly large number of issues related to this subject. For instance, how to choose a camera with good DR; how to get better DR from the camera; "fixing" pictures using software; ways of reducing the contrast in the scene to match the sensor you're using, for instance with lighting; the effects of using raw capture; cases in which you want and expect to lose highlight or shadow detail; techniques to restore mid-tone contrast to a high-DR image; selective area contrast enhancement; multi-exposure HDR; and so on. In this particular essay, which is pretty long for a TOP post already, I didn't want to stray too far from the basics—can't write a textbook in a single post. However, I get the feeling this is a general topic we might well revisit. —MJ.
"It's just that digital is more frustrating because it has less DR than many commonly available films did—and many films had too little, too."
I will probably be the tenth to make the remark, but it may be of interest to state that a raw capture at base ISO followed by an adapted treatment (something easier to write than to do in some cases) gives a few stops (I'd say 2 to 4, depending on the acceptable noise) more DR than a JPEG capture.
Posted by: Nicolas | Tuesday, 17 February 2009 at 01:55 AM
If you look at Fuji's F200 high-DR samples, what you may want is the left photograph, but with clean shadows.
With low-DR cameras you get terrible shadows, so even if you like the left-hand look, you need a lot more DR to get it clean.
For my taste I like the left one, with a little bit more detail in the shadows, but clean.
thanks
Posted by: erick | Tuesday, 17 February 2009 at 02:57 AM
I think it's worth mentioning that the reason we've got the "not-very-good" term "Dynamic Range" is because the term was coined, not for photography, but for music and audio, where it describes the range between the softest and loudest parts in a piece, or the range between softest and loudest sounds a system can clearly reproduce -- the modulation of loudness in music being referred to as "dynamics".
Posted by: Andrew Rodland | Tuesday, 17 February 2009 at 04:13 AM
Thanks, Mike, for an interesting essay.
I find myself routinely increasing local contrast over the entire image as a first step in Photoshop (I run an initialisation script as soon as the file is loaded). These days, I really don't like the way things look without it! The histogram is expanded laterally a tiny bit, and the image looks a whole lot clearer.
Posted by: tim-j | Tuesday, 17 February 2009 at 04:54 AM
First time I ever pressed the "Shadow/Highlights" button in CS, I gasped! The image opened up so much, and now there aren't many shots it's not used on in my real estate work. Must resist temptation to overdo it, however. Great dissertation on HDR—thanks Mike, Ctein and Carl
Posted by: Bruce | Tuesday, 17 February 2009 at 05:27 AM
And Oren, sorry!
Posted by: Bruce | Tuesday, 17 February 2009 at 05:29 AM
Excellent explanation! I can tell that I am going to be linking to this article often as I try to explain dynamic range and histograms to my friends and family.
The one thing that is missing is that even if all the information fits within the dynamic range "limits" of a sensor, not all parts of that range are recorded equally--the sensor has an exposure curve. Today's sensors are able to extract much more information out of the highlight end (the right-most quadrant represented in the histogram) than the shadow end of the range leading to the advice to "expose to the right" of the histogram range without bumping into the right edge of the histogram box (which would clip the highlights) in order to have the most data available for post processing. That is, unless you decide to allow some of the highlights to blow out because that is how you want the photograph to look, an artistic decision.
I suppose that this is another kind of luminance "mapping" in which you are trying to map the tones to the part of the sensor that records the most information (to the right of the histogram display) without pushing them so far right that you lose them entirely (again, unless you want to blow highlights for aesthetic reasons) so that the sensor captures as much information as possible. We then can "re-map" the tones to the display media based upon our intentions. The need to "expose to the right" is likely to change as sensor designs evolve just as exposure decisions were modified as films with different exposure curves were produced. What I have just described is based upon the typical exposure characteristics of the "recording media" (sensors) now in use.
Posted by: Steve Rosenblum | Tuesday, 17 February 2009 at 05:39 AM
Thanks, Mike. In spite of knowing the basics already I savoured especially that small excursion about the creative use of those limits - something to be investigated further. Earlier publications about printing from slides, especially on Cibachrome, mainly dealt with mitigation of that effect, but I don't remember anything written about its possibilities.
Posted by: Markus Spring | Tuesday, 17 February 2009 at 06:29 AM
Excellent!!! Thank you to Oren, Ctein, Carl and Mike of course! I'll share this article directly.
It explains my choices depending on the subject: do I use Portra, Reala, the new Ektar 100, Velvia, Provia, the Canon G7 or the K20D? It all depends on the subject!!
Posted by: Mine Nicolas | Tuesday, 17 February 2009 at 06:29 AM
Thanks, y'all!
I have read way too many times that the human eye can handle a much larger dynamic range than a camera sensor can. It's just not true! Our irises adjust to handle bright or dark, but NOT all at once. The world really does look like the sample from David Alan Harvey.
I think that's why 99% of the so-called HDR shots I see look so horribly lifeless and flat. Their DR isn't High at all; it's smashed flat.
Since it isn't likely we'll see a printing process with greater DR any time soon, I'm still hoping for a Fine Art Monitor.
Posted by: Luke | Tuesday, 17 February 2009 at 06:49 AM
I am not so sure that the picture at the right (Fujifilm samples) can be converted into the one at the left. You will be stretching mid-tones, and gaps might occur in the histogram, resulting in less smooth transitions. A high-DR sensor is good for capturing high-DR scenes but not for low-DR scenes. Fujifilm's solution, where you can choose between high and low DR, is very interesting.
Posted by: mp | Tuesday, 17 February 2009 at 08:10 AM
I read comments from folks asking for cleaner ISOs, more frames-per-second, and so on...but after graduating from point 'n shoots to DSLRs, I find that almost everything that ruins a picture is either my fault or exposures exceeding the sensor's limit (which can also be my fault!). Blowing highlights - especially on skin - is becoming my primary enemy.
The new Fuji sounds like a great step forward, if for no other reason than to offer a little more flexibility for the photographer.
Posted by: Charles Hueter | Tuesday, 17 February 2009 at 08:28 AM
Even with current standard sensor technology, there are still unexplored dynamic range territories where in-camera processing could take us. We have cameras that can make billions of calculations in an instant to correct exposure, and which take only a few instants more to do miracles like face or even smile detection.
And while we do have very good algorithms already in many cams to increase dynamic range (D-lighting, for example) I'll bet we could all come up with others.
If a $200.00 camera can detect faces, why can't it detect sky blue and assume we want to actually see sky blue, not a blown-out sky? Why not measure the actual DR of a scene, and automatically take two exposures for an instant in-camera HDR?
The Canon G10 has a setting for an internal, electronic up-to-three-stop neutral density filter so one can slow down shutter speed. How cool would it be if the camera could map out the over-exposed pixels in a scene and apply a graduated ND effect (lower the pixel sensitivity) to just those pixels?
I agree with Mike that the new Fuji camera is really interesting. I admit I have a soft spot for cameras from that perplexing little company, as my first camera was a little 2 MP Fuji 2800.
Fuji has a real knack for white balance and color rendition. They also have a maddening propensity for crippling their efforts with poor feature inclusion or implementation. For example, Fuji has marketed increased DR compact cams before, and they really worked. But they came crippled out-of-box with software that only accessed a small portion of the sensor's available DR range! Folks would hack their software to increase the range of the DR sliders, and were able to produce really impressive images.
It makes me happy to see Fuji *finally* putting IS into their compact cams after all these years, and thrilled that this new sensor is generating interest.
Thank you, Mike, for writing about this interesting camera and the enigmatic company which manufactures it. :)
Posted by: Gingerbaker | Tuesday, 17 February 2009 at 08:28 AM
Great article, Mike.
I've tried to train myself to think about DR as I shoot, so that I sense through the viewfinder when the range is just too wide. Then I can either re-frame the shot or abandon the idea altogether.
I recently shot some snowboarders on a bright afternoon. I was expecting deep shadows and blasted-out highlights. Instead, because there was so much sunlight being reflected back off the snow, my DR problems were solved for me.
It was like I had brought a lighting crew along.
Posted by: mikeinmagog | Tuesday, 17 February 2009 at 09:01 AM
Good post Mike.
I agree with Steve Rosenblum on his "expose to the right" point. It's the single most important thing I learned in using my camera in 2008. Because there is so much more information in the right side, you get much smoother tonal transitions. In the case of your low-contrast example picture I would have exposed so that the bumps in the histogram would stop just short of the right side. Michael Reichmann explains it much better than I can. Typepad doesn't let me add a link, but if you search on the Luminous Landscape site for "expose right" it will be the first hit.
Best, Nick
Posted by: Nick | Tuesday, 17 February 2009 at 09:16 AM
Also, note that Dynamic Range depends on the ratio of the largest signal to the smallest non-noise level. So, noisy small sensors tend to have lower dynamic range than large sensors.
Posted by: Chmoss | Tuesday, 17 February 2009 at 09:18 AM
"as long as you captured the brightness levels of the subject correctly relative to each other in the first place."
This has relevance to painting; John Singer Sargent was a master at this, which is why one of his contemporaries, William Merritt Chase, sometimes fell flat when painting outdoors.
Good article!
Bron
Posted by: Bron Janulis | Tuesday, 17 February 2009 at 10:18 AM
What a great explanation of a complex concept. Thanks!
Posted by: John Sartin | Tuesday, 17 February 2009 at 10:25 AM
Thanks Michael for your post. I have a couple of comments on your post, though.
Let's assume we have a digital camera. And let me remind you that a histogram of the subject is conceptual; that is, we imagine the subject discretized into small squares (matching the pixels of the image), each with an associated luminance value.
We don't know (through measurement) the histogram of the subject; the only thing we know is the histogram of the output of the camera. But we can "see" the subject histogram: it is the impression we get when looking directly at the subject.
In the case where the SBR fits within the DR of the sensor, we shall get a realistic image of the subject if the image histogram has the same range (say 1:32) as the subject histogram. If the response curve of the sensor is steeper (less dynamic range) we'll get a higher range (a more contrasted image), and if the response curve is flatter we'll get a less contrasted image.
And this has nothing to do with the display device. We can think of the output of the camera as a voltage (say 1V, after amplification). 1V is the saturation voltage and 0V corresponds to pure black. Of course, on top of this comes the display device.
The output voltage is to be discretized into (let's assume 8 bits) 256 levels. If we have a picture in which the mid-tones span 64 levels, we can make them span 128 levels, but we do not get more information by doing that. The extra levels are either unused or guessed. That's why I disagree with your statement regarding the Fujifilm camera, "if you start with a file that looks like the one on the right, you can create the picture on the left." As far as the mid-tones are concerned, you'll not get the same level of discrimination, hence the same information, as in the left image.
In the case of the SBR exceeding the DR of the sensor, I think we are trying to do what the human eye and mind are not capable of. Taking your photo of the garage as an example, I really doubt that someone could see what is inside the garage without protecting his, or her, eyes from the harsh highlights. But that means getting two "human views" of the scene, one of the highlights only and the other of the garage only. To have a photographic device getting the two views in one is more than the human eye can do. Incidentally, I guess that the human eye is more like a center-weighted device than a spot one.
In the Fujifilm images, the one on the left is the one I reckon matches the impression we would get at the spot, a scene under harsh sunshine light. Of course the left image has no information on the highlights or on the shadows.
Posted by: António Pires | Tuesday, 17 February 2009 at 11:15 AM
Dear Folks,
It is possible to overthink this matter.
I can easily give the longer range Fuji photograph the same midtone contrast as the shorter range one by applying an S-shaped characteristic curve to it. Making the characteristic curve more S-shaped is the standard way to get more midtone contrast without clipping the highlights or shadows from a longer subject luminance range scene. One can also do local dodging and burning-in, (and routinely would in fine darkroom printing).
The difference between the two Fuji photos will then be that the short range photo has no detail in the highlights or shadows, while the long range photo will, albeit at lowered contrast. That is not a trivial difference; it's one of the key things we try to teach people in fine B&W printing, which is how to achieve artistically appropriate midtone contrast without blocking up highlights or shadows.
pax / Ctein
==========================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
==========================================
Posted by: Ctein | Tuesday, 17 February 2009 at 12:41 PM
Dear Steve,
In my judgment, far too much is made of this. While it is technically true that more data is collected at the high end of the tonal scale than below, this rarely translates into visual information. It's an issue that exists more in theory than in practice.
More importantly, the really big difference you see in digital cameras' rendition of different tones is that the signal-to-noise ratio goes down as the tone gets darker. If you "under"-expose one digital photo by two stops relative to another and rebalance them in Photoshop so that your middle grades look the same in both, what will not jump out at you is any lack of data in the darker tones of the former photograph; it will be the overall noisier quality of the photograph. That's not about how much data was collected; it's entirely about the signal-to-noise ratio.
(And to forestall a nitpicking question from some reader, yes, if you hold noise constant and drop the amount of data collected, the signal-to-noise ratio degrades... but noise isn't a constant; it's also a function of exposure level. The important metric is signal to noise ratio, not signal level nor noise level individually.)
~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
======================================
Posted by: Ctein | Tuesday, 17 February 2009 at 12:49 PM
Steve Rosenblum - Two things :
1) Sensors are linear devices, and as such don't apply any curves to the data. More light = a higher value in a linear fashion. Any curves applied are done in software (or firmware in the camera).
2) The Expose To The Right (ETTR) philosophy isn't really about capturing the "most data". Sure, there are more levels toward the highlight end, but that's not the real advantage. The advantage is that you are over-exposing the shadows. This serves to increase the signal-to-noise ratio of the sensor, which gives you cleaner, smoother shadows. Then when you correct the exposure in software, you get a very clean file, because you've maximized signal-to-noise.
There is software out there that will combine two images - one exposed normally, and one at 3-4 stops overexposed (WAY to the right). The shadow areas are taken from the overexposed image after the exposure is corrected. This yields very clean files.
Posted by: David Bostedo | Tuesday, 17 February 2009 at 01:00 PM
Interesting article. I read someplace else that viewing conditions also affect dynamic range -- that is, you will see less or more of it depending on the quality of the light falling on a print.
However, there seems to be a lot of argument about the DR of film vs. digital. Here's one site that will tell you that high-end digital is better than film:
www.clarkvision.com/imagedetail/dynamicrange2/index.html
Being an intellectual, rather than somebody who actually knows anything, I have no way of really judging the quality of the clarkvision arguments. 8-)
JC
Posted by: John Camp | Tuesday, 17 February 2009 at 01:36 PM
Thanks Mike - A very clear summary in relatively few words. As someone who is frequently captivated by scenes with high SBR I've done a fair amount of experimenting with HDR. Another thing that can make HDR images look unrealistic is that loss of local detail that goes along with capturing the overall brightness range. I'm thinking this is analogous to your description of what happens when you compress the information from a high brightness range capture into the dmax to paper-white range of the output device be it monitor or print.
Posted by: John | Tuesday, 17 February 2009 at 02:25 PM
1) Comparing the stops of DR between film and digital is not fair. Nearly every slide film (and negative film) has a smooth, pleasing rolloff at the high end of its range. This does not exist in most digital cameras. Even though a consumer digicam may capture 5 stops, images will show abrupt, displeasing clipping when you exceed the DR at the high end. I find that I need almost another 2 stops to create a smooth, film-like highlight rolloff in digital (am I the only one with this problem?). (Would Fuji Velvia 50 be an example of "clipping" at the low end?)
2) MP, it is possible to make a high DR camera with smoother tonality across its range than a lesser DR camera. The Fuji S3 and S5 cameras are great examples (when compared to "most" pre-2008 DX cameras).
Posted by: Jeff Hartge | Tuesday, 17 February 2009 at 02:36 PM
Not sure if I understand this correctly, but suppose you have two cameras capturing 14-bit data. One spreads 12 stops over the 14 bits (high-DR camera), the other only 6 stops (low-DR camera). Suppose you capture a scene with 6 stops of contrast with both cameras. It seems to me that the latter will record finer transitions than the high-DR camera, because it uses the complete "data range" where the other doesn't. Did I miss something?
Posted by: mp | Tuesday, 17 February 2009 at 03:10 PM
Jeff, that's a good point, and it's why the A900 is coming up so much in DR conversations. It has a wonderful highlight rolloff.
Posted by: Douglas | Tuesday, 17 February 2009 at 03:18 PM
Luke said
"I have read way too many times that the human eye can handle a much larger dynamic range than a camera sensor can. It's just not true! Our irises adjust to handle bright or dark, but NOT all at once. "
AFAIK, iris/pupil size is _not_ the only way that the eye adapts. Different parts of the retina exposed to different light levels can adjust their sensitivity independently of each other. So, one part exposed to bright light can reduce its sensitivity; another part exposed to dim light can independently increase its sensitivity. Pupil size is only one factor involved; it sets the maximum brightness.
This is why eyes are (currently) cleverer than film or digital - until Fuji gives each photosite the ability to adjust its sensitivity independently :-)
Posted by: Alan Rew | Tuesday, 17 February 2009 at 03:25 PM
Jeff Hartge,
That's a huge issue for me. Highlight clipping is a very big bugbear when you're coming from B&W film. It's probably *THE* thing I like least about digital.
Not only that, but rolloff in general is something I'm very sensitive to. I had to learn by a long and painful process that I prefer sealed-box (acoustic suspension) speakers because of their more gradual rolloff in the bass. Almost no one makes them any more. I could write a long essay on *that* topic.
As a B&W printer I almost always prefer FDP (film-developer-paper) combinations that had quite low highlight contrast (for those of you who are uninitiated, variable contrast B&W papers tend to have fixed highlight contrast; the shadows are varied with filters).
I suspect this is one reason I like the Sony A900 so much....
Mike
Posted by: Mike Johnston | Tuesday, 17 February 2009 at 04:09 PM
Dear mp,
What you missed is that within reason, you don't care.
You can't see 14 bits' worth of tonality. In a photograph viewed under truly optimal conditions, you can't see more than 250-300 different tonal levels (circa 8 bits). You need more bits than that because the eye doesn't map tones the same way the computer does, but 8 does a plausible job. All your output devices are 8-bit devices, unless you're buying very high-end I/O.
(an aside: for true tonal perfection, you need a coupla more bits than 8, but we're trying to be realistic here)
So, yeah, a 12-stop-range camera that's photographing a 6-stop-range scene only has 8 bits of tone depth instead of 14... but you can't see the difference.
pax / Ctein
Posted by: ctein | Tuesday, 17 February 2009 at 04:33 PM
Dear mp,
Gaps in histograms are far less of a problem in real-world photographs than people make them out to be. Modest contrast changes do not produce enough binning in the histogram to be particularly visible. The difference in overall contrast between the two Fuji photographs at the beginning of this article is roughly the difference between Grade 2 and Grade 4 paper. A contrast change of that amount does not produce binning that is worth worrying about.
In the larger scope of things, if you're really concerned with getting good and appropriate tonality in your photographs, you should be recording (and working on) your photographs in 16-bit mode. If you're not, you're handicapping yourself very, very badly. Wrestling with the subtleties of contrast and gradation while working in 8-bit mode is like running a marathon in flip-flops. I'm not saying you can't do it; I am saying you're making your life very much harder than it needs to be.
----------------------
Dear Ginger,
Amen! I have loved both of my Fuji cameras; I have hated the software that comes with them. I think I referred to it as an embarrassment in my reviews, and I'll stick with that assessment. Horribly slow, clunky, and when converting RAW files from my S100 it visibly and blatantly clips highlights and shadows.
In as much fairness as I can muster, I can't say that other camera manufacturers' software might not be just as bad. But I don't grade this stuff on a curve.
~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
======================================
Posted by: ctein | Tuesday, 17 February 2009 at 04:44 PM
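[Editorial note: the histogram "binning" Ctein mentions is easy to demonstrate. The sketch below applies a simple linear contrast boost, roughly in the spirit of a paper-grade change, to a run of 8-bit midtones; the gain and pivot values are arbitrary illustrative choices.]

```python
# Contrast stretch on 8-bit data: the output spans more levels than
# there are inputs, so some output bins end up empty ("comb" gaps).

def stretch_8bit(values, gain=1.5, pivot=128):
    """Linear contrast boost around `pivot`, rounded to 8-bit levels."""
    out = []
    for v in values:
        s = round(pivot + (v - pivot) * gain)
        out.append(max(0, min(255, s)))
    return out

midtones = list(range(100, 156))      # 56 consecutive input levels
boosted = stretch_8bit(midtones)
used = sorted(set(boosted))

# The 56 inputs are spread over a wider range of output values,
# leaving unoccupied levels between them in the histogram.
span = max(used) - min(used) + 1
print(len(used), "levels occupied out of a span of", span)
```

As Ctein says, whether those empty bins are ever *visible* in a print is a different question from whether they exist in the histogram; working in 16-bit simply gives the stretch far more levels to land on.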
Dear JC,
It was here that you read it and it was me that you read:
http://tinyurl.com/4ca9er
Must be true, then [grin].
The clarkvision tests are terrible from a scientific point of view. Very badly designed and the results don't come close to correctly portraying the inherent characteristics of the three media.
But they are GREAT real-world tests, in that they accurately reflect the results most photographers would get making photographs. Which is what matters in this discussion.
pax / Ctein
Posted by: ctein | Tuesday, 17 February 2009 at 04:56 PM
Mike et al -
Thanks for the most important article on craft to hit the photo web pages in a long time.
At last folks will realize they're shooting the digital version of black-and-white Kodachrome. And, hopefully, their pictures will be much better for it.
Bill
Posted by: Bill Pierce | Tuesday, 17 February 2009 at 05:19 PM
Bravo! Bravo!
What a lucid explanation.
It's time for us, the new generation of digital photographers, to get away from pixel peeping and instead to get a strong grasp of the basics underlying all photography. And it is high time we restored the level of dialogue to what it was in the old 'Camera & Darkroom' and 'Photo Techniques' days.
You deserve the gratitude of the online photographic community for the free, high quality public education. Yet again.
Posted by: Mani Sitaraman | Tuesday, 17 February 2009 at 10:18 PM
Oren, Ctein and Carl. Thanks and props to you guys too.
Posted by: Mani Sitaraman | Tuesday, 17 February 2009 at 10:21 PM
Ctein, thanks for the clarification. I was thinking theoretically and thought that the gaps in the histogram would become visible in the end result when increasing the contrast. Good to hear that this isn't the case. Anyway, the Fuji allows you to switch between high and normal DR, so this isn't an issue.
Posted by: mp | Wednesday, 18 February 2009 at 04:33 AM
I really liked this post, Mike, not because it contained anything I had not known before, but because it is so extremely well put. In other words, while I knew everything that you wrote about, I would not have been able to formulate it this way if asked to explain dynamic range to someone new to the subject. You seem to have found just the right words - all I can say is bravo!
Posted by: Zoltan | Wednesday, 18 February 2009 at 11:02 AM
Printers above 8 bits are relatively common these days, and not THAT expensive. All the recent Canon iPF series claim "16-bit data" but are actually 12-bit printers, and the newest Epsons are somewhere over 8 bits as well. Over 8 bits on a monitor is very expensive, but most newer inkjet printers (primarily 17 inches and up) are 12-bit devices.
-Dan
Posted by: Dan Wells | Thursday, 19 February 2009 at 09:39 PM
Mike -
Here's some info on a new camera that Ricoh is releasing that supposedly produces in-camera HDR (i.e., already tone-mapped, not 32-bit EXR or HDR files).
Take a look:
http://www.adorama.com/catalog.tpl?op=NewsDesk_Internal&article_num=021909-2
Seinberg
Posted by: Seinberg | Friday, 20 February 2009 at 12:10 PM