
Tuesday, 17 February 2009

Comments


" It's just that digital is more frustrating because it has less DR than many commonly available films did—and many films had too little, too. "

I will probably be the tenth to make this remark, but it may be of interest to note that a raw capture at base ISO, followed by an adapted treatment (something easier to write than to do in some cases), gives a few stops (I'd say 2 to 4, depending on the acceptable noise) more DR than a JPEG capture.

If you look at the Fuji F200's high-DR samples, what you want may be the left photograph, but with clean shadows.
With low-DR cameras you get terrible shadows, so even if you like the left-hand look you need a lot more DR to get it clean.
For my taste I like the left one, with a little more detail in the shadows, but clean.

thanks

I think it's worth mentioning that the reason we've got the "not-very-good" term "dynamic range" is that it was coined not for photography but for music and audio, where it describes the range between the softest and loudest parts of a piece, or the range between the softest and loudest sounds a system can clearly reproduce -- the modulation of loudness in music being referred to as "dynamics."

Thanks, Mike, for an interesting essay.

I find myself routinely increasing local contrast over the entire image as a first step in Photoshop (I run an initialisation script as soon as the file is loaded). These days, I really don't like the way things look without it! The histogram is expanded laterally a tiny bit, and the image looks a whole lot clearer.

The first time I ever pressed the "Shadow/Highlights" button in CS, I gasped! The image opened up so much, and now there aren't many shots it isn't used on in my real estate work. Must resist the temptation to overdo it, however. Great dissertation on HDR; thanks Mike, Ctein and Carl.

And Oren, sorry!

Excellent explanation! I can tell that I am going to be linking to this article often as I try to explain dynamic range and histograms to my friends and family.

The one thing that is missing is that even if all the information fits within the dynamic range "limits" of a sensor, not all parts of that range are recorded equally--the sensor has an exposure curve. Today's sensors are able to extract much more information from the highlight end of the range (the right-most quadrant of the histogram) than from the shadow end. That leads to the advice to "expose to the right" of the histogram range without bumping into the right edge (which would clip the highlights), in order to have the most data available for post-processing. That is, unless you decide to allow some of the highlights to blow out because that is how you want the photograph to look--an artistic decision.

I suppose that this is another kind of luminance "mapping," in which you are trying to map the tones to the part of the sensor's range that records the most information (to the right of the histogram display) without pushing them so far right that you lose them entirely (again, unless you want to blow highlights for aesthetic reasons), so that the sensor captures as much information as possible. We can then "re-map" the tones to the display medium based upon our intentions. The need to "expose to the right" is likely to change as sensor designs evolve, just as exposure decisions were modified as films with different exposure curves were produced. What I have just described is based upon the typical exposure characteristics of the "recording media" (sensors) now in use.
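
To put rough numbers on "expose to the right": a minimal sketch, assuming a linear 12-bit raw file (the figures are illustrative, not measurements from any particular camera). With a linear sensor, each stop below saturation is recorded with half as many raw levels as the stop above it.

bit_depth = 12                 # assume a 12-bit linear raw file
total_levels = 2 ** bit_depth  # 4096 discrete values

for stop in range(1, 7):
    top = total_levels // 2 ** (stop - 1)
    bottom = total_levels // 2 ** stop
    print(f"stop {stop} below saturation: {top - bottom} raw levels")
# stop 1 below saturation: 2048 raw levels ... stop 6 below saturation: 64 raw levels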

Thanks, Mike. In spite of knowing the basics already, I especially savoured that small excursion about the creative use of those limits - something to be investigated further. Earlier publications about printing from slides, especially on Cibachrome, mainly dealt with mitigating that effect, but I don't remember anything written about its possibilities.

Excellent!!! Thank you to Oren, Ctein, Carl and, of course, Mike! I'll share this article right away.
It neatly explains the choices I make depending on the subject: do I use Portra, Reala, the new Ektar 100, Velvia, Provia, the Canon G7 or the K20D?!

Thanks, y'all!

I have read way too many times that the human eye can handle a much larger dynamic range than a camera sensor can. It's just not true! Our irises adjust to handle bright or dark, but NOT all at once. The world really does look like the sample from David Alan Harvey.

I think that's why 99% of the so-called HDR shots I see look so horribly lifeless and flat. Their DR isn't High at all; it's smashed flat.

Since it isn't likely we'll see a printing process with greater DR any time soon, I'm still hoping for a Fine Art Monitor.

I am not so sure the picture at the right (Fujifilm samples) can be converted into the one at the left. You would be stretching mid-tones, and gaps might occur in the histogram, resulting in less smooth transitions. A high-DR sensor is good for capturing high-DR scenes but not for low-DR scenes. Fujifilm's solution, where you can choose between high and low DR, is very interesting.

I read comments from folks asking for cleaner ISOs, more frames-per-second, and so on...but after graduating from point 'n shoots to DSLRs, I find that almost everything that ruins a picture is either my fault or exposures exceeding the sensor's limit (which can also be my fault!). Blowing highlights - especially on skin - is becoming my primary enemy.

The new Fuji sounds like a great step forward, if for no other reason than to offer a little more flexibility for the photographer.

Even with current standard sensor technology, there are still unexplored dynamic range territories where in-camera processing could take us. We have cameras that can make billions of calculations in an instant to correct exposure, and which take only a few instants more to do miracles like face or even smile detection.

And while we already have very good algorithms in many cams to increase dynamic range (D-Lighting, for example), I'll bet we could all come up with others.

If a $200.00 camera can detect faces, why can't it detect sky blue and assume we want to actually see sky blue, not a blown-out sky? Why not measure the actual DR of a scene, and automatically take two exposures for an instant in-camera HDR?

The Canon G10 has a setting for an internal, electronic up-to-three-stop neutral density filter so one can slow down the shutter speed. How cool would it be if the camera could map out the over-exposed pixels in a scene and apply a graduated ND effect (lower the pixel sensitivity) to just those pixels?

I agree with Mike that the new Fuji camera is really interesting. I admit I have a soft spot for cameras from that perplexing little company, as my first camera was a little 2 MP Fuji 2800.

Fuji has a real knack for white balance and color rendition. They also have a maddening propensity for crippling their efforts with poor feature inclusion or implementation. For example, Fuji has marketed increased DR compact cams before, and they really worked. But they came crippled out-of-box with software that only accessed a small portion of the sensor's available DR range! Folks would hack their software to increase the range of the DR sliders, and were able to produce really impressive images.

It makes me happy to see Fuji *finally* putting IS into their compact cams after all these years, and thrilled that this new sensor is generating interest.

Thank you, Mike, for writing about this interesting camera and the enigmatic company which manufactures it. :)

Great article, Mike.

I've tried to train myself to think about DR as I shoot, so that I sense through the viewfinder when the range is just too wide. Then I can either re-frame the shot or abandon the idea altogether.

I recently shot some snowboarders on a bright afternoon. I was expecting deep shadows and blasted-out highlights. Instead, because there was so much sunlight being reflected back off the snow, my DR problems were solved for me.

It was like I had brought a lighting crew along.

Good post Mike.
I agree with Steve Rosenblum on his "expose to the right" point. It's the single most important thing I learned about using my camera in 2008. Because there is so much more information in the right side, you get much smoother tonal transitions. In the case of your low-contrast example picture, I would have exposed so that the bumps in the histogram stopped just short of the right side. Michael Reichmann explains it much better than I can. Typepad doesn't let me add a link, but if you search on the Luminous Landscape site for "expose right" it will be the first hit.
Best, Nick

Also, note that Dynamic Range depends on the ratio of the largest signal to the smallest non-noise level. So, noisy small sensors tend to have lower dynamic range than large sensors.
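
To put a number on that: engineering dynamic range is the ratio of the largest recordable signal (full-well capacity) to the noise floor (read noise), expressed in stops as a base-2 logarithm. A quick sketch with made-up but plausible values:

import math

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    # DR in stops = log2(largest signal / smallest signal distinguishable from noise)
    return math.log2(full_well_electrons / read_noise_electrons)

print(round(dynamic_range_stops(60_000, 5), 1))   # large, clean pixel: ~13.6 stops
print(round(dynamic_range_stops(8_000, 8), 1))    # small, noisy pixel: ~10.0 stops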

"as long as you captured the brightness levels of the subject correctly relative to each other in the first place."

This has relevance to painting: John Singer Sargent was a master at this, which is why one of his contemporaries, William Merritt Chase, sometimes fell flat when painting outdoors.

Good article!

Bron

What a great explanation of a complex concept. Thanks!

Thanks, Michael, for your post. I have a couple of comments on it, though.
Let's assume we have a digital camera. And let me note that a histogram of the subject is conceptual; that is, we imagine the subject discretized into small squares (matching the pixels of the image), each with an associated luminance value.
We don't know (through measurement) the histogram of the subject; the only thing we know is the histogram of the camera's output. But we can "see" the subject histogram: it is the impression we get when looking directly at the subject.
In the case where the SBR fits within the DR of the sensor, we shall get a realistic image of the subject if the image histogram has the same range (say 1:32) as the subject histogram. If the response curve of the sensor is steeper (less dynamic range) we'll get a higher range (a more contrasted image), and if the response curve is flatter we'll get a less contrasted image.
And this has nothing to do with the display device. We can think of the camera's output as a voltage (say 1 V, after amplification): 1 V is the saturation voltage and 0 V corresponds to pure black. Of course, on top of this comes the display device.
The output voltage is then discretized into (let's assume 8 bits) 256 levels. If we have a picture in which the mid-tones span 64 levels, we can make them span 128 levels, but we do not get more information by doing that. The extra levels are either unused or guessed. That's why I disagree with your statement regarding the Fujifilm camera, "if you start with a file that looks like the one on the right, you can create the picture on the left." As far as the mid-tones are concerned, you will not get the same level of discrimination, and hence the same information, as in the left image.
In the case of the SBR exceeding the DR of the sensor, I think we are trying to do what the human eye and mind are not capable of. Taking your photo of the garage as an example, I really doubt that someone could see what is inside the garage without shielding his, or her, eyes from the harsh highlights. But that means getting two "human views" of the scene, one of the highlights only and the other of the garage only. To have a photographic device getting the two views in one is more than the human eye can do. Incidentally, I guess that the human eye is more like a center-weighted device than a spot one.
In the Fujifilm images, the one on the left is the one I reckon matches the impression we would get on the spot: a scene under harsh sunlight. Of course, the left image has no information in the highlights or the shadows.

Dear Folks,

It is possible to overthink this matter.

I can easily give the longer-range Fuji photograph the same midtone contrast as the shorter-range one by applying an S-shaped characteristic curve to it. Making the characteristic curve more S-shaped is the standard way to get more midtone contrast without clipping the highlights or shadows from a longer subject-luminance-range scene. One can also do local dodging and burning-in (and routinely would in fine darkroom printing).

The difference between the two Fuji photos will then be that the short range photo has no detail in the highlights or shadows, while the long range photo will, albeit at lowered contrast. That is not a trivial difference; it's one of the key things we try to teach people in fine B&W printing, which is how to achieve artistically appropriate midtone contrast without blocking up highlights or shadows.

pax / Ctein
==========================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
==========================================
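
For readers who want to see what such an S-shaped curve does, here is a minimal sketch in Python/NumPy. The sigmoid shape and the strength value are illustrative assumptions, not Ctein's actual recipe; the point is only that the midtone slope rises while the endpoints stay pinned, so nothing clips.

import numpy as np

def s_curve(x, strength=6.0):
    """Map normalized tones 0..1 through an S-shaped curve.

    strength sets how steep the midtone section is; the endpoints are
    renormalized to stay at 0 and 1, so highlights and shadows are
    compressed rather than clipped.
    """
    s = 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))
    lo = 1.0 / (1.0 + np.exp(strength * 0.5))
    hi = 1.0 / (1.0 + np.exp(-strength * 0.5))
    return (s - lo) / (hi - lo)

tones = np.linspace(0.0, 1.0, 11)
print(np.round(s_curve(tones), 3))   # steeper through the middle, gentle at the ends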

Dear Steve,

In my judgment, far too much is made of this. While it is technically true that more data is collected at the high end of the tonal scale than below, this rarely translates into visual information. It's an issue that exists more in theory than in practice.

More importantly, the really big difference you see in digital cameras' rendition of different tones is that the signal-to-noise ratio goes down as the tone gets darker. If you "under"-expose one digital photo by two stops relative to another and rebalance them in Photoshop so that your middle grays look the same in both, what will jump out at you is not any lack of data in the darker tones of the former photograph; it will be the overall noisier quality of that photograph. That's not about how much data was collected; it's entirely about the signal-to-noise ratio.

(And to forestall a nitpicking question from some reader: yes, if you hold noise constant and drop the amount of data collected, the signal-to-noise ratio degrades... but noise isn't a constant; it's also a function of exposure level. The important metric is the signal-to-noise ratio, not the signal level or the noise level individually.)


~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================
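
The signal-to-noise point is easy to simulate. A toy sketch (the photon counts are my own assumptions, and only photon shot noise is modeled): two captures of the same midtone patch, one given two stops less light, then rebalanced to match. Because shot noise follows a Poisson distribution, SNR scales with the square root of the signal, and multiplying the darker frame back up does not recover it.

import numpy as np

rng = np.random.default_rng(0)
pixels = 100_000

well_exposed = rng.poisson(lam=4000, size=pixels)    # ~4000 photons per pixel
under_exposed = rng.poisson(lam=1000, size=pixels)   # two stops less light
rebalanced = under_exposed * 4.0                     # "fix it in Photoshop"

for name, frame in [("well exposed", well_exposed), ("under, rebalanced", rebalanced)]:
    snr = frame.mean() / frame.std()
    print(f"{name}: mean = {frame.mean():.0f}, SNR = {snr:.1f}")
# well exposed: SNR ~63; under then rebalanced: SNR ~32. Same midtone, twice the noise.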

Steve Rosenblum - Two things:

1) Sensors are linear devices, and as such don't apply any curves to the data. More light = a higher value in a linear fashion. Any curves applied are done in software (or firmware in the camera).

2) The Expose To The Right (ETTR) philosophy isn't really about capturing the "most data". Sure, there are more levels toward the highlight end, but that's not the real advantage. The advantage is that you are over-exposing the shadows. This serves to increase the signal-to-noise ratio of the sensor, which gives you cleaner, smoother shadows. Then when you correct the exposure in software, you get a very clean file, because you've maximized signal-to-noise.

There is software out there that will combine two images - one exposed normally, and one at 3-4 stops overexposed (WAY to the right). The shadow areas are taken from the overexposed image after the exposure is corrected. This yields very clean files.
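
I don't know which particular software is meant, but the basic blend can be sketched in a few lines. The cutoff and feathering below are my own choices; the idea is just to scale the overexposed frame back down and use it only where the normal frame is dark.

import numpy as np

def blend_exposures(normal, overexposed, stops=3, shadow_cutoff=0.15):
    """normal and overexposed are linear images scaled 0..1; the second frame
    is assumed to have been exposed `stops` stops brighter than the first."""
    scaled = np.clip(overexposed / 2.0 ** stops, 0.0, 1.0)
    # Weight ramps from 1 in the deepest shadows to 0 above the cutoff,
    # so the substitution feathers in without a visible seam.
    weight = np.clip((shadow_cutoff - normal) / shadow_cutoff, 0.0, 1.0)
    return weight * scaled + (1.0 - weight) * normal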

Interesting article. I read someplace else that viewing conditions also affect dynamic range -- that is, you will see less or more of it depending on the quality of the light falling on a print.

However, there seems to be a lot of argument about the DR of film vs. digital. Here's one site that will tell you that high-end digital is better than film:

www.clarkvision.com/imagedetail/dynamicrange2/index.html

Being an intellectual, rather than somebody who actually knows anything, I have no way of really judging the quality of the clarkvision arguments. 8-)

JC

Thanks Mike - A very clear summary in relatively few words. As someone who is frequently captivated by scenes with high SBR, I've done a fair amount of experimenting with HDR. Another thing that can make HDR images look unrealistic is the loss of local detail that goes along with capturing the overall brightness range. I'm thinking this is analogous to your description of what happens when you compress the information from a high-brightness-range capture into the Dmax-to-paper-white range of the output device, be it monitor or print.

1) Comparing the stops of DR between film and digital is not fair. Nearly every slide film (and negative film) has a smooth, pleasing rolloff at the high end of its range. This does not exist in most digital cameras. Even though a consumer digicam may capture 5 stops, images will show abrupt, displeasing clipping when you exceed the DR at the high end. I find that I need almost another 2 stops to create a smooth, film-like highlight rolloff in digital (am I the only one with this problem?). (Would Fuji Velvia 50 be an example of "clipping" at the low end?)

2) MP, it is possible to make a high DR camera with smoother tonality across its range than a lesser DR camera. The Fuji S3 and S5 cameras are great examples (when compared to "most" pre-2008 DX cameras).

Not sure if I understand this correctly, but suppose you have two cameras capturing 14-bit data: one spreads 12 stops over the 14 bits (a high-DR camera), the other only 6 stops (a low-DR camera). Suppose you capture a scene with 6 stops of contrast with both cameras. It seems to me that the latter will record finer transitions than the high-DR camera, because it uses the complete data range for those 6 stops where the other doesn't. Did I miss something?

Jeff, that's a good point, and it's why the A900 is coming up so much in DR conversations. It has a wonderful highlight rolloff.

Luke said
"I have read way too many times that the human eye can handle a much larger dynamic range than a camera sensor can. It's just not true! Our irises adjust to handle bright or dark, but NOT all at once. "

AFAIK, iris/pupil size is _not_ the only way that the eye adapts. Different parts of the retina exposed to different light levels can adjust their sensitivity independently of each other. So, one part exposed to bright light can reduce its sensitivity; another part exposed to dim light can independently increase its sensitivity. The pupil size is only one factor involved, which controls the maximum brightness.

This is why eyes are (currently) cleverer than film or digital - until Fuji gives each photosite the ability to adjust its sensitivity independently :-)

Jeff Hartge,
That's a huge issue for me. Highlight clipping is a very big bugbear when you're coming from B&W film. It's probably *THE* thing I like least about digital.

Not only that, but rolloff in general is something I'm very sensitive to. I had to learn by a long and painful process that I prefer sealed-box (acoustic suspension) speakers because of their more gradual rolloff in the bass. Almost no one makes them any more. I could write a long essay on *that* topic.

As a B&W printer I almost always prefer FDP (film-developer-paper) combinations that had quite low highlight contrast (for those of you who are uninitiated, variable contrast B&W papers tend to have fixed highlight contrast; the shadows are varied with filters).

I suspect this is one reason I like the Sony A900 so much....

Mike

Dear mp,

What you missed is that within reason, you don't care.

You can't see 14 bits' worth of tonality. In a photograph viewed under truly optimal conditions, you can't see more than 250-300 different tonal levels (circa 8 bits). You need more bits than that because the eye doesn't map tones the same way the computer does, but 8 does a plausible job. All your output devices are 8-bit devices, unless you're buying very high-end I/O.

(an aside: for true tonal perfection, you need a coupla more bits than 8, but we're trying to be realistic, here)

So, yeah, a 12-stop-range camera that's photographing a 6-stop-range scene only has 8 bits of tone depth instead of 14... but you can't see the difference.

pax / Ctein
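
For anyone curious where a figure like "8 bits instead of 14" can come from, here is a back-of-the-envelope sketch. The assumptions are mine: a linear 14-bit raw file, a 12-stop sensor, and a 6-stop scene that happens to sit in the lower half of that range (its brightest tone 6 stops below saturation).

def levels_available(file_bits, stops_below_saturation, scene_stops):
    top = 2 ** file_bits / 2 ** stops_below_saturation
    bottom = top / 2 ** scene_stops
    return int(top - bottom)

print(levels_available(14, 6, 6))   # ~252 levels, roughly 8 bits
print(levels_available(14, 0, 6))   # the same scene exposed to the right: ~16128 levels

Either count is at or well beyond the 250-300 tones the eye can distinguish in a print, which is why the difference doesn't show.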

Dear mp,

Gaps in histograms are far less of a problem in real-world photographs than people make them out to be. Modest contrast changes do not produce enough binning in the histogram to be particularly visible. The difference in overall contrast between the two Fuji photographs at the beginning of this article is roughly the difference between Grade 2 and Grade 4 paper. A contrast change of that amount does not produce binning that is worth worrying about.

In the larger scope of things, if you're really concerned with getting good and appropriate tonality in your photographs, you should be recording (and working on) your photographs in 16-bit mode. If you're not, you're handicapping yourself very, very badly. Wrestling with the subtleties of contrast and gradation while working in 8-bit mode is like running a marathon in flip-flops. I'm not saying you can't do it; I am saying you're making your life very much harder than it needs to be.
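
As a rough illustration of both points, here is a small sketch: a smooth tonal ramp is quantized to an 8-bit or a 16-bit working file, given a contrast boost of roughly 2x (my stand-in for a Grade 2 to Grade 4 sized change), and then counted in the final 8-bit output. The ramp and the amount of stretch are assumptions, not measurements of the Fuji files.

import numpy as np

scene = np.linspace(0.25, 0.75, 10_000)   # a smooth ramp spanning the midtones

def stretch(values, working_bits):
    levels = 2 ** working_bits - 1
    coded = np.round(values * levels) / levels                 # quantize to working depth
    boosted = np.clip((coded - 0.5) * 2.0 + 0.5, 0.0, 1.0)     # roughly 2x contrast
    return np.round(boosted * 255)                             # final 8-bit output

for bits in (8, 16):
    used = np.unique(stretch(scene, bits)).size
    print(f"{bits}-bit working file: {used} of 256 output levels used")
# The 8-bit workflow leaves alternate output levels empty (the comb-shaped
# histogram people worry about); the 16-bit workflow fills essentially all of them.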

----------------------

Dear Ginger,

Amen! I have loved both of my Fuji cameras; I have hated the software that comes with them. I think I referred to it as an embarrassment in my reviews, and I'll stick with that assessment. It's horribly slow and clunky, and when converting RAW files from my S100 it visibly and blatantly clips highlights and shadows.

In as much fairness as I can muster, I can't say that other camera manufacturers' software might not be just as bad. But I don't grade this stuff on a curve.


~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

Dear JC,

It was here that you read it and it was me that you read:

http://tinyurl.com/4ca9er

Must be true, then [grin].


The clarkvision tests are terrible from a scientific point of view. They're very badly designed, and the results don't come close to correctly portraying the inherent characteristics of the three media.

But they are GREAT real-world tests, in that they accurately reflect the results most photographers would get making photographs. Which is what matters in this discussion.

pax / Ctein

Mike et al -

Thanks for the most important article on craft to hit the photo web pages in a long time.

At last folks will realize they're shooting the digital version of black-and-white Kodachrome. And, hopefully, their pictures will be much better for it.

Bill

Bravo! Bravo!

What a lucid explanation.

It's time for us, the new generation of digital photographers, to get away from pixel peeping and instead get a strong grasp of the basics underlying all photography. And it is high time we restored the level of dialogue to what it was in the old 'Camera & Darkroom' and 'Photo Techniques' days.

You deserve the gratitude of the online photographic community for the free, high quality public education. Yet again.

Oren, Ctein and Carl. Thanks and props to you guys too.

Ctein, thanks for the clarification. I was thinking theoretically and thought that the gaps in the histogram would become visible in the end result when increasing the contrast. Good to hear that this isn't the case. Anyway, the Fuji allows you to switch between high and normal DR, so this isn't an issue.

I really liked this post, Mike, not because it contained anything I had not known before, but because it is so extremely well put. In other words, while I knew everything that you wrote about, I would not have been able to formulate it this way if asked to explain dynamic range to someone new to the subject. You seem to have found just the right words - all I can say is bravo!

Printers above 8 bits are relatively common these days, and not THAT expensive. All the recent Canon iPF series claim "16-bit data" but are actually 12-bit printers, and the newest Epsons are somewhere over 8 bits as well. Over 8 bits on a monitor is very expensive, but most newer inkjet printers (primarily 17 inches and up) are 12-bit devices.

-Dan

Mike -

Here's some info on a new camera that Ricoh is releasing that supposedly produces in-camera HDR (i.e., already tone-mapped, not 32-bit EXR or HDR files).

Take a look:
http://www.adorama.com/catalog.tpl?op=NewsDesk_Internal&article_num=021909-2

Seinberg
