
Tuesday, 10 October 2017


On scenes like this, I also wonder how much we're subconsciously adjusting the exposure continuously through our eyes. By that I mean: when you're focused on the rising steam from the cup, are all the shadows and highlights visible as in your photo? Or do we move from subject to subject within the frame, exposing (using our pupils) for each part so rapidly that we think we're exposing for the whole scene, while in fact we're doing an internal kind of HDR?

Shouldn't HDR be more accurately termed LDR? The whole reason for HDR is that the sensor/monitor/printer/paper cannot capture the dynamic range that often exists in real life, so we end up having to compress the dynamic range.

I'm not sure I like the "correction" part, as it implies an image is only "correct" if it looks just how your eyes saw it. If that were the case, B&W images would be entirely wrong ;)

Perhaps there needs to be some baseline from which one deviates in the interests of artistic expression when developing raw images, but should that be "how the average human perceives things"?

It reminds me of the argument for increasing framerates for movies, since that would make them more "realistic", as if that was ever the goal of movies in the first place.

Whatever you want to call them, I know I certainly use those techniques all the time. When we dodged and burned, or Ansel created his black skies, no one complained.
Technically HDR has typically meant combining multiple exposures rather than recovering details that already exist in the frame. But in any case they are nothing more than techniques that are now available to us. Like any technique, it can be ignored, used sparingly, or used by the bucketful. Some folks will like it, some won't.
I say use whatever you like, in the amount that makes you happy.

I also occasionally combine images into panoramas, sometimes to give an 85mm lens the angle of view of a 35, other times to create a wider vista. They Are Just Techniques, used to match a print to what we think we saw.
If I like the results, I put the work out there and let it stand on its own.
If someone asks how I made the picture, I'm happy to tell them, but other than that it's just my work.
I think we spend too much time worrying about the names of techniques (and being 'for' or 'against' them), which in the end really shouldn't matter at all.

"It's too bad we don't have distinct terms for HDR that's obvious and HDR that's invisible."
I think you may have just coined a term for the latter: ERC. I'm going to start using it.

Some vague memory of doing that in the darkroom as well...

Seeing the clever dripper in the picture makes me say: "We are rather overdue for some kind of posting on coffee - roasting or consuming ..."

Whether you're working with camera-generated JPEGs or raw files, the tone curve of any digital image is always going to be set algorithmically—either by the camera firmware or some post-processing program—because all the light sensor can capture are individual RGB luminance values. Sure, you can overdo the expansion or compression of highlights and shadows. (Or your camera can.) But I would argue that the distinction between “natural” and “artificial” is elusive—and essentially subjective, because there is no way to know precisely how the subject of a photograph would be perceived through the eyes and brain of someone else.

Two examples, from a few days my wife and I recently spent in St. John’s, Newfoundland:

This photo of Government House ( http://www.flickr.com/photos/chriskernpix/36892993121/in/datetaken-public/ ) seems to me to have “that HDR look,” because the high sun angle left the facade of the building almost entirely in shadow and I had to compensate for that to make the detail of the brickwork visible. Actually, though, the processed photo is a fairly accurate representation of the way I remember the scene.

By contrast—pun intended—this photo of the Basilica of St. John the Baptist ( https://www.flickr.com/photos/chriskernpix/36845750406/in/datetaken-public/ ) looks quite natural to me, although it is the result of very aggressive post-processing. And it doesn’t come close to representing the extremes of light and shadow that I was experiencing in the dark room.

(By the way, both of these images have been subjected during post-processing to yet another “artificial” algorithmic modification: perspective correction. That, I would argue, is essential to making them look “natural.”)

Burn highlights and crush blacks... or vice versa. That is not HDR, that is ERC. Real HDR can span 32 EVs (and more) and has only one real use: to light a 3D scene with image-based lighting (IBL), in which case you use a .exr 360-degree skydome to light the scene.

And then there is HDR TV, and there are HDR monitors (LG has some in their newest 4K range, for instance). Which is not really HDR either, but uses an extended color gamut and 10-bit-capable panels to display that gamut (promising deeper blacks and whiter whites). I have seen some (under less than favourable lighting conditions) but I was marvelously underwhelmed. Great for gaming if you want to see the zombies in the darker parts of the dungeons, where they usually wait to be popped.

It's like watching those videos shot at night with some owl-eyed Sony. At first glance you stand there collecting your lower jaw from the floor. But at a second glance you notice that the atmosphere of the night shot was totally lost. Nice for spies and voyeurs, but for photographic art it has limited use, right?

Greets, Ed.

Yes, indeed, the advent of "Shadows" adjustment sliders was a boon to photo editing. Ditto its mate, "Highlights".

Some brief and rather disjointed thoughts on this subject. First, this type of characteristic is a real object lesson in "know thy camera", and has parallels in chemical photography. Folks who used the same film stocks for years got to know how those films would handle various contrast ratios. By extension, they learned how much to trust their camera's light meter. The same considerations apply here, although on a rather more elaborate scale. As you "test" the X-T2 and A6500 you'll want to experiment with such high-contrast scenes to compare how the cameras perform. Fuji's "X-Trans" sensors are excluded from the DXOMark "dynamic range" rating world so you'll be on your own. As you're experimenting be sure to also manipulate each camera's various metering modes, as they'll produce quite different results.

Second, this shadow/highlight recovery is one of the most fundamental reasons for recording in RAW format and for using the best sensors you can afford. You may not actually want to lift shadows nearly as high as you have in your example, but it's good to know that you can, eh? Personally, whenever possible I try to fill... with anything... flash, reflection, etc. Your back-end control improves exponentially when you give the computer better luma/chroma data.

Third, not to beat this too hard, but "HDR" has gotten a bad rep mainly due to amateur photographers' extreme ham-handed applications and poor understanding of light. When an image's dynamic range is skillfully managed it's completely transparent to viewers.

And lastly, as you experiment with your two cameras also take note that shadow-lifting is not a free deal. What you gain in the luminance elasticity you will lose, to varying degrees, in color saturation. The "varying degrees" is the interesting part, as it can vary by sensor. Be sure to put colorful stuff in the shadows, such as fruit, to compare sensors.

Got a few technical definitions, although someone like Thom would be a better reference:
- HDR: capturing multiple images where a scene's DR exceeds the capture device's DR, and combining them so that the final image contains the full scene DR to play with when editing.
- Tone mapping: capturing multiple images where a scene's DR exceeds the capture device's, and combining them so that the final image has all intended scene detail to play with when editing. I.e., you have limited DR, not full scene DR; it just looks like it. I believe a lot of early implementations of HDR were actually tone mapping, due to it being easier/cheaper to implement. I haven't looked into it for 5 years, so I can't comment on the latest camera/software implementations.
- ERC / adjusting exposure of a single image (whatever was in the capture device's DR): titivating the sliders or tone curves to produce the final image. Given camera and software improvements, it may have reduced the need for the above methods.
These methods can all produce similar looks, just different technical paths to get there.
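Those three paths can be sketched with a toy example in a few lines of Python. The luminance numbers, the 4-stop bracket, and the Reinhard-style curve are all made up purely for illustration; this is not any real camera's or raw converter's pipeline:

```python
# Toy scene with more dynamic range than one "capture" can hold,
# expressed as linear luminance values (arbitrary units).
scene = [0.001, 0.05, 0.5, 4.0, 16.0]

def capture(scene, exposure, max_value=1.0):
    """Simulate a limited-DR sensor: scale by exposure, clip at max_value."""
    return [min(v * exposure, max_value) for v in scene]

dark = capture(scene, exposure=1 / 16)   # protects the highlights
bright = capture(scene, exposure=1.0)    # protects the shadows

# "HDR" merge: for each pixel, keep the exposure that didn't clip,
# rescaled back to scene-referred values.
merged = [b if b < 1.0 else d * 16 for d, b in zip(dark, bright)]

# Tone mapping / ERC: compress the merged range into a 0-1 display
# range with a simple curve (here x / (1 + x), a Reinhard-style curve).
display = [v / (1 + v) for v in merged]
```

The merge step recovers the full scene range; the final curve is where the compression (and the arguing about "the HDR look") happens.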

Is it necessary to make a picture look like the scene you see with your own eyes? The photos that illustrate this entry make a strong case for those who say no.
Admittedly, this is a banal scene, but the out-of-camera JPEG image gives it a brooding, mysterious feel. It plays with one's imagination - or at least it would, if you cropped the bananas away.
On the other hand, the ERC version is just so boring it doesn't even reach the lowest standard of interestingness (a detestable word invented by the folks at Flickr, but it can apply here). It's merely informative, but what does it tell? That you like coffee and bananas?
I don't mean to be rude. I understand you were just making a point with the photographs and had no intention of coming up with a masterpiece - but you almost did so with the untreated image!
I know this comment has strayed off-topic, but the point is - one photograph expresses an idea, the other just plainly depicts some inert (and visually uninteresting) objects. ERC, HDR or whatever are useless if they turn a picture into something deeply uninteresting. Trying to show everything that's in the scene isn't always a good idea. (Incidentally, that's also why sexy clothes win over nudity.)

[What the picture was meant to tell me was how much room for correction the GX8 raw files would give me. I was reviewing the camera at the time. I don't do calibrated tests; I just take pictures of some scenes that I know are tough and see how the camera does with it to get a seat-of-the-pants feel for it.

BTW, having people evaluate test shots from an aesthetic viewpoint is one of the hazards of the kind of nonscientific camera reviewing I've always done. I even wrote a post about it long ago. I think I was reviewing a Zeiss ZF 28mm ƒ/2 at that time. --Mike]

To make a photo look like it did when the photographer viewed the original scene: that seems like a fair standard. My own is to have the image look like I imagined it could, given the camera, film or sensor ISO, lens, aperture choice, selected shutter speed, and digital or darkroom tools available. "Previsualization," then. Sometimes the image is very different from the original scene.

I had a good chuckle at your description of the products in the picture. It took me back to John D McDonald's Travis McGee stories. Bought my first HiFi speakers after McGee mentioned them in a story.

The Clever Coffee Dripper is intriguing. Looks like it has the potential to expand the dynamic range of my morning cup.

The steam looks wonderful.

And I agree a lot with your sentiments here.

For me, adjusting highlights is straightforward. It's the shadow adjustments with which I'm capricious. From one photo to the next it's anybody's guess where the shadows fall. I've stopped worrying 'bout it. Photographs are mutable like a poem read out loud at different times and occasions.

PS Interesting shadow/reflection in your photograph.

I rather like the original version, though I do find some of the background detail a bit distracting. In the ERC version the background looks pretty decent, but the foreground doesn't to me, especially the reflection in the stone counter-top; that looks flat-out wrong (visually).

For my $0.02 worth, there is no requirement to make an art photo look like anything in particular except what you want it to look like; it's all artistic choice. I think pretty nearly all of us would agree with that, even when we have very strong preferences in some areas. Everybody doesn't have to make photos I like! (Which is good, because that's hard; even I can't do it reliably.)

There's an overly-enhanced local-contrast look that I first saw with over-cooked HDR that I kind of liked a little bit for a while (luckily I didn't do a lot of it that I now have to live down); I suspect that may be responsible for a lot of the avowed hatred of HDR.

You are all correct, I think. What most people mean when they say 'HDR' is actually dynamic range *compression*, i.e. mapping a wider contrast range onto a medium with a narrower contrast range. This is what we're doing with the sliders, with dodging and burning, and when digitally combining multiple bracketed exposures. The difference is doing it at different points in the process, the "limited medium" being, respectively, the screen, the paper, or the camera and sensor. And apparently the combining of multiple exposures is akin to what the eye does as it scans a high-contrast scene.

And, yes, it pertains to that discussion of "realism" from the other day, which I now realize was actually about "naturalism".

Camille Silvy was an HDR enthusiast...


A single frame from last month that brings to mind the wonderful dynamic range in the latest set of sensors, and also how I often pull the shadow/highlight sliders to the extreme in Camera Raw with my Sony a7RII. How the JPEG would look, I leave to you to imagine. Somewhere Festival 2017, Stevns Klint, sunrise.

When I started working in Hollywood in the 1970s, you had to get it right in camera. Sure, a good color timer could help you out, but the idea was to use them as little as possible (cost, time, etc.).

On a movie set you set the key with your meter, and the rest you do by eye. I may be color-blind, but there are very few people who can see contrast as well as I do. This holds true for almost all DPs and many Chief Lighting Techs. How well do you see contrast? Check it out.

One of the things you learn is that it is possible to over-fill. Anything over about 14 stops DR looks wrong. You start to get that noonday-sun-with-a-black-sky look. Don't do it.

My mother used to say, "All his taste is in his mouth." I'm always amazed by what people share 8-0

ERC fits in nicely with Ctein's ER. IMO, dynamic range applies more appropriately to audio (sound) than to photographs (light). If the frequency range of sound audible to human ears is 20-20,000 Hz, what's the range in EV perceptible to uncorrected human vision? I'm sure the latter exceeds "what the camera sees" (even fiftyish eyes vs. the best sensors).

Maybe it's just because I was brought up on Kodachrome but I like the crushed shadows in the first image. It's almost like a backlit spotlight on the coffee and my eye really likes the effect.

Interestingly, I find the same holds true when I'm shooting my own photos. It's rare that I try bringing up shadows much in post-processing. And on the rare occasions when I bracket photos and combine them in post to create an HDR image, I find that it's only so I can put a digital graduated filter on instead of doing so with a physical filter in the field. If I try to bring up the shadows throughout the whole of an image, the effect just never looks "right" to me. Again, I'm guessing that's related to being brought up in a family of stock photographers who all shot slide film, mostly Kodachrome.

Oh, and about making it look like what the photographer saw—I think that's right, if you understand "saw" to mean the entire process, which takes place mainly in the brain not just the eyes.

Maybe it's just for certain kinds of quick work, or certain styles of shooting. Frequently, what happens with me is that something catches my attention, and then I figure out how to recover what caught my attention in my head, and then figure out how to render it somehow in a frame. So you could call that "what I saw", but it's often a fleeting impression that goes away if confronted directly, that I have to dig back to.

And I wonder if some contemplative shooters get to the point where they construct possibilities intellectually that don't first present themselves visually, some of the time. Maybe. That's me speculating. It's always safe to say people don't all do it (whatever, anything) the same way, but that doesn't mean people actually do do it any particular way I make up in my head necessarily.

Well, if you are our age, in film terms I guess the camera is seeing in Kodachrome (reversal) and you are seeing in Kodacolor (negative). My problem has always been that unless I am consciously avoiding it, I am attracted to things that look like the first version to my eye and are unphotographable.

"By the way, both of these images have been subjected during post-processing to yet another “artificial” algorithmic modification: perspective correction."

In some software applications, perspective correction is done by grabbing a corner of the image with the cursor, and pushing it into "proper" perspective.
In the darkroom, we did this in three dimensions: we just lifted the corner of the printing easel and moved it up and down to get the desired effect.

ERC is what our brain does every time we look at a high contrast scene. Our memory is a composite of several rapid adjustments, all of which have a quite limited DR (less than most cameras).

Put them together and we can remember all the features in scenes with up to 20 EV difference. (Or so it is claimed).

So, by recreating that memory, we are only doing what our brain did in the first place, but the limitation is always the medium. If we try to reproduce the full range of a 13 EV image on a 7 EV medium, it never looks quite right, because there is less contrast than we remember seeing in the individual features.
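The arithmetic behind that mismatch, using those same numbers, is just powers of two (a back-of-the-envelope sketch, since EV differences are base-2 logarithms of luminance ratios):

```python
# A 13 EV scene squeezed onto a 7 EV medium: the contrast ratio
# must be compressed by a factor of 2**(13 - 7) = 64.
scene_ratio = 2 ** 13    # scene spans roughly 8192:1
medium_ratio = 2 ** 7    # medium spans 128:1
compression = scene_ratio / medium_ratio
```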

V2 looks overcooked to me, too (FWIW).

In Lightroom terms, even when I lift the shadows, I tend to crush the blacks back down.

I propose you set up the scene again and do an experiment. You should look at a particular point within the scene and not move your eyes from that point. Then try to "scan" over your entire field of vision without moving your eyes. Look at the relative values within the scene.
Repeat by picking another point (much brighter or much darker) and, again, not moving your eyes, scan the entire scene.
Do this several times.
Finally look over the entire scene by moving your eyes around.
This works best with a scene of very high contrast (which appears to be the case in your image above).

Oh, and yes, I also prefer the deep shadow version, although it’s a matter of one's expressive objective.

I am not sure I understand your post. Why all the talk of HDR?

All digital cameras capture a raw image. In order to view this raw image, it must be converted into a viewable form; in camera, this is a JPEG. The raw-to-JPEG conversion is an “automatic” function. It is common to refer to the raw file as the digital negative; however, in the case of digital, the negative cannot be seen unless processed.

Therefore an OOC JPEG is not some god-given version of the photo, just an automatic conversion of the “negative” into a print (JPEG). The film equivalent is taking your roll of film down to the drugstore (American terminology) to be processed and getting back a set of “automatic” 6x4 prints.

These gave you an idea of what you had captured, and then, if you were so inclined, you could print it yourself, choosing the type of paper and developer and dodging and burning to your heart’s content to give “you” the image that satisfied your creative intent.

Obviously, someone with your knowledge did not actually go through the drugstore route, and you might have even deliberately biased your camera exposure, knowing that you were going to pull or push in development.
When you were varying the developer, dodging, burning, etc., you were using your skill and technical abilities to obtain the highest quality from the “technology.” Compared to having the print made at the drugstore, could you be said to be using extreme “HDR” techniques, compared to the “true” unmanipulated drugstore photograph?
Moving a slider marked HDR highlight is, in actuality, no different from creating a sophisticated curves adjustment, just much easier. No one complained when you did that in Photoshop; you were just congratulated on your PS skills.

A negative or a raw file has a finite dynamic range. For example, the exposure is either within that range (0-255) or blown out, i.e. 255 is pure white. You cannot recover blown highlights; there is no information. Programs like LR can reconstruct (create) data if not all three channels are blown, but there is no data in a blown channel.
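A toy sketch of why that is, using an 8-bit clip function invented purely for illustration (not any particular raw converter's code):

```python
def clip8(value):
    """Quantize a linear value to an 8-bit channel, clipping at 0 and 255."""
    return min(max(int(round(value)), 0), 255)

# Two very different scene values land on the same clipped pixel value:
assert clip8(300) == 255
assert clip8(900) == 255
# Once both map to 255, no later processing can tell them apart --
# the highlight detail is gone, not merely hidden.
```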

Therefore, so-called HDR processing is no different from dodging and burning in the darkroom.

The other aspect of HDR processing is simply that of “artistic intent” and I would have thought someone of your background would stress this aspect. No, garish colours are not my “thing” but that is simply my artistic bias. I believe that Ansel Easton Adams, would produce different versions of his famous prints at different times of his career – I look to you for confirmation.

Garish HDR is nothing like reality and neither is black and white! Your “corrected” photo of your coffee is just one version of many that could be created using the technical tools available within the limits of the technology (camera). Talk of HDR, as a bad thing, is simply not appropriate. You may make an artistic choice to not use all of the tones available (low HDR😊) or go mad and increase saturation until your eyeballs complain.

Assessment of the result should have no consideration of the technology. You are aware of the technology and may be able to spot that the fact you can “see” the outside view through a window, of an interior shot. Therefore HDR (careful use of the available technology) has been used – so what? What of the “normal” viewer of a photograph? They just see the image and form an opinion.

It is like an artist looking at someone else’s work and commenting on the brush stroke technique. Technically interesting but of no concern to the viewing public. They may like the picture, god forbid, as a picture.

I always viewed your blog as ”not about the gear / technique”, but about the art. A refreshing change from the usual sort of photo blog. Have I been wrong?

Before long we might have digital sensors that we can afford with enough dynamic range to render HDR obsolete.

Call it whatever you like regarding dynamic range or exposure compensation. I would have loved to have seen a comparison to film (35mm, of course), using a medium-speed color film as well as a medium-speed B&W film. How about it, Mike, feeling up to it? PLEASE!!

Natural HDR avoids over-lifting the shadows and adding too much micro-contrast. The best program I've ever found for this is SNS-Pro, with, if needed, some manual adjustments in PS.

Here's a link to one of my best past examples of 360 VR Natural HDR which can for the most part always benefit from HDR when shooting inside with lots of windows or even extremely unequal lighting. You may need Flash installed in your browser to view this.


The 360 VR's are of the second home of the main founder of my home town, Quincy Illinois. I did this work for the non profit as a donation.

If you like historical sites, and want to see one of the first indoor bathrooms in the mid-west, then this is for you.
