
Wednesday, 19 January 2022

Comments


It isn't just tail lights. My wife's garden has some lovely red peonies, but they rarely photograph well: they are a deep rich red that often comes out flat and washed out, or edging into magenta on screen. Other flower colours come out stunning, though I've learned to be careful with begonia orange and yellow. So far my only solution has been to shoot in the lowest light I can.

What metering mode are you using? You could try spot metering the actual light and giving +2 or +3 stops of exposure compensation, which should avoid clipping the lights, but the rest of the image is probably going to need some serious shadow recovery. I remember watching a video from a street photographer recently in which he talked about using -1 stop of exposure compensation with normal metering methods for night photography, but I doubt he was concerned about not clipping the actual lights themselves.
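
The stops arithmetic behind that suggestion can be sketched in a few lines; the metered values below are invented for illustration, not measured. The idea is to spot-meter the light itself, then open up from that reading so the light sits near, but below, clipping.

```python
# Stops arithmetic for the spot-metering suggestion above. Each stop
# of positive compensation doubles the exposure time. The example
# values are invented, not measured.

def compensate(shutter_seconds: float, stops: float) -> float:
    """Shutter speed after applying the given exposure compensation."""
    return shutter_seconds * (2.0 ** stops)

# Spot reading of the tail light says 1/2000 s; opening up +2 stops
# from that reading gives 1/500 s for the actual exposure.
print(compensate(1 / 2000, 2))   # 0.002 (i.e. 1/500 s)
```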

If you're using your Fujifilm camera you could also try its 400% Dynamic Range option, but you'll need to set your ISO to 800 or above to use that setting. I don't know whether that would avoid clipping the actual lights themselves, but it should produce better results than you get without the Dynamic Range option.

https://www.amazon.com.au/Saul-Leiter/dp/3868282580

Digital has a problem with red. (As in blowing it out, first to orange and then…)


This could be related.

The only way I can see square is comparable to Ludovico's Technique [from Clockwork Orange] - referring to the actual mechanical process, sans the sensory stimuli . . . my eyes simply get chilly . . .

It's simply the consequence of exceeding the dynamic limits of the sensor's exposure range. Today, those nearby tail lights are far brighter (especially if from modern hi-output LEDs) than anything else in a typical nighttime composition. They're only a small portion of the scene, so the camera's automatically derived metering is going to (mostly) disregard that hot spot when setting the final (nighttime) exposure. It's the same old issue that compromises sunset photos; you have to keep the hot spot (sun) out of the frame.

It's likely that you won't sense this exposure disparity quite so often in the case of traffic lights at night because the signal will generally be farther away and less dominant in the composition.

LED lights are more intense than you realize. Perhaps part of the reason is that the color is more "pure" and collimated than incandescent or halogen. If they don't bother your eyes ... you've had eye surgery recently, haven't you? ... perhaps your sensitivity to the red portion of the spectrum is muted.

(Just flinging out a SWAG)

I'm curious about the comments on this. I had hoped that X-Trans would help with this (not much), and more resolution does seem to help, but digital sensors really do not like red LEDs, or colored LEDs in general; blue LEDs do weird things to a few of my cameras as well. I need to try more with my IR X-T1 and see if it's a case of bleed-through IR wavelengths, maybe.

“ Does anyone know why digital has such trouble rendering red taillights and traffic signals? ”

In my experience this is a matter of simple exposure. Traffic lights and auto lights are lensed to focus as much light as possible straight toward intended viewers, making matters worse. Try setting up an exposure bracket burst, depending on how dark your surroundings are. It’s worked for me in situations where I’ve wanted to preserve such light colors in twilight or evening.

Love the flowers, and the light falling on them. Can I make a couple of suggestions: I find the partial appearance of the lamp shade (on the right) a bit of a distraction, so I wonder how it would look if it was out of frame? And also, maybe, pull the bust forward a bit so it gets some direct light as well? But the petals and the light fall-off, right to left, are beautiful.

I don't think I've ever, deliberately, taken a picture of a taillight. With anything.

TLDR: read Richard Gregory’s book Eye and Brain. As far as I know nothing in the past 50 years is a better explanation of how humans see.

A scattering of thoughts follows:
It’s not that the camera is getting it wrong; it’s that your eyes are. Also, your friends are overexposing the red. Coincidentally, Pete Turner underexposed his red photos, not for exactly the same reasons but for similar ones. Anyway, underexpose or put a light green filter on the camera to bring the red back into gamut.

Human vision doesn’t really see absolute colors because until the last century or so it never had any to look at. It sees differences.

If you really want to see ugly LED lights, go to a live music venue where they are using RGB LED spotlights. Horrible, awful, no-good lights. I despise colored LED stage lighting. Single (or three) wavelength lights are an abomination in most situations, although they have some interesting advantages in reproduction of color photographic prints.

Where was I?
That said…

The human eye is just barely sensitive to red; the human brain, on the other hand, just loves red. Film, unless it is treated with dyes, is also just barely sensitive to red. The history of the dyes that are added to film to make it red-sensitive is fascinating. Electronic sensors, on the other hand, just love red; it costs money to filter out the red. Electronic sensors aren’t very blue-sensitive, and all too often, in order to get adequate blue exposure, you will clip the red.

I was working on some tests of using color-correction filters on digital cameras in artificial light before I moved to a rural environment and stopped working on it. The farthest I got was finding that a magenta filter in daylight kept greens from clipping, and a blue filter like an 80B really helped under tungsten light.

Somewhere in there it should be mentioned that the human eye, film, and sensors are really all about green.

Color as we humans know it is a pretty weird construct, and there is a lot of stuff we just don’t see, plus stuff that we see that doesn’t actually exist; magenta, for instance. Recreating all those weird artifacts is hard, but we get a break because all a color photo has to do is fool our visual system, not actually replicate reality. Then there are imaginary colors that are not physically possible but that your brain is built to "see." Doctor Edwin Land was doing some interesting work in that field before he got interested in photography.
Read the book if you haven’t already. 90% of my incoherent ramblings are covered there in a coherent manner.

I think that most cameras will be irritated with those bright lights attacking the sensor, and underexposing is your only option here. And yes, medium format probably has a slightly bigger dynamic range than smaller ones...

One thing I noticed is that, if you look at the business end of a TV infra-red remote via the screen on a mirror-less camera, you can see the flash when you press a button.

So, something to do with the amount of infra-red in red (led?) lights and the sensitivity of sensors to those wavelengths? IIRC, that was a problem on that score with the Leica M8 - they had to issue a special filter to deal with it.


LED lights have extremely non-thermal spectrum. If tail lights are traditional red LEDs (I do not know if they are) it will be narrow, very peaky spectrum in what our eyes see as red. So think then the question is: does this agree with what the red filter on your sensor sees as red or is it at a wavelength which also leaks into the green filter? Remembering that the filters on the sensor were designed to produce good renditions of light reflected from thermal sources notably the Sun.

(Note also this means red LEDs make very good darkroom safelights as there really is no short wavelength light at all: is very hard to make really bright thermal source for which this is the case.)

Digital has a different response to highlights. Once the sensels (photodiodes) are saturated, no more information can be gathered. Film has a characteristic curve that gradually rolls off at the top end and can reproduce extreme highlights better than digital. This especially applies to colour negative.

Those lights are all LEDs and LEDs flicker -- so fast that humans can't see it and don't notice. Modern digital cameras are so fast that they "see" the flicker.

Tail lights use red LEDs, not white ones. The light from a red LED is concentrated in a small part of the visible spectrum, which I suspect the red filtered pixels on sensors are very sensitive to.

It might be so much more sensitive that it over influences the rendering of the green and blue filtered pixels; when the algorithm assesses the pixels to produce the full colour image, it reads as white or pink.

It would be worth trying a deep blue filter to correct for this, correcting in the white balance setting in the camera or in software later. Whether or not this works may point to a solution.

White LEDs produce light over a wider part of the spectrum, but in tail lights produce a washed out red; not what's required. LED bulbs made for replacing tungsten bulbs in motorcycle tail lights include a few white LEDs for illuminating the number plate.

With the car lights, I suspect you've clipped the red channel. You'll probably find your overall histogram looks OK, and you'll only notice the problem if you look at the red, green and blue channels. It isn't a topic that gets discussed a lot, though infrared shooters get concerned by it. You wouldn't have noticed the problem as frequently with film - it is primarily due to digital sensors consisting of red, green and blue filters, and they are more susceptible to clipping in those colours.
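
The per-channel check that comment describes is easy to script. A minimal sketch, assuming the image has already been loaded as an 8-bit NumPy array (the scene below is synthetic, for illustration only):

```python
import numpy as np

# Count clipped pixels per channel for an 8-bit RGB array of shape
# (height, width, 3). The combined luminance histogram can look fine
# while the red channel alone is pinned at 255.

def clipped_fraction(img, level=255):
    """Fraction of pixels at or above the clipping level, per channel."""
    return {name: float((img[..., i] >= level).mean())
            for i, name in enumerate(("red", "green", "blue"))}

# Synthetic example: a dark scene with one hot red light in a corner.
scene = np.full((100, 100, 3), 30, dtype=np.uint8)
scene[:10, :10, 0] = 255              # blow out red in a 10x10 patch
print(clipped_fraction(scene))        # red 0.01, green and blue 0.0
```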

It could be sensor blooming, where sufficient photons from that specific area cause leakage to adjacent pixels ... something you wouldn't see on film, but I'm guessing.

e.g. https://www.photometrics.com/learn/imaging-topics/saturation-and-blooming

Pak

LEDs really are powerful! My small bicycle rear light, that runs on two humble AAA batteries, is absolutely blinding if you look straight into it, from a short distance.

I believe all cameras have a cover glass in front of the sensor, for protection purposes. That could have an influence, with strong specular light sources.

It seems obvious that the rendering of traffic lights will depend on the time of day of the photo. I've shot traffic lights dozens of times during the day, both with Phase One backs and Nikon cameras, and they always come out clearly and with full tonality. It may be an issue only for nighttime, where the exposures are much longer but the intensity of the traffic light, which remains the same as during the day, is relatively much brighter.

The photo is overexposed. Look at the asphalt.
Once again the in-camera meter is fooled. If a scene is all black or all white, it's hard to get a proper exposure with anything except an incident meter.

I knew a DP who was trained in Weimar Germany. He used a Gossen Luna Pro and read the reflected light from the back of his hand.

Part of the camera reviews on dpreview utilizes a swatch of red fabric that used to trip up some of the older and smaller sensors; newer and larger sensors seem to have overcome this. Maybe the issue with taillights has to do with how sensors render areas of red that have a pattern. I'm shooting m4/3 (E-M1 II) and sometimes see issues in how it renders red objects.

I do think that the combo of a Lumix G9 with the Olympus 12-200mm lens is about the best all-around kit for older photographers who are not as flexible as we once were, so we can't always get closer to our subject. Also, trespassing is a worry in today's political climate. I make a lot of photos of old signs and buildings and need wide to long focal lengths. I have tried a Sony RX10 IV, but don't want to pay $1700 for a 1-inch sensor camera. The EVF on the G9 is a wonderful joy to look thru. I would rather have a flip-up LCD, but this is its only shortcoming for me. Plus, I have built up quite a large collection of Micro 4/3 lenses over the years.

Those red lights and the purple fringing didn't seem to be a problem with film, at least not that I remember.

Digital sensors don't always match our eyes as well as film did.

I have no idea what's at play with the sensors. Too technical for me, I'm afraid. But you'd think the problems would have been addressed by either sensor makers or software publishers by now.

Aside from slide film, I never thought about white balance when I used film. It was automatically corrected in the prints and "no one was the wiser".

I'll be curious to read the comments from people who really know about this topic!

"Or, I kept grabbing different cameras. Always the problem with owning more than one."

This seems so bass-akwards to me. "I want to shoot square. Where are some square subjects"

My inner dialog is more like "I want to photograph this subject. What camera, lens, etc. will do the best job? Can it be done to my satisfaction with what I have with me?"

"Is it because digital has some particular kind of problem with LEDs and most taillights and traffic signals are LEDs these days?"

Incandescent lights are broad spectrum. LEDs are inherently single frequency, although various strategies, such as multiple colors in one lamp, make this somewhat less so.

Cameras with RGB sensor arrays have to make up color values for the other two colors for each individual pixel from surrounding values. This doesn't work well with LEDs.
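
A toy sketch of why that matters, assuming naive neighbour averaging (real demosaicing algorithms are far more sophisticated): once the red-filtered neighbours clip, the interpolated red value at a green or blue site is pinned too, so the clipping spreads across the patch.

```python
# Naive interpolation sketch for the point above: at a site with no red
# filter, the missing red value is estimated from red-filtered
# neighbours. This is a simplification; real demosaicing is smarter.

def estimate_red(neighbour_reds):
    """Estimate the missing red value at a non-red site by averaging."""
    return sum(neighbour_reds) / len(neighbour_reds)

# With normal values the estimate is reasonable...
print(estimate_red([0.25, 0.75]))   # 0.5
# ...but once the neighbours are clipped at the maximum (1.0), the
# interpolated red is pinned at maximum as well.
print(estimate_red([1.0, 1.0]))     # 1.0
```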

Are you talking to Ctein? He has a passion for photographing holiday lights, so he has practical experience with the problem and better theoretical knowledge than I.

One reason he moved to an Oly E-M5 II a few years ago was the samples I sent him of straight shots of christmas tree lights, both incandescent and LED, and otherwise identical HR shots which I had then downsampled to the same size as the others.

Oly's HR Mode isn't only about resolution; it's also about color accuracy. Yes, the downsampled HR frames had noticeably better resolution of detail*, but also many fewer color troubles.

As the sensor is micro moved, each pixel location is exposed with each color sensel. There is no demosaicing. In that sense, it's like Foveon sensors.

The problems you are asking about are thus mostly or entirely artifacts of the nature of the light sources and the nature of the Bayer or Fuji X RGB array digital color technology.

* The demosaicing process also loses resolution. Ctein estimates about 50%.

Here's a photo with a red light or signal-

I'm sure others will also mention this, but red LEDs have a very narrow emission bandwidth compared to red-filtered incandescent lamps. That emission curve interacting with the passband curve of the red filters in camera sensors may lead to the camera "seeing" LED red differently from the human eye.
That said, if you want to get LED red lights to record the way you want, you should supplement any hypotheses with a large dose of experimentation using the camera in question.
Dave.

We had a long discussion on a similar topic here, but I don't think it resolved anything:
https://community.usa.canon.com/t5/EOS-DSLR-Mirrorless-Cameras/Car-LED-lights-are-pink-purple-not-red/m-p/270603#M59600

I like shooting square too and I purposely have a dedicated square format in B&W set on my Canon RP and Fuji X-E3 for that purpose.
I am also shooting square with my Rolleiflex T (that shutter button is located just in the right place for street photography).
I think it's easier to start with square and crop into rectangle than the other way around.

Traffic lights and tail lights are quite a bit brighter than they were in the time of the linked film photos. There are regulations on how bright tail lights must (collectively) be for all uses (turning, braking, headlights on, etc.), and I'm sure stop lights are even more regulated. Plus, new technology enables this with LEDs and improved reflectors.

In general, though, the issue is local contrast. Our eyes adapt locally to varying brightness and extreme contrast (yay, iris), while a sensor does not. Film has a response curve which "adapts" globally, flattening contrast, while digital files can be adapted the same way (out-of-camera JPEG engines, anyone?). The terrible "HDR tone mapping" craze of a decade ago is an extreme example of this in software and society, and I'm glad tail lights get blown out.

I live in Hong Kong, home of a famous skyline, and LEDs are a blight. We used to have much dimmer neon signs on top of our buildings, now all replaced by super-bright LEDs. This has shifted the magic moment of balance between ambient and artificial light a good 20 minutes or so towards sunset (approx. 10 minutes after sunset is now optimal), and 20 minutes is a long time in the tropics. I wish the owners would turn them way down in brightness, but that is absolutely never going to happen! As someone who believes in getting it done in camera, I find LEDs a total pain.

I did a project involving tail lights and traffic lights at high ISO after dark and your post prompted me to take a look at those 2012 photos.

Yep, the red tail lights and red stop lights show those blown out white center highlights, but so do the green lights and yellow lights.

I suspect that it's fairly straightforward - modern LED traffic signals and vehicle lights really are brighter than prior incandescent lights. That and the lower dynamic range at high ISO settings ( I used an early Olympus E-M5 for the project ) make for blown highlights, especially when those near-point-sources are so many stops brighter than the camera's overall exposure for the larger dark scene.

I can't answer the digital vs. film question, but I've been shooting quite a bit of night scenes with traffic lately (exclusively with digital), trying to get the light trails and all that, and I can make a few observations about the red taillights. Certainly when I catch taillights straight on, such as in your example, I usually get the result that you show: the pale yellow light (I think that's the LED bulb through the lens of the light) surrounded by a red border (I think that's the reflector in the taillight). Even at a slow shutter speed, the light trail often has the yellow streak surrounded by a red streak. However, when I'm shooting in the rain, the reflection of the taillight on the road is straight red without the yellow. I don't know why this is.

Secondly, when I'm shooting at dusk ("blue hour," you might say), the light trails seem to show no signs of the pale yellow light, just the straight red that my eyes expect to see. I wonder if this has something to do with white balance? Or perhaps the ISO setting? I don't think it's the shutter speed, since I'm usually shooting a consistent shutter speed to get the light trails (often 1/2 or 1/4 second) in each case; but at dusk I'm usually shooting around ISO 200 or 400, and when it is fully dark I'm usually at ISO 3200, and I almost always see the pale yellow light in the centre of the taillight (and through the centre of the light streak) at that high ISO.

I don't really know why this is but it is curious.

Saul Leiter had quite a few with red lights, although rarely in focus.

It's the camera profile: if you use a calibration tool like the X-Rite ColorChecker, the lights retain their colour even if they're brighter than the rest of the scene.

Anecdotally, my Pentax cameras all struggle with red in bright light.

I suspect the intensity that we appreciate for visibility overwhelms the sensor ... maybe a matter of contrast?

Hmmmm...
Digital certainly has trouble with what I would call blooming when there's an intense light source in the image area against less bright stuff. I presume part of it is interaction between lens flare and what's going on at the sensor. It's not always a negative. I have a large panoramic print in my office I made from stitched frames taken toward the setting sun in South Dakota. I was using an Eos-1Ds MkII, which was a 16.7 mp D-SLR released in 2004, the early days of digital. The sun of course blew out to white, and there is a warm artifactual halo extending vertically down into the landscape. But it looks quite dramatic, and most people like it.

The same issue with shooting Christmas LED lights I assume - the picture in no way captures what I'm seeing regardless of camera type.

I dare say a part of this will be the Bayer filter, with alternating lines of R-G-B-G, etc.; i.e., half the light at the sensor filters through green pixels, and a quarter through each of the red and blue pixels.
Another part will be how the metering works in response to that, as well as the manufacturer’s special processing sauces.
Thom’s books confirm the same risks and behaviour for Nikon cameras too. It’s easy to blow highlights in the red channel. One needs to watch the RGB histogram, not just the luminance histogram.

Well, I rather think the lights on cars are simply brighter than they used to be. Also more monochromatic (red LEDs tend to be); so this may mean the red channel in particular is MUCH brighter than it used to be.

Not sure about traffic signals; in particular I know they make an effort to avoid the green light being monochromatic (apparently it makes it easier for a wider range of people to tell which is which), not sure if this applies to all three colors or just the green. Nor does it seem to me that they're brighter today.

I think it's a d-range problem... I have the same problem with Christmas lights on my X-T3 (and others). I generally do the two-exposure thing, but it really takes a lot of minus compensation to get 'real' colors back into some of the lights.

This post reminded me of the traffic light photos by Lucas Zimmermann. He uses long exposures and fog to show all three colors at once.

Here is what I think is proper explanation of why bright red LEDs turn yellow. I have not read all the comments but certainly the featured ones do not explain this.

Normal sensor (so not foveon) consists of an array of photosensors which are all the same and which all are fairly broad-spectrum ('black and white' sensors). Over this is an array of filters which mean that each underlying photosensor sees only some ranges of wavelength. There are three sorts of filter we will call R, G, B in this array, probably with more G than R and B. Final colour image is then constructed by merging results from neighbouring photosensors with different filters.

Is worth noting here that astronomical (I am astronomy-related person, but theorist: I do not do experiments myself or make experimental apparatus) sensors work differently usually: they have array of photosensors but no filters glued to it. Rather they have several big filters they can switch in which cover the whole array. Thus they make several exposures if they wish to make a colour image. You can see this in the picture of HST's WFC3 here: https://hst-docs.stsci.edu/wfc3ihb/chapter-2-wfc3-instrument-description/2-1-optical-design-and-detectors – the thing labelled 'SOFA' is the filters. This is why some idiots go on about 'false colour' of astronomical images. I do not know if astronomical sensors designed to watch fast events like supernovae etc are different: I suspect they might use dichroic beam splitters with three sensors (what people call '3CCD' I think).

So now for photography sensors again there are some constraints. You want the sensor not to have gaps in its wavelength sensitivity: a sensor which is sensitive at 500nm (green) and at 700nm (red) but completely insensitive at 600nm would be useless. Due to nice maths it is not really possible to make 'brick wall' filters. These two things mean that the frequencies selected by the filters must overlap. Same is true for eyes and for the same reason and you can see this on any picture of spectral responses of rods & cones in eye. (Aside: is strong evolutionary reason for eye to not have holes in frequency response – if there were such a hole then predators would evolve to be bright only in that hole and would eat you.) This means that photosensor under green filter will see some red and some blue, red filter will see some green and very tiny blue and equivalent for blue filter.

Final thing is that sensors can saturate, and for digital sensor this is especially true and the saturation is rather 'hard': it will be fairly linear right until it hits the point of saturation at which point all values are alike to it.

So now we can see what happens. LED light is small, bright, nearly-monochromatic source in red wavelengths. To get adequate exposure in rest of picture inevitably means that photosensors under red filters will saturate in region of light. But this would mean only that was intense red in image: saturating a photosensor which sees red can only make red. But the filters overlap: the green-filtered photosensors also see some red. So they see the light a bit as well. So end result is that the red channel is saturated but the green channel is creeping up from its red sensitivity. Which is yellow.

Quite likely algorithm which takes data from sensor and turns it into image then falls about in interesting ways, further mangling colours.

Whether this happens for film I do not know. Clearly it can happen: filters must overlap for film too, channels can saturate for film too. But filter curves will be different and saturation will be softer and response curves in general will be different. Also probably are not so many film pictures of bright red LEDs so the traditional red lights would be filtered thermal source, which is very different thing.
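
The mechanism in the comment above (red channel saturating while the green channel creeps up through filter overlap) can be sketched numerically. The filter responses below are invented round numbers for illustration, not measured curves:

```python
import numpy as np

# Toy model of hard saturation plus filter overlap. Assume the red
# filter passes 100% of a red LED's light, the green filter leaks 10%,
# and the blue filter 1% -- invented numbers for illustration only.
RESPONSE = np.array([1.00, 0.10, 0.01])   # R, G, B response to red LED

def rendered_rgb(led_intensity, full_well=1.0):
    """Linear R, G, B values after hard clipping at the full-well level."""
    return np.clip(led_intensity * RESPONSE, 0.0, full_well) / full_well

print(rendered_rgb(0.5))    # roughly [0.5, 0.05, 0.005] -- pure red
print(rendered_rgb(20.0))   # roughly [1.0, 1.0, 0.2] -- red + green = yellow
```

At low intensity all three channels scale together and the light reads as red; at high intensity the red channel clips first, the leaked green catches up to the same clipped value, and the rendered colour drifts to yellow, just as the comment describes.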

About flickering: mains-powered lights flicker at twice the mains frequency, so 100 flickers per second in a 50 Hz country and 120 with 60 Hz mains. Some modern DSLRs have addressed this problem.
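
The arithmetic behind that is simple: an exposure shorter than one flicker period can catch the light partly or fully off. A quick sketch (this ignores PWM dimming and other non-mains flicker sources):

```python
# Flicker cycles per exposure: mains-powered lights flicker at twice
# the mains frequency (100 Hz on 50 Hz mains, 120 Hz on 60 Hz mains).
# Exposures covering less than one full cycle risk uneven capture.

def flicker_cycles(mains_hz, shutter_seconds):
    """Number of flicker cycles that fit inside one exposure."""
    return 2 * mains_hz * shutter_seconds

print(flicker_cycles(50, 1 / 100))   # 1.0 -- just covers one cycle
print(flicker_cycles(60, 1 / 500))   # about 0.24 -- likely uneven
```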

A blackbody radiator* (incandescent or sunlight) has a continuous color spectrum. An LED does not, as you can see from the attached illustration.

Consider also the CRI of a light source—https://www.waveformlighting.com/tech/what-is-cri-color-rendering-index

Many things matter when discussing light—it ain't a simple subject.

* http://dept.harpercollege.edu/chemistry/chm/100/dgodambe/thedisk/spec/blackbod.htm

A second comment, after reading the first batch of comments. I just had cataract surgery done on my right eye (left eye to follow in a month or so). As soon as I got my new right lens (which is purely transparent, without color) I noticed a color shift between my right and left eyes -- my left eye was actually yellowed with age, so before the operation, I was walking around with two slightly yellow filters in my eyes. We have nearly dead-white walls in my house, and for the first time, I can see that what my brain was telling me was white is actually an extremely light yellow. The change is harder to see with intense color, but it's there. This has emphasized for me how much we see with our brains.

In this list of comments, somebody remarks that the blacktop in one of the photos is over-exposed. Well, maybe. I've painted for years, and went through a brief period of landscape painting, attempting to replicate natural colors. Roads always bothered me in the paintings, and after a while, I figured out that while I was painting blacktop (tarmac) black, it usually wasn't, despite what my brain said. Sometimes, depending on the sun and the age of the tarmac, it could actually be quite a light gray. I found that holding up color swatch cards of various pure paints against the actual scene could radically change my perception of the colors out there.

The American folk artist Grandma Moses painted snow "white," and said one time, as a bit of folk wisdom, something like "I've never seen blue snow." That's completely wrong: if you look carefully at snow under a blue sky, you see blue snow all over the place. In any case, if you're a little older, and you're fussing about color, remember that you're looking through yellow filters.


Blog powered by Typepad
Member since 06/2007