Ever wondered why simple desaturation is so unsatisfying as a way to render a color original in monochrome? Tim Soret explains, in a Twitter thread. With nice example illustrations.
Tim is a 31-year-old French art director, cinematographer, and tech artist living in London.
Good stuff.
Mike
(Thanks to Phil Gyford and Tim Bradshaw)
ADDENDUM by Tim Bradshaw: There's a comment at the end of the Twitter thread saying that our eyes are most sensitive to blue, so none of it makes sense. I thought I would explain that, because it's right in an interesting way, but also wrong. As you can tell from the length of this comment, I haven't got enough to do right now. The physics below is right, but the details of how eyes work may not be (they are, obviously, constrained by physics).
First of all, blue light is relatively short-wavelength and hence relatively high-frequency, as wavelength λ and frequency ν (that's lambda and nu) are related by ν = c/λ. The three kinds of cone cells in our eyes have sensitivities which peak at around 560nm (red), 530nm (green) and 420nm (blue), corresponding to frequencies of 536THz, 566THz and 714THz respectively.
Well, a famous person, whom I'll call 'Albert' because that was his name, worked out in 1905 that light comes in little chunks we call photons, and that the energy of a photon is given by E = hν, where h is a constant (Planck's constant). Interestingly, Albert is much more famous for doing something else, but he won his Nobel prize for this work.
What that means is that the amount of energy per red, green and blue photon is in the ratio 1, 1.06 and 1.33 respectively: blue photons have about 33% more energy than red photons.
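(If you want to check that arithmetic, a few lines of Python will do it. This is just a sketch using the wavelengths quoted above and the standard values of c and h; nothing here is specific to eyes.)

    # Frequencies and photon-energy ratios for the cone sensitivity
    # peaks quoted above. c and h are the usual constants.
    c = 3.0e8          # speed of light, m/s
    h = 6.626e-34      # Planck's constant, J*s

    peaks_nm = {"red": 560, "green": 530, "blue": 420}

    freqs = {name: c / (nm * 1e-9) for name, nm in peaks_nm.items()}
    for name, nu in freqs.items():
        print(f"{name}: {nu / 1e12:.0f} THz, E = {h * nu:.2e} J")

    # E = h*nu, so the energy ratios simply follow the frequency ratios:
    for name, nu in freqs.items():
        print(f"{name}/red energy ratio: {nu / freqs['red']:.2f}")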
Now, I'm not going to pretend to understand how the cone cells in our eyes work in detail, but there are two things which anyone who has done B&W printing in a darkroom will immediately understand.
Firstly, safelights show that whether light gets detected at all depends on its photons having enough energy (having a short enough wavelength, in other words). Safelights are generally red: the photons they emit have energies low enough that the papers we use don't detect them at all. In particular, anyone who has worked in a darkroom in recent history will know that LED safelights (just buy the LED stoplights for cars) are a wonder: because they're LEDs they have a very non-thermal spectrum, which means they can be both really bright and emit no high-frequency photons at all. That makes working under them both easy (you can see) and safe (they don't fog paper).
(Incidentally, this effect—photons needing to have enough energy to be detected at all—was what Albert explained so brilliantly in 1905.)
Secondly, some detection mechanisms require the reception of more than one photon to trigger a detection. Film photographers know all about this as well, in fact: many (perhaps all) film emulsions require the reception of more than one photon to register anything at all (really, for the chemical change that is part of forming a grain to happen). And they also tend to 'forget' each photon after a while if they have not seen enough yet. So long as photons arrive fast enough, everything is well; the emulsion just sits and counts '1, 2, 3, 4: OK, seen enough now!'; but if photons are arriving too slowly the count goes '1, 2, er, 1, 2, er, 2, 3, er, 1, 2, er, 2, 3, 4: finally I've seen enough now!' and you need a lot more photons arriving. This is why reciprocity failure happens in low light: photons are arriving slowly enough that the emulsion has time to forget they've arrived.
(As far as I know Albert did not work on reciprocity failure.)
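(For the curious, the 'count and forget' story is easy to simulate. Here's a toy sketch in Python: the threshold of 4 and the one-unit memory are invented numbers, chosen only to show the shape of the effect, not to model any real emulsion.)

    import random

    def photons_needed(rate, threshold=4, memory=1.0, trials=2000):
        """Average number of photons that must arrive before `threshold`
        of them are 'remembered' at once. A photon is forgotten if the
        next one takes longer than `memory` time units to arrive."""
        total = 0
        for _ in range(trials):
            count = arrived = 0
            while count < threshold:
                gap = random.expovariate(rate)     # wait for next photon
                count = count + 1 if gap <= memory else 1  # forget if slow
                arrived += 1
            total += arrived
        return total / trials

    for rate in (10, 2, 1, 0.5):   # photons per time unit: bright -> dim
        print(f"rate {rate}: ~{photons_needed(rate):.0f} photons per detection")

In bright light the answer is essentially 4; as the rate drops, the counter spends most of its time forgetting and the number of photons needed per detection climbs steeply. That is reciprocity failure in miniature.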
OK, so now we're ready to explain the whole blue light thing.
First of all, in very dim light not many photons are arriving. The distribution of photon energies, and therefore of frequencies and therefore of colours, may be the same as it is in bright light (it may not be, but this is not why very dim light seems to be blue), but there are hugely fewer of them than there are in bright light: we're now definitely in the regime where photons are arriving at cone cells one at a time.
Now we can infer that red and green cone cells need to see multiple photons before they register they've seen anything at all, and they also have this 'forgetting' property (I think this is just that the chemical process which happens in the cell has to get reversed because things have to be 'reset' so they can count another photon: you would not like your eye to be like film and only able to see something once!). So in very dim light, the red and green cone cells simply start forgetting fast enough that they stop registering.
But the blue cone cells are detecting significantly more macho photons: they're a third again as macho as red photons. And these photons have enough energy that you need to see significantly fewer of them for an event to be registered.
So in very dim light, the blue photons are still triggering the blue cone cells, because they have more energy, while the red and green cells are not getting triggered as they need more of the lower energy photons, and they're forgetting fast enough.
So very dim light seems blue to us. Or, to be precise, the last colour we lose in very dim light is blue: very dim light has no colours to our eyes at all, as even the blue cone cells then have time to forget, and only the rods are seeing anything at all.
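(To see the whole argument in one place: carrying on with the toy model above, and reusing its photons_needed function, suppose, purely for illustration, that the number of photons a cone cell must accumulate scales inversely with the energy each photon carries. The base threshold of 6 is made up; only the energy ratios come from the physics above.)

    # Continues the sketch above; assumes photons_needed() is defined.
    energy_ratio = {"red": 1.00, "green": 1.06, "blue": 1.33}
    base_threshold = 6    # invented number for the red channel

    for name, ratio in energy_ratio.items():
        threshold = max(1, round(base_threshold / ratio))
        needed = photons_needed(rate=0.5, threshold=threshold)
        print(f"{name}: threshold {threshold}, ~{needed:.0f} arrivals per event")

At the same dim arrival rate, the blue channel's lower threshold means it needs far fewer arrivals per registered event than red or green, so it keeps registering after the other two have effectively gone silent.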
But this behaviour in dim light is nothing at all to do with the behaviour in bright light, where there are plenty of photons for all the cells to register and the whole forgetting thing stops being a problem. The relative brightness of colours we see in bright light has nothing to do with the relative brightness in very dim light, which is dominated by the ability of our eyes to register very tiny numbers of photons, and hence by the quantum mechanical properties of those photons. So, yes, our eyes are 'most sensitive' to blue in the sense that it's the last colour we lose as the light gets dim, but this is nothing to do with the properties of our eyes when the light is bright.
(As a final note: I believe that the dark-adapted human eye can detect—that is, we can notice—around 10 photons. Cats can notice individual photons. Cats, as always, are better than us.)
CRITIQUE by Frank Gorga: "I agree that Tim Bradshaw's physics is spot on as far as this retired biochemist can tell. His knowledge of the biochemistry of the eye is, however, as he admitted, lacking. The defect in his argument comes at the juncture of switching from physics to biochemistry. The fact is that the primary photoreception event in cells that respond to different wavelengths of light is exactly the same. The same compound (retinal, a form of Vitamin A) absorbs a photon and undergoes a light-induced change in its shape (a photoisomerization) to initiate the process. The retinal is bound to a protein (opsin) which is slightly different in each of the three types of cone cells. The slightly different forms provide different environments for the retinal and thus 'tune' the absorption of a photon to the three different color sensitivities. The rest of the process by which that primary event (the absorption of a photon by retinal) is converted to a neuronal (electrical) signal is the same in all three types of cone cells.
"What happens next in terms of how the brain processes these signals into what we perceive as color is nowhere close to being fully understood. However, it is clear from work beginning in the middle of the last century by many folks, most notably Edwin Land (yes, of Polaroid) that color is mainly constructed in the brain and not by the retina.
"One last correction...another commenter asserts that 'the photosensitive pigments in animals' eyes are close chemical relatives of chlorophylls and other phytochemicals in plants.' This is simply not true. The structure of retinal is not related, at all, to the structure of chlorophyll, except maybe that they are both organic molecules."
Original contents copyright 2020 by Michael C. Johnston and/or the bylined author. All Rights Reserved. Links in this post may be to our affiliates; sales through affiliate links may benefit this site.
Featured Comments from:
Craig: "Well, sure. This is why lighting for B&W is very different from lighting for color. In color, you can trust something red and something blue to be visually separated. In B&W, you have to be careful about the values, and you use lighting and filters to ensure things of similar value are separated as you want them to be.
"This was at the heart of Roger Ebert's criticism of colorizing B&W movies back in the '80s. The process inherently ruins B&W movies that had been carefully lit to get exactly the effect that the creators wanted. Simple desaturation is kind of like colorizing in reverse. You take an image that you originally saw in color (because that's how our eyes work), composed with that in mind, and you reduce the colors to grayscale according to a simple algorithm that has no artistic understanding of the image you're working with. This is unlikely to produce a good result.
"When I want a B&W image from a color digital image, I carefully adjust the color sliders in Lightroom's B&W mode to get just the effect I want. It's not really a perfect solution (the ideal solution would involve lighting, at least if you're in a studio, and filters, which people don't often seem to use with digital) but it works acceptably most of the time. The core point is that you need to think about how you want the image to look in B&W, and then you need the tools and the skills with the tools to reduce your color digital image to B&W in a way that achieves the look you intended."
Mike replies: If anyone were to colorize my B&W photographs—I don't see why they'd care, but never mind that—I would consider it vandalism!
Christopher Mark Perez: "Hah! Brilliant, this. As an aside, I have very much enjoyed your series on digital black and white photography. Following this latest post I took a group of color wheels gathered from around the 'net (including pastels and subtle color variations—not just primary colors) and did two things. First, using the Gimp I added a black layer over the color wheels and set its blend mode to LCh (Luminosity Channel). Perfect! Second, using RawTherapee I enabled the 'Black and White' module, and selected 'Method—Luminosity.' Perfect! Both are Open Source Software programs and they seem to implement the color math correctly."
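(If you're curious what a 'luminosity' conversion does that simple desaturation doesn't, here's a minimal sketch. The Rec. 709 luma weights are standard; whether the Gimp or RawTherapee use exactly these weights, or how they handle gamma, I'm not claiming—the point is only the shape of the difference.)

    # Simple desaturation (channel average) vs. a luminance-weighted
    # conversion using the standard Rec. 709 weights (gamma ignored).
    REC709 = (0.2126, 0.7152, 0.0722)   # weights for R, G, B

    def desaturate(r, g, b):
        return (r + g + b) / 3

    def luminance(r, g, b):
        return sum(w * ch for w, ch in zip(REC709, (r, g, b)))

    for name, rgb in [("red", (1, 0, 0)), ("green", (0, 1, 0)), ("blue", (0, 0, 1))]:
        print(f"{name}: average = {desaturate(*rgb):.3f}, luminance = {luminance(*rgb):.3f}")

Simple desaturation maps pure red, green, and blue to the same grey, even though to the eye green is far brighter than blue; a luminance-weighted conversion keeps those values apart, which is why Christopher's color wheels survive the test.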
Patrick Murphy: "Seriously? The best way for Tim Soret to get this information out was with a series of tweets with 11 (!) GIFs that have to be individually clicked on?"
Patrick Murphy: "For example, couldn't this information be put on a blog (like Mike's!) as a single article?"
Patrick Murphy: "And instead of GIFs which fade between two values, just have the before-and-after picture? Also makes it easier because we can visually compare side by side as fast as we want, without waiting for the GIF cycle."
Patrick Murphy: "Or put it at a website, or even (gasp) Facebook—someplace where people can see the thread—sorry—of your argument all at once, instead of chopped up pieces."
Patrick Murphy: "I suppose when all you have is a hammer, everything gets pounded. And when all you have is Twitter, thoughts get chopped up into 19 tiny bits that start-stop, start-stop, start-stop, start-stop like trying to drive down a street where you hit Every Damn Light One By One By One."
Mike replies: I see what you did there.
Moose: "He's right, of course. But what a convoluted way to go about it. Photoshop provides simple desaturation, but also provides a very powerful Color-to-B&W conversion tool, capable of everything from wild effects to great subtlety. Repeating myself from 4/15: 'In Image > Adjustments > Black & White (Alt-Shift-Ctrl-B), Photoshop provides enormous control over the conversion. Once you get a conversion you like, you can save it.'"