Ever wondered why simple desaturation is so unsatisfying as a way to render a color original in monochrome? Tim Soret explains, in a Twitter thread. With nice example illustrations.
Tim is a 31-year-old French art director, cinematographer, and tech artist living in London.
Good stuff.
Mike
(Thanks to Phil Gyford and Tim Bradshaw)
ADDENDUM by Tim Bradshaw: There's a comment at the end of the Twitter thread that our eyes are most sensitive to blue so none of it makes sense. I thought I would explain that, because it's right in an interesting way, but also wrong. As you can tell from the length of this comment, I haven't got enough to do right now. The physics below is right, but the details of how eyes work may not be (but they are, obviously, constrained by physics).
First of all, blue light is relatively short-wavelength and hence relatively high-frequency, as wavelength, λ, and frequency, ν (that's lambda and nu), are related by ν = c/λ. The three kinds of cone cells in our eyes have sensitivities which peak at around 560nm (red), 530nm (green) and 420nm (blue), corresponding to frequencies of 536THz, 566THz and 714THz respectively.
Well, a famous person, whom I'll call 'Albert' because that was his name, worked out in 1905 that light comes in little chunks we call photons, and that the energy of a photon is given by E = hν, where h is a constant, Planck's constant. Interestingly Albert is much more famous for doing something else, but he won his Nobel prize for this work.
What that means is that the amount of energy per red, green and blue photon is in the ratio 1, 1.06 and 1.33 respectively: blue photons have about 33% more energy than red photons.
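The arithmetic above is easy to verify. A few lines of Python reproduce both the THz figures and the energy ratios (using the rounded c = 3×10^8 m/s that the quoted frequencies imply; the 560/530/420nm peaks are from the paragraph above):

```python
# Check the numbers above: nu = c / lambda and E = h * nu
# for the three cone-cell sensitivity peaks.
c = 3.0e8          # speed of light, m/s (rounded)
h = 6.626e-34      # Planck's constant, J*s

peaks_nm = {"red": 560, "green": 530, "blue": 420}

for name, wl in peaks_nm.items():
    nu = c / (wl * 1e-9)   # frequency in Hz
    print(f"{name:5s} {wl} nm -> {nu / 1e12:.0f} THz, E = {h * nu:.2e} J")

# Energy ratios relative to a red photon (h and c cancel, so this
# is just 560 divided by the wavelength):
e_red = h * c / 560e-9
for name, wl in peaks_nm.items():
    print(f"{name:5s} energy ratio: {h * c / (wl * 1e-9) / e_red:.2f}")
```

Running it prints 536, 566 and 714THz and the ratios 1.00, 1.06 and 1.33, exactly as stated.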
Now, I'm not going to pretend to understand how the cone cells in our eyes work in detail, but there are two things which anyone who has done B&W printing in a darkroom will now immediately understand.
Firstly, safelights mean that the detection of light at all depends on its photons having enough energy (a short enough wavelength, in other words). Safelights are generally red: the photons they emit have energies low enough that the papers we use don't detect them at all. In particular, anyone who has worked in a darkroom in recent history will know that LED safelights (just buy the LED stoplights for cars) are a wonder: because they're LEDs they have a very non-thermal spectrum, which means they can be both really bright and also have no high-frequency photons at all. That makes working under them both easy (you can see) and safe (they don't fog paper at all).
(Incidentally, this effect—photons needing to have enough energy to be detected at all—was what Albert explained so brilliantly in 1905.)
Secondly, some detection mechanisms require the reception of more than one photon to trigger a detection. Film photographers know all about this as well, in fact: many (perhaps all) film emulsions require the reception of more than one photon to register anything at all (really, for the chemical change that is part of forming a grain to happen). And they also tend to 'forget' each photon after a while if they have not seen enough yet. So long as photons arrive fast enough, everything is well; the emulsion just sits and counts '1, 2, 3, 4: OK, seen enough now!'; but if photons are arriving too slowly the count goes '1, 2, er, 1, 2, er, 2, 3, er, 1, 2, er, 2, 3, 4: finally I've seen enough now!' and you need a lot more photons arriving. This is why reciprocity failure happens in low light: photons are arriving slowly enough that the emulsion has time to forget they've arrived.
(As far as I know Albert did not work on reciprocity failure.)
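The counting-and-forgetting behaviour is easy to simulate. The sketch below is a toy model, not real emulsion data: the threshold of four photons and the "memory" time constant are made-up illustrative numbers, and real forgetting is chemical rather than this simple coin-flip survival. But it shows the shape of the effect:

```python
import math
import random

def photons_to_trigger(gap, threshold=4, tau=5.0, rng=None):
    """Photons arrive every `gap` time units. Each already-counted photon
    is independently 'forgotten' during a gap with probability
    1 - exp(-gap/tau). Return how many photons must arrive before
    `threshold` of them are remembered at once."""
    rng = rng or random.Random(0)
    p_survive = math.exp(-gap / tau)
    remembered = arrived = 0
    while True:
        # Each remembered photon survives the inter-arrival gap with
        # probability p_survive...
        remembered = sum(rng.random() < p_survive for _ in range(remembered))
        # ...then the next photon arrives and is counted.
        remembered += 1
        arrived += 1
        if remembered >= threshold:
            return arrived

rng = random.Random(42)
fast = sum(photons_to_trigger(0.1, rng=rng) for _ in range(100)) / 100
slow = sum(photons_to_trigger(5.0, rng=rng) for _ in range(100)) / 100
print(f"bright light: ~{fast:.0f} photons per trigger; dim light: ~{slow:.0f}")
```

With fast arrivals the counter goes '1, 2, 3, 4' almost every time; with slow arrivals it keeps slipping back, and many times more photons are needed per detection. That is the shape of reciprocity failure.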
OK, so now we're ready to explain the whole blue light thing.
First of all, in very dim light not many photons are arriving. The energy, and therefore frequency and therefore colour, distribution of the photons may be the same as it is in bright light (it may not be, but this is not why very dim light seems to be blue), but there are vastly fewer of them than there are in bright light: we're now definitely in the regime where photons are arriving at cone cells one at a time.
Now we can infer that red and green cone cells need to see multiple photons before they register they've seen anything at all, and they also have this 'forgetting' property (I think this is just that the chemical process which happens in the cell has to get reversed because things have to be 'reset' so they can count another photon: you would not like your eye to be like film and only able to see something once!). So in very dim light, the red and green cone cells simply start forgetting fast enough that they stop registering.
But the blue cone cells are detecting significantly more macho photons: they're a third again as macho as red photons. And these photons have enough energy that you need to see significantly fewer of them for an event to be registered.
So in very dim light, the blue photons are still triggering the blue cone cells, because they have more energy, while the red and green cells are not getting triggered as they need more of the lower energy photons, and they're forgetting fast enough.
So very dim light seems blue to us. Or, to be precise, the last colour we lose in very dim light is blue: very dim light has no colours to our eyes at all, as even the blue cone cells then have time to forget, and only the rods are seeing anything at all.
But this behaviour in dim light is nothing at all to do with the behaviour in bright light, where there are plenty of photons for all the cells to register and the whole forgetting thing stops being a problem. The relative brightness of colours we see in bright light has nothing to do with the relative brightness in very dim light, which is dominated by the ability of our eyes to register very tiny numbers of photons, and hence by the quantum mechanical properties of those photons. So, yes, our eyes are 'most sensitive' to blue in the sense that it's the last colour we lose as the light gets dim, but this is nothing to do with the properties of our eyes when the light is bright.
(As a final note: I believe that the dark-adapted human eye can detect—that is, we can notice—around 10 photons. Cats can notice individual photons. Cats, as always, are better than us.)
CRITIQUE by Frank Gorga: "I agree that Tim Bradshaw's physics is spot on as far as this retired biochemist can tell. His knowledge of the biochemistry of the eye is, however, as he admitted, lacking. The defect in his argument is made at the juncture of switching from physics to biochemistry. The fact is that the primary photoreception event in cells that respond to different wavelengths of light is exactly the same. The same compound (retinal, a form of Vitamin A) absorbs a photon and undergoes a light-induced change in its shape (a photoisomerization) to initiate the process. The retinal is bound to a protein (opsin) which is slightly different in each of the three types of cone cells. The slightly different forms provide different environments for the retinal and thus 'tune' the absorption of a photon to the three different color sensitivities. The rest of the process by which that primary event (the absorption of a photon by retinal) is converted to a neuronal (electrical) signal is the same in all three types of cone cells.
"What happens next in terms of how the brain processes these signals into what we perceive as color is nowhere close to being fully understood. However, it is clear from work beginning in the middle of the last century by many folks, most notably Edwin Land (yes, of Polaroid) that color is mainly constructed in the brain and not by the retina.
"One last correction...another commenter asserts that 'the photosensitive pigments in animals' eyes are close chemical relatives of chlorophylls and other phytochemicals in plants.' This is simply not true. The structure of retinal is not related, at all, to the structure of chlorophyll, except maybe that they are both organic molecules."
Original contents copyright 2020 by Michael C. Johnston and/or the bylined author. All Rights Reserved. Links in this post may be to our affiliates; sales through affiliate links may benefit this site.
Featured Comments from:
Craig: "Well, sure. This is why lighting for B&W is very different from lighting for color. In color, you can trust something red and something blue to be visually separated. In B&W, you have to be careful about the values, and you use lighting and filters to ensure things of similar value are separated as you want them to be.
"This was at the heart of Roger Ebert's criticism of colorizing B&W movies back in the '80s. The process inherently ruins B&W movies that had been carefully lit to get exactly the effect that the creators wanted. Simple desaturation is kind of like colorizing in reverse. You take an image that you originally saw in color (because that's how our eyes work), composed with that in mind, and you reduce the colors to grayscale according to a simple algorithm that has no artistic understanding of the image you're working with. This is unlikely to produce a good result.
"When I want a B&W image from a color digital image, I carefully adjust the color sliders in Lightroom's B&W mode to get just the effect I want. It's not really a perfect solution (the ideal solution would involve lighting, at least if you're in a studio, and filters, which people don't often seem to use with digital) but it works acceptably most of the time. The core point is that you need to think about how you want the image to look in B&W, and then you need the tools and the skills with the tools to reduce your color digital image to B&W in a way that achieves the look you intended."
Mike replies: If anyone were to colorize my B&W photographs—I don't see why they'd care, but never mind that—I would consider it vandalism!
Christopher Mark Perez: "Hah! Brilliant, this. As an aside, I have very much enjoyed your series on digital black and white photography. Following this latest post I took a group of color wheels gathered from around the 'net (including pastels and subtle color variations—not just prime colors) and did two things. First, using the Gimp I added a black layer over the color wheels and set its blend mode to LCh (Luminosity Channel). Perfect! Second, using RawTherapee I enabled the 'Black and White' module, and selected 'Method—Luminosity.' Perfect! Both are Open Source Software programs and they seem to implement the color math correctly."
Patrick Murphy: "Seriously? The best way for Tim Soret to get this information out was with a series of tweets with 11 (!) GIFs that have to be individually clicked on?"
Patrick Murphy: "For example, couldn't this information be put on a blog (like Mike's!) as a single article?"
Patrick Murphy: "And instead of GIFs which fade between two values, just have the before-and-after picture? Also makes it easier because we can visually compare side by side as fast as we want, without waiting for the GIF cycle."
Patrick Murphy: "Or put it at a website, or even (gasp) Facebook—someplace where people can see the thread—sorry—of your argument all at once, instead of chopped up pieces."
Patrick Murphy: "I suppose when all you have is a hammer, everything gets pounded. And when all you have is Twitter, thoughts get chopped up into 19 tiny bits that start-stop, start-stop, start-stop, start-stop like trying to drive down a street where you hit Every Damn Light One By One By One."
Mike replies: I see what you did there.
Moose: "He's right, of course. But what a convoluted way to go about it. Photoshop provides simple desaturation, but also provides a very powerful Color-to-B&W conversion tool, capable of everything from wild effects to great subtlety. Repeating myself from 4/15: 'In Image > Adjustments > Black & White (Alt-Shift-Ctrl-B), Photoshop provides enormous control over the conversion. Once you get a conversion you like, you can save it.'"
Excellent post Mike! Thanks for sharing the link, insightful.
Posted by: Mark Kinsman | Monday, 20 April 2020 at 05:48 PM
Aperture had the most wonderful channel mixer, which you could combine with slight amounts of sepia to shape tones to look just right, in a wide variety of styles.
I still use it. It is simple and powerful.
Posted by: Michael J. Perini | Monday, 20 April 2020 at 07:04 PM
Great Post, TOP rules
Posted by: louis mccullagh | Tuesday, 21 April 2020 at 05:16 AM
I've found this set of channel mixer "recipes" for converting color photos to b&w film equivalents to be helpful: http://www.tjansson.dk/2012/11/photography-channel-mixer-rgb-values-equivalent-to-traditional-bw-film/
Posted by: rp | Tuesday, 21 April 2020 at 08:01 AM
Tim Bradshaw's explanation is superb. Thanks! I may even have heard of that Albert guy, but I thought he was a clerk in a patent office. I didn't realize he also dabbled in physics. ;)
I do everything in Lightroom, and over the years have tried every way of converting to grayscale. I never liked the desaturation approach.
Nowadays I keep it simple: apply Adobe Monochrome camera profile and fine tune the colour channels.
Adobe's Monochrome camera profile (introduced in 2018 I believe) was a major step up compared to the approach used in earlier versions of Lightroom. According to Julieanne Kost, "This profile slightly shifts colors as they are converted to grayscale – brightening the warmer colors and darkening the cooler colors. It also adds a small amount of contrast but allows lots of headroom for editing." It certainly works for me.
Posted by: Rob de Loe | Tuesday, 21 April 2020 at 08:57 AM
It took me a good hour of fiddling and looking at his overly quick instructional gif to figure out what he was doing in Photoshop (my skills are just above beginner in that program, so I kind of stumble around). But after I figured out how to make a solid color black layer, the results looked quite natural and pleasing with one portrait, with a "just right" look to the skin tone. I'll keep experimenting.
Posted by: John Krumm | Tuesday, 21 April 2020 at 09:55 AM
Addendum to Tim's addendum:
Tim mentioned dark adaptation in passing, but didn't go into any detail. The shift to blue sensitivity is a result of the physiology of dark adaptation, involving the two primary photoreactive pigments in retinal cells. I hadn't thought of the photon energy threshold before (embarrassing, as I'm also an astronomer by hobby and degree), but that's an important part of the relative spectral sensitivity of the various retinal pigments. Dark adaptation increases the eyes' relative sensitivity to blue light as a result of switching from predominantly using green-reactive rhodopsin in the color-sensitive (and bright-light-dependent) cone cells to preferring more blue-reactive pigments in the more light-sensitive but color-insensitive rod cells. This is known as the Purkinje effect (https://en.wikipedia.org/wiki/Purkinje_effect). Apparently it involves something akin to the binning that some digital cameras can perform in dim light.
As a further addendum, the photosensitive pigments in animals' eyes are close chemical relatives of chlorophylls and other phytochemicals in plants.
Posted by: Peter Dove | Tuesday, 21 April 2020 at 10:10 AM
Here is a simple conversion using the Twitter method (black layer in color mode). I kind of like it, a sort of gentle conversion that maintains some fealty. On this one I only hit "auto exposure" in camera raw, something I normally don't do, then opened in Photoshop. You can click through to larger versions.
Posted by: John Krumm | Tuesday, 21 April 2020 at 10:21 AM
One of the first hard lessons I learned when going digital was that B&W was going to be difficult. Simply turning down the color produced flat, soulless pictures.
In the years since, I have tried several B&W conversion plug-ins, and my favorite remains the first one I bought over a decade ago: B&W Styler from The Plugin Site.
It isn't a runaway: the conversion tool in PS is excellent, and the other third-party converters I have tried all seem pretty good, but B&W Styler is the one I know best and I am happy with what it does for me.
But what works for me may not be best for you.
A friend, and one of the best photographers I know, is Kent Sievers, a Fuji shooter. He told me he likes the Acros preset with a little touch-up in both shadows and highlights. His B&W looks like Tri-X done right.
Also on the subject of B&W, Mrs Plews and I are using part of our confinement to take advantage of TCM and some streaming services to do a Film Noir marathon and it's wonderful.
We are hitting some of the less well known films of this genre and we are seeing some amazing photography. Here are a few that have really made me smile: Cry of the City, Naked City, He Walked by Night, Out of the Past and A Cry in the Night.
I don't suppose it is exactly Film Noir, but Kurosawa's masterpiece High and Low also has glorious photography. Panorama shooters can learn from this film too.
It is in widescreen and if you watch it you may notice that Kurosawa makes sparing use of close ups. Instead he uses lighting and composition to direct your eye. It is also interesting to watch how he makes use of the entire frame. Often important elements of the story take place on the very edge of the image. Also instructive is his choice of focal lengths in a widescreen film. For the most part the lenses are moderate to slightly long.
I think the genius of this film is that it feels like three films in one to me. It starts out with a character study set almost entirely inside a house then switches to a great police procedural and tucked away in all of this is a scene in a drug alley that is the best short zombie movie ever made.
I wonder how many anime directors have that sequence committed to memory. Good stuff.
But I digress.
In digital I try to limit my fiddling with my pictures to what I could do (or hoped I could do) in the darkroom.
As it turns out, getting an OK color print is much easier in digital than in the darkroom, but black and white is proving to be quite a challenge. Go figure.
Posted by: Mike Plews | Tuesday, 21 April 2020 at 10:31 AM
In digital, I set the camera to B&W to visualize, and RAW to have later if I need it. In film, I sometimes check the scene with my iPhone set to B&W, and maybe even take a reference shot, since I may be using the Lightmeter App anyway to get the exposure range.
Posted by: Bob G. | Tuesday, 21 April 2020 at 10:52 AM
I solved the color to b&w conversion problem via the purchase of a Leica M262 Monochrom.
Posted by: Roger | Tuesday, 21 April 2020 at 12:01 PM
I set the viewfinder of my camera (Olympus OMD) to "monotone".
My results are way better this way; it helps the composition a lot.
Posted by: Gibeault Marc | Tuesday, 21 April 2020 at 12:53 PM
Good grief Charlie Brown, doesn't anyone understand how to use LAB Color Space to obtain a "perfect" BW conversion?
http://lifesquared.squarespace.com/blog/http/lifesquaredsquarespacecom/blog-page-url/2020/4/14/new-post-title-4
Posted by: Mark Hobson | Tuesday, 21 April 2020 at 01:44 PM
Thanks, Frank! That's what I get for typing too much. I was mis-recalling from long-ago biology class about similar molecular structures in chlorophyll and oxygen-ferrying molecules like hemoglobin and hemocyanin. Mea culpa.
Posted by: Peter Dove | Tuesday, 21 April 2020 at 04:59 PM
This is how you do this in ImageMagick:
convert color.jpeg -colorspace LAB -channel R -separate bw.jpg
This converts the image to LAB space and then uses the LAB lightness channel to create the black and white image.
https://imagemagick.org/
Posted by: Freddy S. | Tuesday, 21 April 2020 at 05:25 PM
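[Ed. note: for the curious, here is roughly what extracting the LAB lightness channel amounts to for a single sRGB pixel. This is a pure-Python sketch of the standard sRGB → relative luminance → CIE L* formulas; ImageMagick's own implementation differs in details such as scaling and white-point handling.]

```python
def srgb_to_lightness(r, g, b):
    """Map one 8-bit sRGB pixel to CIE L* (0-100), i.e. the 'L'
    channel that the ImageMagick command above separates out."""
    def linearize(c):
        c /= 255.0
        # Undo the sRGB transfer curve.
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    # Relative luminance Y, using the Rec. 709 primaries' weights.
    y = 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

    # Y -> L*: the cube-root curve makes lightness roughly perceptually uniform.
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

print(srgb_to_lightness(255, 255, 255))  # white -> 100.0
print(srgb_to_lightness(255, 0, 0))      # pure red
print(srgb_to_lightness(0, 0, 255))      # pure blue, noticeably darker than red
```

Note that pure blue comes out much darker than pure red or green, which is exactly the point of using lightness rather than a plain channel average.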
While not disagreeing with Tim Soret's point, I can't help but feel that he is taking aim at a straw man or woman. As a number of other commenters have noted, it's certainly the case that taking the luminosity channel in Photoshop will give you a B&W conversion with an apparent brightness and contrast that fairly closely match that of the source colour image. Which is what you're doing by using his 21xR + 72xG + 7xB weights. I say "fairly closely", because I don't always like what that approach does to skin tones, which can end up a bit dark.
But B&W is an abstraction, and surely the point is to get the most pleasing B&W conversion / abstraction. Desaturate may well give it sometimes. The luminosity channel may do so far more often. But there is a close-to-infinite number of ways to convert to B&W in Photoshop. You can't try them all on every image, but it's worth exploring some of them to get a sense of what approach suits what type of image. I think that desaturate vs luminosity is a false dichotomy that conceals far more (ways of converting to B&W) than it reveals.
Which, given your current voyage of discovery into Capture One, raises the related question of which raw converter is best to convert colour to B&W, if you convert at that stage rather than in PS? Lightroom and C1 have similar conversion tools featuring R,G,B,C,M,Y sliders, but LR adds orange and purple. You'd think that this would be better because it allows more precise targeting of colours to tweak in the conversion. Am I the only one who thinks that these eight sliders are too narrow with not enough overlap? I often get artefacts at colour boundaries, especially under artificial light. I think the C1 B&W tool is better for this reason. But even so, PS conversions allow more flexibility and more options.
Posted by: Brian Stewart | Tuesday, 21 April 2020 at 11:44 PM
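[Ed. note: Brian's point about the 21/72/7 weights can be made concrete. The toy sketch below is illustrative only; real converters work on linearized values over whole images, not single gamma-encoded pixels. It compares naive desaturation, which averages the channels, against the luminosity weighting from the thread:]

```python
def desaturate(r, g, b):
    """Naive desaturation: plain average of the three channels."""
    return (r + g + b) / 3

def luminosity(r, g, b):
    """Perceptual weighting (~21% R, 72% G, 7% B, per the thread)."""
    return 0.21 * r + 0.72 * g + 0.07 * b

for name, px in [("red", (255, 0, 0)), ("green", (0, 255, 0)), ("blue", (0, 0, 255))]:
    print(f"{name:5s}: desaturate={desaturate(*px):6.1f}  luminosity={luminosity(*px):6.1f}")
```

Desaturation maps all three pure primaries to the identical gray (85), destroying their separation; the luminosity weighting keeps green much brighter than red, and red much brighter than blue, which is closer to how the eye sees them.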
Absolutely fascinating, Mike. Much appreciated, Mike, the other Mike, Tim, Frank and everyone.
Posted by: Bob Johnston | Wednesday, 22 April 2020 at 05:51 AM