
Thursday, 13 October 2011

Comments

"Black-and-white digital isn't like black-and-white film; it's like slides. You have to expose for the highlights. That's what I do. Yes, it causes a modest increase in overall noise, but that's nothing I can't largely compensate for in post-exposure processing, if it's actually objectionable (and as often as not, it's not). This whole 'expose as far right as possible' business is crap." (Ctein)

Thank you, Ctein, thank you, thank you, thank you, thank you . . .

Before I got to Ctein's final comment, I was thinking that camera makers do manufacture versions that don't have IR filters for specialist/technical markets.

From a marketing point of view, all that's necessary in the first instance is to choose some relatively low-priced enthusiast model, perhaps something at the $1,000 mark, and release exactly the same camera NF -- no filters. Give it to photographers without bells and whistles and say, "here, try this." They can sell external IR-blocking filters separately, and they get to sell two bodies to some dedicated photographers where otherwise they would sell only one. The first manufacturer to do this might even persuade a few people to switch lens systems, which is such a marketing grail that Nikon or Canon should do it tomorrow.

It's a worry that in-camera image processing firmware might be significantly compromised by there being no color information. There's an easy fix for part of that, which is to make the NF camera RAW only, and let the Photoshop market provide the first generation of software tools. I wonder, however, whether parts of the focus algorithms depend on color? What else might be color dependent in in-camera firmware, and how easy would it be to strip it out?

I can't help thinking that the Ricoh GXR system would lend itself perfectly to this.

A monochrome A12 unit with a 24mm lens would sell like hotcakes (relatively speaking) to the 'street shooter' crowd (though I'd prefer a 30mm personally), and you can pack another lens/sensor unit for those times you do need/want color...

The "workaround" discussion is fascinating. I've noticed that no top-flight photographer I've ever talked gear with has ever owned a perfect piece of equipment; what they all seem to have is a collection of equipment they know intimately well, and know how to get great results from in certain circumstances that they find useful. This also seems to apply to those I haven't been able to interact with, but who have written about their craft (a much larger set :-) ).

I've still got the filters I used for B&W; though come to think of it, not in sizes to fit my current lenses usefully (mostly 49-52-58mm in the old boxes, and my modern lenses go to 72mm a lot). So I could just go back to them.

Let's not forget that the really seriously high-quality sensors in the world are nearly all monochrome; the satellites and spacecraft and observatories that contain them apply external filters and take multiple exposures if they need to capture color information. They're too expensive for my blood, and often require active cooling and other complicated support infrastructure, and tend to be rather large for the coverage of the lenses I own. But in terms of physics and bleeding-edge R&D, that's where the limits are being investigated and extended, and what they learn there can at least potentially be implemented in sensors of smaller sizes, after a while.

Ctein says "This whole 'expose as far right as possible' business is crap. Noise isn't a dominant problem in digital photography any more; blown highlights still are."

Maybe it's "crap" for noise these days but not for avoiding posterisation in pushed shadow areas. Give me a blown highlight over a posterised shadow area any day (I like to avoid both, if possible).

Speaking of exposure and curves, would you mind doing a column on your approach or are there currently some good books that explain the technique you are describing? I will often tinker with the exposure while I'm shooting and then try my best to make the image look good to my eye in Photoshop and ACR, but I don't have a formula for doing so. Any information would be appreciated and thanks in advance!

In one of my favourite Kirk Tuck blog entries-- http://visualsciencelab.blogspot.com/2011/09/old-cameras-new-cameras-old-pictures.html --Kirk wrote: "the intention for most [B&W] film projects was to hit paper as a final destination." As is evident in Kirk's post and particularly in the comments from his wife Belinda, Kirk was referring to artistic redaction. I'm fairly confident that you propose a B&W digital camera primarily as an artist's tool. If that's the case, then B&W digital printing also becomes an essential part of the discussion. I'm wondering whether you've adequately considered just how demanding digital B&W printing can be and whether that difficulty might narrow market acceptance for a digital B&W camera? (I've put a lot of time and effort into inkjet B&W.)

My other thought is that most digital cameras fall into the category of "consumer electronics," with the emphasis on consumer. I think a B&W camera will have to be targeted at a very focused audience, and I suspect that audience will demand the best in terms of viewing and handling and will very much prefer to strip out all the useless crap that makes so many cameras these days such appallingly unpleasant tools. I think that if the form/function aspects of this camera aren't well thought out, it won't have a chance in the market.

Mike, I guess I do not see how a dedicated black and white digital camera is going to be an improvement for you personally.

A black and white digital camera will still have the same response as a colour digital camera; the look of the output will still be the same. It just won't have colour information, and it will still require the same 'workarounds.'

Am I missing something?

Well, it's nice to read that Ctein agrees with me. I've always felt that ETTR (Expose To The Right) is, well, wrong, and for obvious reasons.

One thing I've learned from doing HDR photography is the advantage of using the auto-bracketing feature in my D300, even when taking non-HDR photos. Jay Maisel does this all the time, and if it's good enough for Jay, it's good enough for me.

Thanks for the discussion. Read it with interest.

First, I've always thought that ETTR was about getting more details as there are more of them in the right part of the histogram than in the shadows.

Secondly, there has already been at least one camera dedicated to B&W: the Kodak DCS 760m. How was it in its day, and how could it benefit from today's technology? Strange that it was not mentioned. Or maybe not.

The black-and-white-only camera idea is starting to interest me. I can see colleges like the one where I teach buying them for photo classes. If only someone would make one.

I can't imagine Canon or Nikon jumping on the idea, but Sony or Pentax might give the concept a hard look. Besides a few sales to colleges for photo courses there might be a few professionals or serious amateurs who currently shoot Nikon/Canon who'd buy a dedicated B&W camera to experiment with. Then they might be persuaded to buy an extra lens or two and get to know what the "other" camera makers have to offer. In other words, such a camera could be a "foot in the door" for Sony or Pentax.

Pentax made their very successful white version of the K-X available when they realized that actually producing the camera wouldn't cost any more than doing the market research to see if they should produce the camera. Depending on the complexity of actually making the thing, a similar equation might apply to a dedicated B&W camera. I hope someone gives it a try.

"I think that if the form/function aspects of this camera aren't well thought out, it won't have a chance in the market."

latent_image,
I'm essentially in agreement about that.

Mike

Andre,
We've mentioned it many times. Several commenters mentioned it in the previous posts. We can't mention everything in every post....

Mike

Since this post is (kind of) speculative: I'm pretty sure Fuji could build some interchangeable filters into that 23mm lens on the X100, since they managed a shutter and an ND filter. Maybe. This solution just wouldn't make sense for system cameras (with interchangeable lenses).

As far as marketability goes, I get the impression that compact digicams are a bit like books or movies these days - you know: lose money on 19 out of every 20 models you put out and make up the loss on the one hot seller. Given such a low success rate, I see absolutely no downside to any of the camera companies putting out a B&W sensor compact digicam. It would lose money as a color camera anyway!

The same argument does not hold true for the $499 plus camera segments of course. Most of those make money (or have to). I don't believe manufacturers are losing money on bodies to make it up on lenses. In fact, it is probably the opposite, with the overwhelming majority of lenses sold being kit lenses, and those go for about $50 to $100 each in kits.

Still, being mostly system cameras, the $499-plus segments create a very strong incentive to add a B&W camera body. Wouldn't you like to be the ONLY system in town with a B&W sensor option? Just the fact that it's there would attract color photographers to the system, as it sends the signal that other systems are somehow incomplete.

After perusing his B+W portfolio, a quick nit-pick for Ctein: the Weisman (single 'n') is in Minneapolis.

Terrific discussion, guys. I'd be all about a bw specific camera body. Are any of the 'big boys' listening?

FGJB,
No, unless you want to buy me one. It's a scientific back, for one thing, and requires some significant workarounds to be used for pictorial photography. And the cost impediment is absolute for most people.

Tim,
Not really interested. Would you pay three times the value of the camera for a jury-rigged solution of questionable utility that voids the warranty? That's really not what I'm talking about here at all.

Mike

The biggest problem with the Bayer array is its handling of sharp luminosity edges. Where the RGGB parts of a photosite end up on different sides of the edge, you can't help but get weird colors. Different algorithms have been devised to deal with that, but the fact remains that it's only guessing. In real-life photography it only becomes an issue in some cases of architecture or product photography, or in scenes like brightly lit flowers on a dark background, and even then it largely falls into the pixel-peeper domain. So a pure B&W sensor will only benefit scientific or industrial applications where pixel-level exactness is prized, not regular photographers.
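To make the edge problem concrete, here's a toy sketch of my own (purely illustrative; real demosaicing algorithms are far more sophisticated than this crude per-quad binning): a single RGGB quad straddling a sharp, color-neutral edge comes out strongly colored.

```python
# Toy sketch: one RGGB quad straddling a sharp, color-neutral edge.
# The left column sees the dark side (0.0), the right column the bright side (1.0).
quad = [[0.0, 1.0],   # R  G
        [0.0, 1.0]]   # G  B

R = quad[0][0]                        # red site: dark side only
G = (quad[0][1] + quad[1][0]) / 2     # the two green sites straddle the edge
B = quad[1][1]                        # blue site: bright side only

# A neutral gray edge should give R == G == B; instead this "pixel"
# comes out strongly colored -- a false-color fringe.
print(R, G, B)   # 0.0 0.5 1.0
```

Any interpolation scheme faces the same underlying ambiguity: it can only guess whether disagreement between neighboring sites is color or an edge.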

On the other hand, by capturing only luminosity you discard all the spectral information, which makes it impossible to do the kinds of statistical analysis heavily used in post-processing algorithms. This is bound to have a considerable negative effect on the final output. Yes, it enforces discipline on the photographer, but it makes it impossible to capture a large number of scenes.

This brings up an interesting point: in the age of post-10-megapixel sensors, it might actually be better to increase the number of color components captured, not limit it to just RGB; maybe even add dedicated UV and IR channels. Even if it does not improve the on-screen look of the pictures, it will definitely improve printing and add volumes of information to be digested in post-processing.

Hm. Here's some thinking aloud...

Would it be easily (for a relative value of "easily") possible to create a higher bit-depth on the B&W sensor? So instead of 3x16/3x8 colour bits we have a 32-bit monochromatic sensor, for instance? Or even 24-bits?

Since B&W film and prints are continuous-tone, such a higher bit depth would come closer to reproducing the response curve, no?
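For what it's worth, the raw arithmetic is easy to check. A minimal sketch (the bit depths are chosen arbitrarily for illustration; in practice sensor noise, not bit depth, limits useful tonal resolution well below 32 bits):

```python
# Distinct tonal levels per bit depth for a linear encoding.
# (Sensor noise limits useful precision long before 32 bits; the deep
# end of this table is theoretical headroom only.)
for bits in (8, 12, 14, 16, 24, 32):
    levels = 2 ** bits
    print(f"{bits:2d}-bit: {levels:,} distinct levels")
```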

Hi Mike, Kodak (of all companies...) seems to have played with the idea and even produced a prototype or two back in the early 2000s. This article over at LL is about writer Peter Myers' experience with a Kodak 760m, a fully monochrome DSLR (it seems only one or two were produced...):
http://www.luminous-landscape.com/reviews/cameras/kodak-760m.shtml

A question for anyone, though Ctein seems the most apt to answer it:

Would there be any advantage to shooting raw with a black-and-white-only sensor? Or would an uncompressed format, like a TIFF, offer the same advantages in post? It seems to me that the benefit of using raw formats is in deciding how the Bayer filter information is interpreted, and that without said filter, a TIFF would be just as good.

To address the marketing (and stocking and distribution) of a no-filter version of a widely available camera some more, I imagine the first to market would get as much free marketing as they could wish for from sites such as TOP, directed at precisely the target market.

Stocking and distribution can be major costs for new cameras, although perhaps not on a par with marketing costs, but an NF version can plausibly be special or mail-order only, assuming that the body the camera is based on is widely available in its own right, and the differences between the base camera firmware and the NF firmware are not deal-killers.

As Mark Roberts says, perhaps Sony or Pentax are more natural fits for this kind of thing than Canon or Nikon. Olympus might be even more natural, to differentiate one of their PEN offerings from Panasonic as a street camera.

For printing, the Epson Ultrachrome HDR set (for the 4900 and probably other printers) has four black inks (though that might be only three black levels). The Ultrachrome K3 set is the same. In practice I've been happy with the K3 set (I haven't tried the HDR set).

If that isn't good enough, there's always Piezography.

Beyond Dmax and Dmin issues, there can be driver issues and other problems, and learning curve issues, of course. But my impression is that the state of B&W inkjet printing is really pretty good, if you approach it seriously.

A B&W sensor might well have considerably less effective dynamic range than its Bayer equivalent.

In some scenes (ones which contain sandstone buildings in sunlight, for example), you can completely and irrevocably blow the red channel, but recover detail from the green and blue to give a passable B&W image. No such luck with an arrayless sensor.
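The recovery trick can be put in toy numbers (the pixel values and the 70/30 mixing weights below are entirely made up for illustration):

```python
# Made-up values for a sunlit sandstone pixel: red has clipped at the
# sensor's full-scale value of 1.0, but green and blue survive.
r_clipped, g, b = 1.0, 0.80, 0.55

# On a Bayer sensor a usable B&W tone can still be synthesized from the
# surviving channels; the 70/30 weighting here is an arbitrary choice.
mono_from_gb = 0.7 * g + 0.3 * b
print(round(mono_from_gb, 3))   # 0.725

# A filterless sensor records one value per photosite; if that value
# clips, there is no second channel to fall back on.
```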

Instead of the Bayer filter, a grid using some ND filters (or more interestingly, some dichroic ones - though development expense might preclude this) would enable those gently tapering highlights to be engineered in camera. You could even have settings for different virtual film looks for the non-pros amongst us.

I'm not very clear on what the difference is between "expose to the right" and "expose for highlights". I learned about this expose to the right business from this article here:
http://www.luminous-landscape.com/tutorials/expose-right.shtml

Where it says:
"The simple lesson to be learned from this is to bias your exposures so that the histogram is snugged up to the right, but not to the point that the highlights are blown. This can usually be seen by the flashing alert on most camera review screens. Just back off so that the flashing stops."

So exposing to the right doesn't mean blowing highlights. Or am I missing something?

If this is going to happen, I see two most-likely implementations (relatively speaking, in the context of an improbable product) that would be pretty much at opposite ends of the quality spectrum, and a third that is as unlikely as it is attainable and cost-efficient.

One of the more likely ways, as others have mentioned, is a GXR module, but I propose that the most sensible path for Ricoh is a BW M-module.

M-mount users may be a small fraction of the equipment market, but, on the other hand, they likely constitute a larger proportion of the potential BW market, as well as a larger proportion of those who could afford a BW M module. (That's BMW spelled sideways, isn't it?)

Further, the M is a proven and robust lens ecosystem, including many lenses probably capable of delivering the resolution required to show off the BW sensor advantage.

Of course, this would only be considered if the color M module is a solid success, with the first run of 9,000 looking to nearly sell out. But in that event, I don't think it would be much trouble for Pentax/Ricoh to implement a limited run of BW versions at the same price.

A more likely scenario, however, is Lomography taking a CCTV sensor and releasing an overpriced, plastic toy "digital Holga", complete with accessory filters and a few pre-programmed FDP simulations. This won't do much to prove digital BW as an artistic medium.

The most unlikely, and at the same time most easily and efficiently attainable, implementation for high-quality digital BW capture would be development of BW-only firmware for any Foveon-based camera. Those sensors already lack the Bayer array, and were developed from the outset to be as traditionally panchromatic as possible. Poor ergonomics aside, those cameras would work well for the application being discussed.

"In some scenes (ones which contain sandstone buildings in sunlight, for example), you can completely and irrevocably blow the red channel, but recover detail from the green and blue to give a passable B&W image. No such luck with an arrayless sensor."

Nigel,
I don't think that's true, because a single photosite in a B&W-only sensor could have the same area (and equal sensitivity) as both of the green photosites in a Bayer array. Or even more.

Mike

Thanks Mike and Ctein for this provocative and informative discussion.

However, when Mike said...

"I wish I could provide visual illustrations here, but as always, it seems like it would be very unfair of me to find pictures by strangers on the web and hold them up here as examples of what not to do".

...it got me to wondering, Mike, whether you have invested as much time using one digital camera (in B&W mode) as you must have spent years ago learning to optimize the use (including printing) of a particular B&W film, since you don't seem to have digital B&W images of your own to post on your blog.

Perhaps this has the makings of a new TOP challenge:
- One Lens
- One DSLR set to B&W Mode Exclusively
- One Year

;~))

Cheers! Jay

Thanks guys, good discussion.

best wishes phil

I bought a great book, 'Kodachrome and How to Use It' by Ivan Dmitri (really Levon West, but who's counting?), and found much of the advice spot-on for exposing digital. Sure, I can shoot my D7000 and salvage amazing things out of horrible captures, but nailing it right gives so much more flexibility to what I do with it; it's magic. And besides, nothing like consulting a 70-year-old tome on how to operate your 21st-century contraption :)

A one-stop improvement in noise, but at least two stops in tolerable noise, plus a potential 16 bits of greyscale... plus less need to sharpen (and increase noise again), etc. The image processor could be set to handle noise specifically in ways that deliver specific "looks" to the image (grain in B&W is often part of the charm; not so in colour).

The gain in usable DR (as opposed to mathematical DR) could be substantial. 15 stops of potential detail would allow some heavy curves work without banding and would exceed anything available on film, plus potentially make the camera usable at very high ISO. Imagine a B&W version of the D3s.
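The banding point is easy to sketch numerically. A rough illustration (the gamma-0.3 curve and the 5% shadow threshold are arbitrary choices of mine, not a model of any real raw converter): quantize a tonal ramp at two bit depths, apply a strong shadow lift, and count the distinct levels left in the deep shadows.

```python
import numpy as np

# Quantize a smooth tonal ramp at a given bit depth, apply an aggressive
# shadow lift, and count the distinct output levels remaining in the
# darkest 5% of the scene.
ramp = np.linspace(0.0, 1.0, 100_000)

def shadow_levels(bits):
    scale = 2 ** bits - 1
    quantized = np.round(ramp * scale) / scale   # linear capture at this depth
    lifted = quantized ** 0.3                    # heavy curves work in post
    return len(np.unique(lifted[ramp < 0.05]))   # surviving shadow steps

print(shadow_levels(8), shadow_levels(14))   # far fewer distinct steps at 8-bit
```

The deeper capture leaves many more distinct shadow steps to spread out, which is exactly the headroom that lets heavy curves avoid visible banding.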

Plus, the lack of a colour filter means you get a lot more detail in single-coloured objects (green foliage, red petals, blue petals, etc.).

Gimme gimme. I still think the GXR with an optional module would be the obvious answer; just like a film back. It's already a rather cool camera, IMO. I think I may invest in one and hunt around for some suitable glass.

It sounds to me like a camera with a dedicated B & W sensor is a project for someone like John DeLorean, Richard Branson or Elizabeth Meyer. Someone personally well funded who wants to do it because they can and because they feel it needs to or should be done.

...this gets back to the whole idea of "...we really don't want a black & white sensor as much as we want a black & white film emulator..." as brought up before... it needs to emulate the less-than-perfect panchromatic effects of film, as well as the curve we were used to...

FWIW, I went back to using glass filters on practically everything because I can't even get color digital to give me the 'look' of the transparency film that I used to filter for the color I was looking for... my Nikon just gets redder if you try to 'warm it up,' which is 'technically' correct for correcting for a higher Kelvin temperature, and with the Canons I used to use, I could pick red or yellow biasing, but the 81-series filters are 'red+yellow,' so 'no joy'... yeah, I know I can 'correct' it in Photoshop, but I want the closest thing to what I was getting on film before I have to do any post-processing...

Film emulation seems to be the way to go, and it has to be an easy thing to write for camera software, don't you think? The new Fuji X10 is supposed to have an emulator built in that lets you select between Astia, Provia, Velvia, and B&W Fuji, but no one knows much about that yet: what it really does, looks like, or how it works...

Canon already has B&W sensors. They've had them since 2007, when they created a 50MP sensor, and they upped that to 120MP in 2010:

    http://www.dpreview.com/news/1008/10082410canon120mpsensor.asp

All the talk about the technical problems of creating one is pointless. It's been done.

RE: Exposing to the right... I'm reminded of an argument I had on the Medium Format Digest years ago when I talked about split filter printing and recommended setting the starting exposure for the highlight detail. Another member immediately came back with the old "expose for the shadows and develop for the highlights" saw which is true for negatives. The problem is that I was making prints. A more accurate way to view that advice is to expose for the area of least density and process for the area of greatest density. For prints, slide film or digital exposure that would require exposing for highlight detail and processing for the shadows.

BTW, last year I wrote a blog post about why exposing to the right appears to have less noise:

    http://jims-ramblings.blogspot.com/2010/02/right-idea-wrong-reason.html

I tend to underexpose enough to ensure that the highlights aren't blocked. Like Ctein, I don't like blocked highlights. With the exception of specular highlights, blank areas of white look unnatural. Our eyes don't see the world like that. OTOH, we tend to tolerate not seeing into deep shadows because that is a natural occurrence. For a natural appearance, if you have to lose detail in one or the other, lose it in the shadows.

One 'advantage' of a B&W sensor I didn't see you mention: you would have no temptation to examine the color file. You would be forced to see in B&W, since that's all the camera will hand you. If nothing else, a B&W-only digital, with its instant feedback, might be a good training tool toward that end.

Dear FGJB,

Not even close, unless you're planning on buying two of them as gifts for Mike and me in thanks for the lovely essay we just wrote. In which case, may I express extreme gratitude for both of us!

Totally violates the concept of affordability. Makes one long for a Leica version at a bargain price of $20,000!

Medium format backs are optimized very differently from full frame and smaller cameras. They favor lower noise and longer exposure range, but they're weak on low light performance, relative to “normal” cameras that mere mortals buy. So, on the absolute level, it's hard to learn anything from this product that would be relevant for us. But…

Peter Fauland did us the favor of comparing Bayer array and monochrome array Phase One backs:

http://blog.fauland-photography.com/2010/10/09/phaseone-achromatic-vs-p45-conversion/

That's an apples-to-apples comparison… even if what we're interested in is oranges. Still, his results seem broadly consistent with what Mike and I hypothesized. A visible, incremental improvement, but not a revolution.

~~~~~~

Dear latent,

Peculiarly, I have found black-and-white digital printing to be a piece of cake. Much, much easier than digital color printing. It just seems to work well, and gorgeously, right out of the box for me. I don't know if this is good karma, good choice of printers, instinctively good technique, or that I'm completely insensitive to aspects of black and white prints that drive other people crazy.

Could be the latter. In discussions of this with people like Carl, Mike, and Oren in private, it's very clear that each of us is preternaturally sensitive to some visual characteristics of the medium that go right past others of us. We each have our own idiosyncrasies. Maybe mine just mesh well with black and white digital.

~~~~~~

Dear Scotth,

No disrespect intended, but yeah, pretty much missed everything we talked about. Please go back and reread.

~~~~~~

Dear DB,

Thanks for the spelling correction. And, I always get confused by what's in which city when one crosses the river.

But does it really matter? Minneapolis and St. Paul, they're pretty much the same thing.

[g,d, & r]


pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

Dear James,

Yes, next week's column will explain how this expose-for-the-highlights business works, with illustrative photos of curves and results. It's not a tutorial, per se, but I hope it's a clear enough technical example that you'll be able to figure it out.

~~~~~~

Dear John,

First off, do understand that rules of thumb are supposed to be guides that work more often than not. They're not dictates. For every rule of thumb you can come up with, there will be cases where you don't want to follow it. A rule of thumb should merely be usually right. “Expose to the right” isn't.

While I happily acknowledge that there are plenty of cases where you don't want to do it, I didn't understand your example. If you're pushing to get more out of the shadows but then you're increasing exposure to provide better gradation in the shadows, you're back where you started. Like setting the ISO on the camera two stops higher but then dialing in +2 stops of exposure compensation. The quality of shadow detail you get depends solely on the number of photons you collect. If you're doing self-canceling compensations, you're pretty much back at square one.

Well, there may be a small advantage because the signal processing chain is a little different, but averaged over all cameras, all photographers, it's not going to be profound. Your personal mileage, of course, may differ and probably will.
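A back-of-the-envelope way to see the self-canceling point, under an idealized shot-noise-only model (my simplification, not Ctein's numbers): shadow SNR goes as the square root of the photons collected, and only actual exposure changes the photon count.

```python
import math

# Idealized shot-noise-only model: SNR of a patch is sqrt(photons).
# ISO is gain applied after capture; only shutter, aperture, and scene
# light change how many photons are collected.
def snr(photons):
    return math.sqrt(photons)

base = 10_000                      # photons in a shadow patch at the baseline exposure
underexposed = base / 4            # close down two stops: 1/4 the photons
recompensated = underexposed * 4   # open back up two stops: right back where you started

print(snr(base), snr(underexposed), snr(recompensated))   # 100.0 50.0 100.0
```

The two adjustments cancel exactly in this model, which is the "square one" Ctein describes; any real-world difference comes from the signal-processing chain, not the photons.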

If you want to pursue this on a technical level, you should e-mail me and we'll discuss it off-line. Mike would rather the comments section not devolve into technical debates.

~~~~~~

Dear Andre,

I think we may be getting a language barrier here (assuming English is your second language); I'm not sure I understood what you meant by:

“I've always thought that ETTR was about getting more details as there are more of them in the right part of the histogram than in the shadows.”

If what you're getting at is the argument that more exposure means you're getting more image data because you're collecting more photons, that's technically true. It's also irrelevant. First, data is not the same as useful information, and second, inadequate amounts of data are not a problem with most current cameras under most situations. Whereas blown highlights are still a problem with digital photography, just as they are with slide films.

If I've misunderstood you, my apologies in advance.


pax \ Ctein

One word - X100.

Talk to Fuji.

Kodak's 760m SLR was, as far as I know, the last of a significant line of monochrome cameras, but it was abandoned, as was so much else of Kodak's technology in its death throes.

I tried it for a bit, but it was too much money for me at the time, and I've been waiting ever since for a current iteration from some other manufacturer, preferably with a lens mount compatible with my equipment, as is my IR-converted Canon. I would jump on a monochrome camera in an instant, as long as the price was not more than 50% above the base model.

A short article about the 760m is at:
http://www.luminous-landscape.com/reviews/cameras/kodak-760m.shtml

As a Leica M8.2 user, I'm amused by the sentiment that use of external filters would pose no concern; in fact, would even be desirable. I've never had any issue with using an external UV/IR filter on my camera (regardless of whether the initial design was so intended); but certainly the photo critics jumped all over the issue as if using a filter were some calamity. I did it with film all the time; no big deal.

On the contrary, the M8.2 does a better job of IR filtering (better blacks) than the M9, and the use of an internal rather than external filter on the latter camera results in somewhat less crisp files out of the box. I'll take my camera for black and white, given my print needs and preferences, any day.

After several decades producing silver prints, I've been more than happy with my digital black and white work. Most incremental improvements, however, have resulted from tweaks down the processing chain (Lightroom improvements, better papers and profiles, better printer and inks, etc; and, more importantly, more learning and better execution).

But if Leica does ever produce a monochromatic M (even as an a la carte option), I'll be right there to give it a try.

Jeff

You have to expose for the highlights.

[...]

This whole "expose as far right as possible" business is cr*p [redacted so that I can submit my comment --G]. Noise isn't a dominant problem in digital photography any more; blown highlights still are.

Ctein, would you elaborate on this, please? As far as I understand exposing to the right, it means exposing as far right as possible without blowing out the highlights (or, in RAW, without blowing them out far enough to preclude recovery in post-processing).

So exposing to the right (of the histogram) and exposing for the highlights are the same thing. Or am I misunderstanding you?

I've noticed that I get larger files (for any given scene) when I expose to the right. Does this not mean that I'm getting more information in the file, and that the technique is therefore useful?

Goodness.

Which monochrome are we talking about here? Orthochromatic, panchromatic, extended-red-sensitivity panchromatic, what? Are we talking Tri-X developed in Microdol-X and printed on Ilford Multigrade, or are we talking Adox KB14 in Rodinal 1:100 printed on Agfa Brovira 3?

Back in the day, the combinations of B&W were complex, and more than a tad the result of experimentation and of finding one's own personal style (me: Adox, Rodinal, Brovira). How can one replicate this in digital?

Simple: you can't. Digital monochrome is going to be its own beast and it were best so. I kinda like the B&W native JPGs from my E30 and EP1.

That said: I'm not sure that we can really get a panchromatic B&W sensor without using the Bayer filters, and from what I remember of silicon, without the filters we end up with a predominantly blue-sensitive orthochromatic sensor. I fear that filtering will continue to be a necessary evil until, say, Foveon were to bring out an appropriate sensor.

Oh be still, my beating heart. That would be a lovely thing indeed. Especially, dare I say it, in a 4/3 format? :-)

I have limited experience as a hobbyist but I hold it to be a very important principle that exposing for highlights and applying a curve is essential, for black & white especially.

I wish someone would make a tool to export Photoshop curves into the camera, and a camera that can save and select them for applying to the live view.

Interesting post; I particularly like Ctein's no-nonsense, call-it-as-I-see-it candidness. Of course, if I disagreed I might feel differently.

Theoretical discussions aside, this is a niche market, and I am not sure it would be worth it for any of the major players to redirect resources to something like this without a significant price premium (which would then affect demand). There could also be aftermarket conversions that achieve the same thing, though I doubt the cost would be any better for the end user, and the implementation might be somewhat compromised in an attempt to increase the camera's appeal. I'm not sure of the technical issues of removing a Bayer filter from a sensor aftermarket; I expect it would not be quite as easy as removing the AA filter. Maybe the sensors could be replaced: you would just have to get the manufacturers to sell a batch of chips at an incomplete stage, i.e., before the Bayer filter is applied. Technically feasible but unlikely. It all comes down to the shekels.

I'm alarmed at the idea of dropping the anti-aliasing filter. I don't see how that's a good idea at all -- it should just turn those highly detailed areas you're trying to get more information about into an ugly hash of aliased crap.

Do people seriously propose that this would be beneficial? What am I missing?

Re the Tim F featured comment, wouldn't the low-light capability of a B&W Pen be the same as that of a FF colour camera? FF has 4x the sensor area, too. That's not 'mind-boggling,' it's normal.

Mike,
As someone who started out in the digital era, only recently progressing to B&W film developing and printing (one of the few new converts, partly stimulated by your Leica Year articles!), I'm very interested in your statement:

"For one, really good digital B&W is rare; most digital B&W looks anywhere from tolerable to horrible. Then again, I was pretty picky with film B&W too—the majority of it wasn't good".

I have far too little knowledge of the aesthetics of B&W, either digital or film. Can you refer me to any of your other writings which might discuss this, or articles/books by other authors you recommend? Ideally it would be fantastic if you could write an article explaining some of the dos and don'ts, with good vs. bad B&W examples you can find (or have made yourself)!

I already bought 'The Negative' and 'The Print' by Ansel Adams as technical bibles, but as always, clear discussion of aesthetics is much harder to come by for anything photography-related. Pity this isn't a 'which lens should I buy' question, or there would be no end of reading material!

Thanks very much,
Nick

OT, kinda, but Ctein did bring it up. I have seen in DxOMark graphs that current sensors have 12-13 stops of dynamic range, but I don't understand how to access it. It seems like the raw converters don't use it.

Would also like to see the curves Ctein uses to give an image a heel and toe. I can understand visually when words confuse me.

"This whole 'expose as far right as possible' business is crap"

I thought ETTR was to expose as far to the right as possible *without* blowing highlights. Anyway this is how I always understood and used it (and even played with uniWB when my camera had relatively poor DR).

"Note that simulating on-lens filters in Photoshop is much cruder than using a real filter on the camera."

I would love to know more about this. Having never shot film or used a colored filter, I've gone along with the prevailing wisdom that there's no use for colored filters anymore. Is the difference minuscule? Do you see any reason someone would benefit in any substantial way from using colored filters rather than post-processing?

I suppose the reason real filters are different is that they aren't really working on a color; they're working on various wavelength profiles?

As far as ETTR goes, some early misinformation on some popular websites has probably hurt people's understanding of the point of it pretty seriously. It's not really about "more levels" or "more information" or "more detail". It's simply raising the signal (light) in your image higher than it typically would be, in order to minimize noise (gives a better signal to noise ratio).

Personally, I find it works very well but only on limited dynamic range scenes when you have a lot of room to avoid blowing out highlights (which can be tricky since your histogram and camera screen don't exactly show RAW data). So I wind up not using it very often.
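
The signal-to-noise argument above can be sketched with a toy shot-noise model. The photon counts are purely illustrative, and read noise is ignored, so this understates real-world complications:

```python
import math

def shot_noise_snr(photons):
    """Photon arrivals are Poisson-distributed: noise = sqrt(signal),
    so SNR = signal / sqrt(signal) = sqrt(signal)."""
    return photons / math.sqrt(photons)

# A tone that collects 1,000 photons vs. the same tone given one stop
# more exposure (2,000 photons):
base = shot_noise_snr(1000)
ettr = shot_noise_snr(2000)
print(f"SNR gain from +1 stop: {ettr / base:.2f}x")  # ~1.41x, i.e. sqrt(2)
```

Each added stop buys a sqrt(2) improvement in shot-noise SNR, which is exactly the modest, real but non-dramatic gain described above.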

Ctein -

I thought "expose to the right" meant you should expose as far to the right as you can without getting blown highlights. How can you argue with that? Expose to the right doesn't mean that you should blow out the highlights.

Also, there are some photographers who feel that there is a certain percentage of blown highlights that is acceptable if it means preserving detail in other areas.

Ed

First, I thought that all sensors do (at least with raw images, which is all I ever use) is record tonal values of the light falling on them, and that camera and external software interpret that data to create the color.

I also thought the purpose of Mike's original article was to comment on manufacturers being more interested in marketing products than in creating interesting or truly better tools, i.e., a philosophical question, not just a product one.

Mike: "First of all, it's not a technical issue. People who are technical and practical have a hard time grasping that. It's a procedural, visual, psychological, mental issue. It relates not to what happens in your camera, but to what happens in your head."

As for teaching student photographers to think in b&w, have them shoot their assignments thinking of how they will look in b&w, and then process them in Lightroom with the default set to b&w. And don't accept any assignments turned in that are in color. Won't take long before they are thinking in b&w, and maybe even lovin' it.

Most of my photographic career (48 years) was spent shooting in b&w because I could accurately control the tonal values in my (b&w) darkroom, whereas the color photos I made were at the mercy of the photo lab. I now shoot digital almost exclusively in color because I can control the color to a degree previously unimaginable in my (Photoshop/ACR) darkroom. And I love color.

These days, no one can buy a good toaster. Thankfully, I have an old Sunbeam Toastmaster that still works like new.

Cheers,

I have the Epson R-D1. It does really nice B&W images with the emphasis on midtones. The B&W histogram is very different from the RAW histogram of the same shot. The raw will sometimes have blown highlights or black shadows, but the black-and-white JPEG does not. In black-and-white mode it exposes for a low-contrast JPEG image. Strange little camera, but fun to use.

http://www.phaseone.com/en/testimonials/acromatic-plus.aspx
Phase One has a link on this site to let people test their cameras. They should be happy to let either of you play with an Achromatic+ back.

How about a one-shot HDR sensor? Rather than RGB, the elements of the Bayer array would have 0-, 4-, and 8-stop neutral density filters (say). With the appropriate software to go with it, we'd get almost double the dynamic range, and that wouldn't be a bad thing....
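
The merging step for such an array could look something like this sketch. The linear-sensor model, filter values, and site grouping are all illustrative, not a real sensor design:

```python
# Each site group carries 0-, 4-, and 8-stop ND filters; every site
# clips at the same full-well level. Keep the least-attenuated unclipped
# reading and scale it back up by its filter factor.

FULL_WELL = 1.0          # normalized clipping point
ND_STOPS = [0, 4, 8]     # per-site attenuation, in stops

def merge_site_group(scene_luminance):
    """Best estimate of scene luminance from the three filtered sites."""
    for stops in ND_STOPS:
        reading = min(scene_luminance / (2 ** stops), FULL_WELL)
        if reading < FULL_WELL:               # unclipped: use it
            return reading * (2 ** stops)
    return FULL_WELL * (2 ** ND_STOPS[-1])    # everything clipped

print(merge_site_group(0.5))    # 0.5  (the unfiltered site suffices)
print(merge_site_group(40.0))   # 40.0 (recovered from the 8-stop site)
```

The 8-stop sites extend usable highlight range by eight stops over a uniform sensor, at the cost of spatial resolution and of noisier readings wherever the heavily filtered site has to be used.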

Well, after reading all this I've come to a firm decision. I'm going to keep using Tri-X in my Leica, Rollei, and Sinar as I have for the last 35 years, and if I need a digital form of an image, scan it. Just discussing the digital alternative hurts my brain, so God only knows what using it would be like.

Considering I shoot a fair amount of B&W film and convert most all my digital work to B&W it was natural this entry caught my eye.

With zero science to back my statement, I would suggest that a smaller sensor like 4/3 would maybe make a better B&W camera than full frame, and possibly APS. I've owned several M4/3 cameras, and yes, there is more inherent shadow noise overall than with larger-sensor cameras. Personally I think this would be a good thing, because the lack of grit and bite in most digital conversions (along with harsh, quick highlight gradients) is a crutch, not an asset, IMO.

Even the best slow 35mm B&W films show grain when magnified to 100% on a computer screen yet make superb prints. The smooth plastic look doesn't work as well for my eyes.

Having seen some of the software hacks for Canons, etc., it brings to mind the question: what would be required for someone to hack a T3i or other camera into a B&W unit? Would it even be possible?
I, alas, am not the one to come up with that "work-around," but I would be interested in learning about the process, etc., and possibly attempting it with instructions.

Thank you Ctein. I have been 'exposing to the left' for years, even though it seems everybody who shoots digital says to do the opposite. While it never had an effect on the way I shoot, it does feel good that I am not the only one!

I suspect that a small-sensor camera such as the Ricoh GR or Pentax Q, would be the most likely candidate for such a thing. The cost premium for a specialized sensor would be much lower and the camera unit price - while higher than the equivalent colour version - would still be low enough for enthusiasts to pick up as a second camera. A small sensor would also benefit more from our tolerance for BW noise than a larger one.

Just a quick question on filtering in front of the lens (like you would with black and white film).
If you are using a primary-colour filter like a 25 red with a Bayer (RGGB) sensor, you can only expose 1/4 of the pixels to light! Wouldn't this reduce your resolution by a factor of 4?
Jeremy
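
Jeremy's 1/4 figure can be checked against the mosaic directly. This toy count assumes a deep red filter (a Wratten 25, say) passes essentially nothing to the green and blue sites, which overstates the real case somewhat, since filter passbands and sensor dyes overlap:

```python
def red_site_fraction(width, height):
    """Count 'R' sites in a 2x2-tiled RGGB mosaic."""
    pattern = [["R", "G"], ["G", "B"]]
    red = sum(1 for y in range(height) for x in range(width)
              if pattern[y % 2][x % 2] == "R")
    return red / (width * height)

print(red_site_fraction(40, 30))  # 0.25: one photosite in four is red
```

In practice the green and blue dyes leak some red light, and demosaicking interpolates across sites anyway, so the resolution loss is real but not a clean factor of 4.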

Mike, TOP might be a good venue to petition for the digital black & white camera. I, for one, would be interested in a sub-$1,200 (fingers crossed) monochrome body. Astute companies must be using blogs like this one as market research. Don't you think your concept of the DMD had some influence on what is now the mirrorless category? I do. Also, if someone out there is way more industrious than I am and has good contacts in manufacturing, this would make for a very interesting Kickstarter project.

Dear Erlik,

Unfortunately it's not that easy. Bit depth is controlled primarily by the A-D conversion hardware. High precision, high accuracy converters are very expensive. Currently the limit is around 16 bits of real accuracy, and those boxes aren't cheap.

But the good news is that you don't need a lot of bits. Regardless of what films and papers may do, the human eye can only see about 700 gray steps, total. In a reflection print, it's about half that many. More gray levels than that are simply invisible. That doesn't mean you can get away with only 8 bits of grayscale data in a print, because the gray levels as seen by our eyes aren't close to uniformly distributed. But 12 bits is plenty and 16 bits is major overkill.
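
Ctein's arithmetic here can be laid out in a few lines. The 700-step figure is his, from the paragraph above; the script just compares it with the code values each bit depth provides:

```python
VISIBLE_GRAY_STEPS = 700  # Ctein's figure for the eye; ~350 for a print

for bits in (8, 12, 16):
    levels = 2 ** bits
    print(f"{bits:2d} bits -> {levels:5d} levels, "
          f"{levels / VISIBLE_GRAY_STEPS:5.1f}x the visible steps")
```

8 bits (256 levels) fall short of 700 even before accounting for the eye's non-uniform response; 12 bits (4,096 levels) cover it several times over; 16 bits (65,536) are, as he says, major overkill.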

~~~~~~

Dear Will,

RAW conversion is useful for a lot more than that. It's where you get to define the precise characteristic curve for the image.

A 16-bit TIF file contains all that same information, but it doesn't present you with any advantages over RAW. Might as well stick with RAW.

~~~~~~

Dear Kostas and Gerry,

My column next week will go into this in more detail, but the short version is that a rule of thumb is supposed to be something that is practical, not theoretically optimal. In practice, exposing to the right and not unintentionally blowing out highlights is much, much more difficult than that article lets on. It gets you modest gains today in exchange for major risks. It's very bad practical advice.

~~~~~~

Dear John O.,

We're talking panchromatic. As for the exact shape of the spectral distribution, does it really matter? Unless you're talking about a seriously wonky one, like Tech Pan had, it doesn't much matter. Approximately flat across the visible spectrum (emphasis on the word approximately) is where you begin.

Dealing with that with monochrome sensors is very well known and standard practice. You've misremembered silicon's sensitivity; it's much more red-sensitive than blue-sensitive (which is why photographs made under incandescent light have such huge problems with noise in the blue channel). But your point is still correct; unfiltered silicon has a very odd spectral response. So, your IR-cutoff filter (which you want, unless you do want an all-spectrum camera, and most photographers don't) is tinted blue-green to compensate. This is still a more efficient use of photons than a Bayer array.

pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

"This whole "expose as far right as possible" business is crap. Noise isn't a dominant problem in digital photography any more; blown highlights still are."

It took me years to finally go against the infamous LL expose-to-the-right article. I wish I had learned this earlier. Thanks for this comment, hopefully will help others realise this earlier...

I understand that ETTR is a very debatable topic, but Michael Reichmann never said "expose to the right as far as possible." ETTR means exposing to the right WITHOUT blowing highlights.

I know Ctein never claimed that this was what Reichmann wrote, but it seems a lot of readers think he did and still misunderstand what ETTR actually means.

B&W cameras did exist, and they were beautiful. In the '90s, most of the digital SLRs that Kodak made were black-and-white (such as the DCS-460m). I worked with a few (they were 6 years old at that point) and they still took very competitive photographs alongside much newer color cameras.
The photographer that I worked for, Stephen Johnson (http://www.sjphoto.com/), took some great portraits with these cameras, as well as great landscape work, although he mostly worked with a digital scanning back. He also had a DCS-420IR, which had no IR-blocking filter on it. That was fun, and got me into converting my own years later.
I love color photography (when done with restraint), but I would buy a digital B&W camera, if just to learn (I grew up digital).

Ctein: "I have to say that for me all photography is about workarounds."

Thank you Ctein for that moment of clarity.

There has been lots of discussion regarding the look of a B&W digital camera image, but there are some sensor (detector) issues that need further exploration. I deal with them, and have for over 30 years in optical applications and over 40 years with radiation, and my first thoughts go to spectral sensitivity. Semiconductor detectors (silicon in this case) are very sensitive to wavelength (color). CCD detectors have a peak sensitivity around 900 nm, in the infrared. CMOS peaks at shorter wavelengths, just above the peak sensitivity of the eye, around 650-700 nm. (The eye's sensitivity, BTW, is almost exactly matched to the spectral energy output of the sun, for pretty obvious evolutionary reasons!) A CMOS sensor would have considerably more sensitivity at visible wavelengths (about 2x, it seems), but it is highly variable at that point.
The issue is that a B&W sensor simply "counts photons" and does not care about the wavelength. But the spectral sensitivity of the detector affects the intensity measured at each wavelength, so the same amount of light at 500 nm will register a lower intensity than light at the peak of a CMOS sensor's response at, say, 700 nm. This will materially affect the way the sensor responds to and records colors.
Early film was very blue-sensitive (blue photons have more energy than red ones), and the development of panchromatic film was simply a matter of adding more sensitivity at longer (redder) wavelengths.
These silicon sensors also have considerable sensitivity in the infrared (IR), out beyond 1000 nm (1 micron). We show people how to use the digital camera in a cell phone to "see" invisible IR light in fiber optic systems (our biz), since cell phone cameras appear not to have IR filters like most digital cameras. For a couple of hundred bucks, you can have the IR filter removed and use your DSLR for IR photography. If you have an old DSLR, try it.
The bottom line is that the assumption that you can use a raw sensor is probably flawed: you will not like the images you get. And because you have no wavelength information (I guess I should say color), you have no way to correct it in software. At the least you will need an IR and maybe a UV filter to get the sensitivity close to panchromatic film. You may be much better off with the Bayer filter you have now.

I did some research and you might find these interesting:

Sensor sensitivity: http://www.fen-net.de/walter.preiss/e/slomoinf.html

Kodak on camera sensors: http://www.kodak.com/ek/uploadedFiles/ColorCorrectionforImageSensors.pdf

Film sensitivity:
http://photo.net/learn/optics/edscott/pss00010.htm

Mike and Ctein: great topic, post and comments, per usual.

Now I think we need a poll: exactly what type of digital B&W would people go for?
- ultra compact (Canon S90/S95 type)?
- enthusiast compact (Olympus XZ-1, Canon G12)?
- mirrorless system (Micro 4/3, NEX, Nikon 1)?
- entry-level DSLR (D3000, K-3, 600D/T3i)?
- enthusiast DSLR (D7000, K-5, 60D)?

O.K., let's stop there! Assuming the same price for a B&W sensor as for color, which would you go for? A B&W K-5 would tempt me even more than the color version. That is my top wish for a B&W camera.

Around three years ago I started to look at how I might improve the quality of my images in a technical sense. This led to me developing a holistic photographic system called True Light Capture, which I started running workshops in a couple of months back. One of the many aspects I wanted to address was the less-than-optimal monochrome image. The question was, of course, "why is this so?"

From much experimentation and observation I came to a few conclusions that eventually solved the issue. I will try to summarise briefly those points relating to the camera end of things only.
(Most of my testing was done with an Alpha 900, so Mike can email me if he wants some inside info to help turn his new baby into a monochrome monster.)

The problems are as I see it...

Camera sensors are not equally sensitive across the three colour channels. In the case of the A900, for any given exposure the red channel is the poorest exposed, the blue not too bad, and the green perfect. I won't bore people with the figures, but it means that the noise signatures of the three channels are different, and there are also differences in tonal gradation, highlight clipping, and shadow clipping points. This is of course an issue with colour images, but it's even worse for monochrome, as most monochrome images are derived by mixing different percentages of the red and green channels, and sometimes blue, which gives an uneven noise pattern and tonality to the resulting image.

This channel exposure variation also means that the noise apparent in a scene's objects will be very dependent on the colour of each object in the original scene. For example, a red object may display a nasty noise pattern whilst a green object might look quite fine-grained.

The channels not clipping equally causes many other issues but the worst of them for monochrome is the impact it has on highlight and shadow tonality, which can easily end up slightly posterized.

The imbalance also affects overall detail at the textural level, which for colour is not much of an issue, but which presents a serious issue for monochrome, it being so dependent on fine textural rendition for its classic 3D look.

Because the real problem is inherent in the actual capture it does not respond well to the tricks we can apply during Photoshop manipulation and indeed it can easily be made to look worse.

Another issue, I feel, is the lenses used: basically, most modern lenses are just too high in contrast. I look at it this way: if it isn't recorded, it can't be edited, and high-contrast lenses render highlight tonality perilously close to the clipping points, or even beyond them, and bury the shadow tones so far down the scale that they lose their editability (for want of a better word).

If you use a low-contrast optic it may well render a flat-looking image, but it will be easy to bump up the contrast and fine-tune the image. A little micro-flare also has some benefits on the subtle tonalities when combined with appropriate editing. The lenses I have obtained the best monochrome results with are those with really good resolution in the green wavelengths and slight softness/flare in the blue wavelengths.

So how might we solve the main problem? Well first you must shoot RAW and then......

Apply filtration to the lens before the light gets to the sensor. This filtration needs to give a balanced exposure across all three channels. It takes some working out, and it is camera-model specific, but trust me on this: when done, the results are superb. Basically you get an image which has identical cross-channel levels of noise, tonality, and sharpness; you can then apply whatever channel-mixing options you wish in Photoshop and obtain high-grade results that withstand an incredible degree of pushing and prodding.

One side benefit of the system and method is that your histogram does not lie any more, so you can easily set the exposure so it just sits below the clipping point and thus make maximum use of your camera's dynamic range and keep the quality optimal.

The deficit? You will be shooting at effectively about ISO 32 on the A900, but this approach is all about quality, so that doesn't bother me at all.

I realise some of this might sound like digital heresy and of course there is quite a bit to the whole process and it works for colour too but perhaps this might plant a seed for some.
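
The channel imbalance described above can be illustrated with a synthetic check. The channel gains and the noise model here are made up for demonstration, not measured A900 figures:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(100, 100))   # synthetic midtone scene
gains = {"R": 0.7, "G": 1.0, "B": 0.9}           # hypothetical sensitivities

for name, gain in gains.items():
    # A weaker channel collects less signal, so its noise floor is
    # proportionally larger once the channel is normalized back up.
    channel = np.clip(scene * gain + rng.normal(0, 0.02 / gain, scene.shape),
                      0, 1)
    print(f"{name}: mean={channel.mean():.3f}  spread={channel.std():.3f}")
```

Under this model the weakest channel (R) carries the most noise, so any monochrome mix that weights it heavily inherits that noise, which is the commenter's point about uneven channel exposure contaminating conversions.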

Arg,

The question is not sensor size but rather pixel pitch. Without looking it up, I would guess that a D3X does not have noticeably larger pixels than my Pen. It also is not known as an awesome low light performer, although it is pretty good for that segment of the market. The D3s does have larger (and fewer) pixels and as far as I know its low light performance sets the industry standard.

In fact I want to thank you for bringing to mind what might be the perfect comparison. The D3s and my Pen both have a chip rated at about 12 megapixels. The Nikon's pixels should be about 4x as large, in proportion with the overall difference in sensor size. Thus a monochrome Pen would perform about as well as a D3s, if you ignore any other advantages of a monochrome chip. I would call that a pretty impressive jump.
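
The back-of-envelope arithmetic above roughly checks out. The sensor dimensions below are nominal published figures and the per-pixel area is only approximate (it ignores pixel borders and inactive sensor area):

```python
def pixel_area_um2(width_mm, height_mm, megapixels):
    """Approximate area per photosite, in square microns."""
    return (width_mm * 1000 * height_mm * 1000) / (megapixels * 1e6)

d3s = pixel_area_um2(36.0, 23.9, 12.1)   # full-frame D3s, nominal
pen = pixel_area_um2(17.3, 13.0, 12.3)   # Four Thirds E-P1, nominal

print(f"D3s ~{d3s:.0f} um^2, Pen ~{pen:.0f} um^2, ratio ~{d3s / pen:.1f}x")
```

The ratio comes out near 3.9x, close to the "about 4x" estimate in the comment.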

Plus, I think that other advantages would make the monochrome Pen potentially preferable to the current D3s chip. For one, the pixels are not spread around in a weird array but rather set up in a neat grid that covers the entire sensor area, minus pixel borders. That would not eliminate aliasing problems, but it would greatly reduce the amount of work that an AA filter would have to do. I believe that the Foveon chip has a similar advantage in this respect.

"Dear Scotth,
No disrespect intended, but yeah, pretty much missed everything we talked about. Please go back and reread."

I did that and I still don't get it.

My assumption is that Mike's biggest complaint with digital black and white is something that is inherent in the sensor technology. Once the sensor is saturated, your highlights are gone. Removing filters and arrays, or post-processing in the camera, is not going to change the fact that the sensor is saturated and there is no data available to work with.

Extra resolution, sensitivity, and whatever are nice; but none of those seem to be the issue.

I agree that minus EC is often preferable with small sensor cameras as they have a tendency to blow highlights. I do this routinely with my Canon G10, typically when doing landscape photography.

DSLRs are a different story in my experience, particularly if one has profiled one's camera with tools like the X-Rite ColorChecker and utilizes software like the latest versions of LR and ACR in PS.

FWIW I also profile my G10 and it does make a difference.

Brooks Jensen has also made the point that when converting digital color to B&W, one should first make the best color adjustment one can before converting, to optimize dynamic range and reduce shadow noise. I concur with that.

But do we need a B&W only sensor camera? I'm not certain that we do these days.

Very interesting discussion around this topic. Thanks. I hope that one day we'll see an effort in this direction - at least to explore this corner of the photography universe and see what comes of it. If it works, then I'd be first in line.

As a software developer, my first thought after reading this discussion is that it should be straightforward to modify one of the camera firmware hacks to record B&W raw files. The GH2 hack seems to be under pretty active development, so it might make for a good candidate.

One of the benefits of doing it in software on a camera with a Bayer sensor is that you can change the spectral sensitivity on the fly (e.g., you could have a menu option to switch between T-Max and Tri-X modes).

I'd do it myself if I had more free time on my hands, but unfortunately I don't. However, I suspect that if you were to organize a donation drive on TOP, you could pretty easily convince one of the hack developers to do it for you.
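
The switchable-sensitivity idea above might look something like this sketch, applied to a demosaicked RGB frame. The weight sets are purely illustrative presets I've made up, not measured T-Max or Tri-X spectral responses:

```python
import numpy as np

PRESETS = {
    "neutral":   (0.2126, 0.7152, 0.0722),  # Rec. 709 luminance weights
    "red-heavy": (0.50, 0.35, 0.15),        # hypothetical: darkens blue skies
    "ortho-ish": (0.05, 0.45, 0.50),        # hypothetical: blue-sensitive look
}

def to_monochrome(rgb, preset="neutral"):
    """Weighted channel sum; rgb is an (H, W, 3) float array in [0, 1]."""
    w = np.asarray(PRESETS[preset], dtype=float)
    return np.clip(rgb @ (w / w.sum()), 0.0, 1.0)

frame = np.zeros((2, 2, 3))
frame[0, 0] = [0.1, 0.3, 0.9]  # a "blue sky" pixel
print(to_monochrome(frame, "red-heavy")[0, 0])  # renders the sky darker...
print(to_monochrome(frame, "ortho-ish")[0, 0])  # ...than this preset does
```

Doing this in firmware rather than in post would only matter for the in-camera JPEG and preview; with raw files the same mix can be applied later with no loss.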

Verrrry interestin' gentlemen. Based on doing B&W for over 60 years (color for only about 45), I have a few thoughts:
If you make a dedicated B&W camera, its viewfinder should be in B&W, which probably means an EVF. Seeing B&W tonality in a color scene is a skill not always easily learned; this would help most users.
The sensor and viewfinder must have the same spectral sensitivity.
Some basic filters could be built in, possibly as software, which would permit better control, and not require carrying a dozen different pieces of glass. Of course, some of us will always want options not built in....
A good B&W mode in a color camera may not be a bad alternative. It would have the above characteristics. Granted there will still be issues re Bayer array and anti-aliasing filters, but done right it might work. And I am NOT talking about in camera conversion, which is currently available.
Personally, I prefer taking it all in color and doing my B&W conversion in Photoshop, where I can use channels and other tools to control the final tonality in my conversions, like I used to do when making B&W internegs from slides, using filters. I am one of those who like to experiment and sometimes make 10 or more versions of an image (and sometimes have trouble choosing the one I like best).
Finally, I really don't care what process you use; it's the final image that counts.

"But do we need a B&W only sensor camera? I'm not certain that we do these days."

I don't think that's the question at all. It's not "need", it's "want". It's not a matter of just having the capability of making B&W photos. Obviously there are a lot of ways to do that digitally. It's more about optimizing a sensor for B&W and what the advantages would be. It would always be a niche product.

That being said, for me, I don't think I'd buy one. But I can understand the desire.

My, how we do go on. If the color image on the back of your camera keeps you from thinking in B&W, tape over it and convert your files to B&W when you put them on your computer. It is still quicker than processing film and making a contact sheet. Just pretend you are shooting Verichrome Pan (it was my favorite also) and underexpose one stop in bright sunlight. Maybe some manufacturer will add a B&W option to its firmware; it should be fairly easy. Maybe someone already has.

Pentax makes black and white cameras. And red and blue and yellow and pink ones too.

"In practice, exposing to the right and not unintentionally blowing out highlights is much, much more difficult than that article lets on. It gets you modest gains today in exchange for major risks. It's very bad practical advice."
Thanks so much for clarifying this. IMO this deserves an article on its own, imagining how many enthusiastic beginners are negatively influenced by such misinformation.

Regarding "Expose to the right:"

M. Reichmann
August 2011
http://www.luminous-landscape.com/tutorials/optimizing_exposure.shtml

"The reason why we want to expose every shot that we take with the data as far to the right of the histogram as possible is because that's where the data is! It also is where the visible noise isn't. The visible noise is lurking in the darker stops."

Ctein
October 2011
"This whole "expose as far right as possible" business is crap. Noise isn't a dominant problem in digital photography any more; blown highlights still are."

In trying to reconcile conflicting "rules" and opinions of photographers I always think back to Fred Picker's wise advice,

"Careful photographers run their own tests."

regards,

Richard

I must say that I am finding this all quite interesting. I would love to hear you both open the "workarounds" can of worms...

Also, Mike said:

"I wish I could provide visual illustrations here, but as always, it seems like it would be very unfair of me to find pictures by strangers on the web and hold them up here as examples of what not to do."

I hereby give you (Mike) permission to take any of the B&W photos (or, really, any of the photos) at my Zenfolio site, http://njcondon.zenfolio.com, post them to TOP, and use them as examples of what not to do. I am pleased with every photo I put up there, but I am quite certain that I am not as good a judge of these things as you are. A two-line comment from you pointing out a mistake would teach me enough to easily counterbalance any angst I would feel at having my errors pointed out in public. I suspect many of your other readers would feel the same.

"Tim,
Not really interested. Would you pay three times the value of the camera for a jury-rigged solution of questionable utility that voids the warranty? That's really not what I'm talking about here at all."

I think it is. The tests on that site address the very first point you raised -- whether or not a dedicated B&W sensor could be significantly better than a color one. The cameras advertised on that site aren't the answer of course, but their hacking seems highly relevant to this discussion.

Does my unabashed love of red filters make me a shallow person?
I actually worried about that back in the 1970s. Now that I am north of 60 and no longer shackled by subtlety or taste, it doesn't bother me so much. Go ahead, grab those blue sliders and watch the sky go black. It will be our little secret.
This thread is a pip. Love it.

The reason I find the whole ETTR business a bad approach is that it is not practical:
+ If you are in a situation where you can take a test shot, check the histogram, compensate exposure, and shoot again, then you might as well just bracket 3-5 raw shots or just go HDR.
+ If you have no time for this (because you are capturing a unique instant, aka the decisive moment, etc.), you are better off protecting those highlights and underexposing 2/3 or even a full stop. If you have to choose, dealing with noise is better than dealing with clipped highlights (noise-reduction algorithms keep evolving at a much better pace than any highlight-recovery hack I've found so far).

Color transformed to B&W carries powerful editing ability that would not be found in a B&W sensor file, short of external, physical filters in front of the lens. The issue with physical filters is that they are global, and global is sometimes not desirable as it is applied everywhere. The same with simple, global color filters in post processing software.

Anyone who has used NIK uPoint selections, in NIK plugins or in Nikon Capture NX2, knows that one can apply a multitude of effects to incredibly complex, immediately available selections. Local contrast, local sharpening, blurring, toning, local noise reduction — almost any effect or edit tool in the arsenal can be selectively applied. Photoshop selections offer similar power, and again these can rely on the underlying color of the raw file to make the selections.

Anyone who has used NIK Silver Efex Pro knows that the selections based on the underlying color data are powerful, fast, and can produce a stunning image.
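The channel-mixing idea behind those tools can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual algorithm: `mono_with_filter` is a hypothetical helper that assumes a linear RGB array from a demosaicked raw file. Weighting red heavily and zeroing blue mimics a deep-red filter (dark skies); Rec.601 luma weights give a neutral rendering.

```python
import numpy as np

def mono_with_filter(rgb, weights):
    """Convert a linear RGB image to monochrome with channel weights
    that mimic a colored filter over the lens (hypothetical helper)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                     # normalize so overall exposure is preserved
    return np.clip(rgb @ w, 0.0, 1.0)   # weighted sum of R, G, B per pixel

img = np.random.rand(4, 4, 3)           # stand-in for a demosaicked raw file
red_filtered = mono_with_filter(img, [0.9, 0.1, 0.0])        # "deep red" filter
neutral = mono_with_filter(img, [0.299, 0.587, 0.114])       # Rec.601 luma weights
```

A dedicated B&W sensor records only the weighted sum; the per-channel data needed to change `weights` after the fact is gone, which is the editing power being described above.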

Yes, I suppose that more pixels, or pixels with more dynamic range or higher ISO, are possible if the sensor does not have to group several RGB-filtered sensor sites to make a color pixel. But that raises the question of whether we already have enough pixels, or whether the dynamic range is already great enough. D3s versus D3x versus 80-megapixel medium format backs.

Part of that is how big the print is going to be. What is the subject, the lighting conditions, the distance to the subject, and so on.

I heard Joe McNally ask recently whether anyone really needed more pixels. Most folks in the audience agreed that they did not.

So, if there are enough color pixels already (or will be with the larger pixel count sensors in the pipeline), what is the advantage again of monochrome pixels?

..The anti-aliasing filter is a little easier to talk about. Getting rid of it produces a modest improvement in resolution, and I emphasize modest.
Not sure that the effect of the anti-aliasing filter is as minor as Ctein suggests, having just acquired the Ricoh GXR M-Module, which produces a clarity and transparency of color in its DNG files that is much better than that of the GXR A/12-50mm-e and A/12-28mm-e units, which do have an anti-aliasing filter. However, I don't know what other differences there are in the sensor or electronics of the new GXR-M. The GXR-M, in my view, also produces better DNG files in terms of color clarity and transparency (which is the best way I can find to describe the look) than the Leica M8 and M9, which also lack anti-aliasing filters, but have older sensors.

—Mitch/Bangkok

A properly exposed-to-the-right image has zero blown highlights, so blown highlights are not an issue. Bracketing can save time in hurried situations, when one cannot make the best exposure. That, however, is no excuse for not making the best exposure when one does have the time. HDR does not work well in all situations, and it brings its own problems.

Shadows, the lowest-exposed portion of the sensor's range, always have some noise. ETTR reduces THAT noise.

Take some account of how much data is found in each stop of light in the histogram. Far more data are found in the bright areas than in the darkest, and editing has more headroom with the additional data.
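That "more data in the bright stops" point follows directly from the linear encoding of raw files; here is a small sketch, assuming a hypothetical 12-bit sensor, counting the discrete levels available in each stop below clipping:

```python
# Raw sensor data is linear: each stop down from clipping contains
# half as many discrete levels as the stop above it.
BITS = 12
full_scale = 2 ** BITS  # 4096 levels for a hypothetical 12-bit sensor

levels_per_stop = []
top = full_scale
for stop in range(1, 8):
    bottom = top // 2
    levels_per_stop.append((stop, top - bottom))
    top = bottom

for stop, levels in levels_per_stop:
    print(f"stop {stop} below clipping: {levels} levels")
# prints 2048, 1024, 512, 256, 128, 64, 32 levels for stops 1 through 7
```

The brightest stop holds half of all the levels the sensor can record, which is why placing exposure as high as possible (without clipping) leaves the most editing headroom.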

Dear Tim,

Pixel size is a useful metric, but not a very precise one. You can find more than two stops variation in speed between cameras/backs with the same size pixels. In other words, the best of cameras with 3 micron pixels may outperform the worst with 6.

On average, bigger may be better, but no one buys an average, they buy a singular and specific implementation. And then, size doesn’t trump everything.

Aliasing is a problem with any discrete element sampler. A monochrome sensor won't produce color banding... but it will still produce banding.

Some people hate aliasing so much that a nonfiltered sensor will be almost useless to them. Others hardly notice it at all.

~~~~~~~~

Dear scotth,

You have taken one of many points that we discussed... and didn't even agree on... and treated one side of that disagreement as if it were THE point of what we wrote. That's why I said you missed almost everything.

If Mike and I had thought this all could be properly summarized in a few dozen words, we'd not have spent a day of our time writing 100 times as much.

~~~~~~~~

Dear Richard,

Nice theory, bad practice — unless you've got all the time in the world or are being paid to run tests. It's popular today to reject appeals to authority on the theory that you shouldn't trust anyone and should always check things out for yourself. Fine idea if you're gonna live forever and have mastered the 48-hour day.

Most of the time, you just have to use your judgement to decide what you want to run with. Not run interminable tests.

Most of what I know I got from reading and thinking, not running tests.

~~~~~~~~

Dear Fernando, et al.,

Really, my next week's column will be entirely on this topic, so maybe let's hold the technical discussions of exposure until then?

The camera histogram, by the way, is a notoriously unreliable source of information on whether you're clipping highlights or shadows.

pax / Ctein
==========================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
==========================================

I find the whole 'niche market' discussion of a B&W-only digital camera interesting. I would gladly buy one myself (in Nikon DX/FX or Micro 4/3 mount). I still like using my GRD I as a B&W-only camera from time to time; at least I find its out-of-camera B&W files pleasing and film-like. Now, if only my DP1 weren't so slow in operation, it could potentially fill that role, but I like the controls/interface of the GRD a lot more.

Given that I've just gotten my second Digital IR camera conversion done, isn't Digital IR even *more* of a niche market than Digital B&W only cameras would be?

Is Digital IR really even more of a niche? I may be wrong, but I'm guessing not, since there are several IR conversion companies in North America, and many overseas as well — not to mention the already-linked maxmax B&W DSLR conversions.

Sure, a lot of the potential resurgence of Digital IR could also be film IR shooters switching to digital for convenience during airport or international travel, or for lack of local options for development/processing of film. There's also the heavy convenience factor of having a G10/G11/G12 or other fully featured small camera IR-converted, so you have an IR camera to play around with for a minimal investment.

Sure, I'm a bit of an odd case. I'm young-ish (I turn 33 this month), but I started shooting on color & B&W film when I was about 12. (My father is a cinematographer, so it's kind of in my blood.) I stopped shooting for a few years, then got my own film SLR and started shooting heavily again with B&W & color film. Then I got a 2MP Canon P&S and shot more casual snaps, and eventually got a D70, which only increased my Nikon lens & camera purchasing.

My current main kit is a Nikon D700 and D70IR, at least when I'm shooting for pay. For my personal work or when I travel, I usually bring my GF-1, GH2 & my new GF-1 IR for the convenience. Sure, I do like to shoot my Bronica 6x6 or Pentax 6x7 gear from time to time to keep in practice.

I also have access to several 8mm, 16mm & 35mm motion picture cameras, and even have a fridge with a lot of 8mm & 35mm B&W IR motion picture stock. Let alone all of our 35mm color negative stock.

I guess I'm just trying to say that not everything works for everyone. Different tools in the toolbox for different situations. No one has a perfect camera. That seems to keep being mentioned in these discussions about a potential B&W digital camera. Your shooting style/methods are different than mine due to how we learned to shoot, what we shoot, or just the shape/size of our hands.

Now, perfect lenses, that's a whole other story. :)

Chris

Why remove the Bayer filter? Just discard the colour channels in the in-camera software, and have the RAW file record all pixels indiscriminately. Sure, there'll be some variation in luminosity between adjacent (odd-coloured) pixels, but not much, and it should still give image quality better than the existing kludge. A really clever software engineer should even be able to write code that can be installed retrospectively, like a firmware update. No need for expensive new cameras, just reprogramme the superseded model sitting in your cupboard. Pay the programmer a modest amount, and convert it to a "new" B&W camera. (And a really, really clever software engineer should be able to use hidden colour channels to give in-camera filters, without letting colour into the RAW files.)
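A rough sketch of that idea (hypothetical helpers, assuming an RGGB mosaic held in a NumPy array): treat every sensel as a luminance sample, optionally applying per-site gains so a gray subject reads evenly despite the different transmissions of the R, G, and B filters.

```python
import numpy as np

def bayer_to_mono(raw):
    """Naive monochrome conversion: treat every sensel as a luminance
    sample, ignoring its color filter. Full resolution, no demosaicking,
    but adjacent pixels disagree slightly because the R/G/B filters pass
    different fractions of the light (the 'variation in luminosity'
    mentioned above)."""
    raw = raw.astype(float)
    return raw / raw.max()

def bayer_to_mono_balanced(raw, gains=(1.9, 1.0, 1.0, 1.6)):
    """Same idea, but with per-site gains (hypothetical white-balance-like
    factors for the R, G, G, B sites of an RGGB mosaic) so a neutral gray
    subject reads evenly across the mosaic."""
    out = raw.astype(float).copy()
    out[0::2, 0::2] *= gains[0]  # R sites
    out[0::2, 1::2] *= gains[1]  # G sites
    out[1::2, 0::2] *= gains[2]  # G sites
    out[1::2, 1::2] *= gains[3]  # B sites
    return out / out.max()
```

The residual checkerboard under strongly colored light is the catch: the gains that flatten a gray card won't flatten a red wall, which is presumably why the commercial conversions physically remove the filter array instead.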

The comments to this entry are closed.