Introduction: The foregoing discussion of a possible B&W-only camera in these environs (in this post and then this one) has deliberately ignored an issue that is nevertheless of some concern...namely, implementation. How good could a B&W-only sensor be? Would it have any appreciable advantages over conversions from a conventional color sensor?
My observation has been that one potential problem with calling for a niche device is that the first manifestation of the device type can be taken as the litmus test of the whole concept, when its acceptance or rejection by the market might in fact depend far more on the implementation of that idea in that specific device. One thing I took for granted in the foregoing posts on this topic is that a B&W-only sensor would provide pleasing B&W output! But that's not necessarily a given—I could see a B&W-only camera coming along that I would hate simply because the output didn't look good. So how hard would it be to get good B&W out of a dedicated digital sensor? Since much of this is theoretical to me, and my comments mostly speculative, I thought I'd ask Ctein to join me in talking it over.
Mike: Ctein, there are a number of issues that we can discuss in turn. But to begin with, it's often been repeated on the web that there are probably some inherent advantages to be gained by discarding the color filters on the photosites and the anti-aliasing filters common to Bayer-array sensors. The assumptions are a) that a dedicated B&W-only sensor would be sharper than a Bayer sensor with the same number of pixels, and b) that it would have greater light sensitivity, because those filters cut down the amount of light reaching the photosites. Yet the few direct experiments available on the web generally conclude that while these effects are real, they're not as decisive as some might wish. What are your thoughts?
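[For readers who like to tinker: the two assumed advantages above—resolution lost to demosaicing, and light lost to the color filters—can be roughed out in a few lines of NumPy. This is strictly a toy model of my own devising, not any real camera pipeline: the photon count, the filter transmission, and the five-point average standing in for demosaicing are all made-up simplifications chosen only to make the two effects visible.]

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64

# Synthetic gray test scene with fine detail (values stay in [0.3, 0.7]).
y, x = np.mgrid[0:H, 0:W]
scene = 0.5 + 0.2 * np.sin(1.3 * x) * np.cos(1.1 * y)

PHOTONS = 3000       # photons per photosite at full exposure (invented figure)
FILTER_LOSS = 1 / 3  # assumed average transmission of a color filter

def capture(transmission):
    """Simulate an exposure with photon (shot) noise at a given transmission."""
    counts = rng.poisson(scene * PHOTONS * transmission)
    return counts / (PHOTONS * transmission)  # normalize back to scene units

# (a) Monochrome sensor: every photosite records full-resolution luminance.
mono = capture(1.0)

# (b) Bayer-style sensor: each photosite loses light to its color filter,
# and luminance must be interpolated from neighbors. A crude 5-point
# average stands in here for real demosaicing.
raw = capture(FILTER_LOSS)
p = np.pad(raw, 1, mode='edge')
bayer = (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
         + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0

def rms(img):
    """Root-mean-square error against the true scene."""
    return float(np.sqrt(np.mean((img - scene) ** 2)))

print(f"monochrome RMS error:  {rms(mono):.4f}")
print(f"bayer-model RMS error: {rms(bayer):.4f}")
```

Even in this cartoon version, the monochrome capture tracks the scene more closely: the Bayer model pays twice, once in shot noise from the lost light and once in detail smeared by the interpolation. How large those penalties are in a real camera is exactly the question under discussion—the toy model only shows that both effects exist, not how decisive they are.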