
Monday, 18 June 2007



"Many cameras have video ASICs that require Bayer RGB input. The new filters were designed to accommodate that fact."

A-hem! I think they've got applications in mind, if they are thinking of interoperation with specific components.

A one-stop improvement does not seem like much, given that Canon improves by one or two stops with every generation of its high-end cameras without changing the array.

Oh Mike, please do not fall prey to the (pun intended/not intended) plague. Your writing is excellent as it is and you don't need this writer's crutch to express what you meant.

I think sensor design is still very immature, and manufacturers have so far focussed on reducing prices (and hence replicating existing technology) in sensors of reasonable quality. What makes it so shocking that this particular idea has taken so long to emerge is that it emulates nature! The human eye works on a luminance/colour basis!

How about combining this idea with other technologies?

Combine it with the Foveon sensor in a chessboard fashion - alternate pixels are panchromatic for sensitivity (ooh - just like the eye's rods) and filtered on three layers for colour (ooh - just like the eye's cones).

Combine it with the Fuji 'S' concept of two 'sites' per 'pixel'. Let the big, sensitive site be panchromatic, and let the smaller sites take care of the Bayer array. At low sensitivities, the sensor wouldn't behave any differently to existing sensors. At high sensitivities, one would have a far cleaner luminance layer, at the expense of some colour information (gee whiz - just like the human eye...again). Nice ISO 200/400 colour images, and smooth ISO 3200/6400 B&W images from the same sensor. Dreamy.
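The core of the idea above - keep the colour ratios from the noisy colour sites, but take luminance from the clean panchromatic sites - can be sketched in a few lines. This is a hypothetical illustration only: the function name, the toy arrays, and the Rec. 601-style luma weights are all assumptions, not any real sensor's pipeline.

```python
import numpy as np

def merge_pan_luma(pan, rgb):
    """Replace the luminance of a noisy RGB image with a cleaner
    panchromatic channel, preserving the RGB image's colour ratios."""
    # Rec. 601-style luma of the noisy colour image
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    scale = pan / np.maximum(luma, 1e-6)   # per-pixel luminance correction
    return rgb * scale[..., None]          # rescale R, G and B together

# Toy example: a 2x2 noisy colour patch plus a clean panchromatic layer
rgb = np.array([[[0.2, 0.4, 0.1], [0.5, 0.2, 0.3]],
                [[0.1, 0.1, 0.1], [0.6, 0.6, 0.6]]])
pan = np.full((2, 2), 0.5)
out = merge_pan_luma(pan, rgb)
```

By construction, the luma of `out` matches the panchromatic layer exactly, while each pixel keeps its original hue and saturation.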

Dr. Hamilton's comment on the RGB ASICs is interesting. My understanding is that three axes are required to describe colour (e.g. R, G, B). But 'Lab color' is described by luminance (L) and two colour components (a and b). Hence only two coloured filters should be required in the implementation above. These could be arranged in a chessboard fashion, leading to far simpler interpolation techniques than the Bayer array allows.
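To see why a chessboard of two channels interpolates so simply: every missing square is surrounded on all four sides by squares of the other channel, so a plain neighbour average fills each plane. The sketch below is an assumption-laden illustration of that geometry, not any manufacturer's demosaicing algorithm.

```python
import numpy as np

def interpolate_checkerboard(raw, channel):
    """raw: 2-D mosaic where channel 0 occupies squares with (row+col)
    even and channel 1 the odd squares. Returns the full-resolution plane
    for `channel`, filling missing squares by 4-neighbour averaging."""
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]
    have = (rows + cols) % 2 == channel          # squares we measured
    plane = np.where(have, raw, 0.0)
    padded = np.pad(plane, 1)                    # zero-pad the edges
    mask = np.pad(have.astype(float), 1)
    # sum and count of the up/down/left/right neighbours
    nsum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:])
    ncnt = (mask[:-2, 1:-1] + mask[2:, 1:-1] +
            mask[1:-1, :-2] + mask[1:-1, 2:])
    return np.where(have, plane, nsum / np.maximum(ncnt, 1))
```

On a uniform patch the interpolation is exact, which is the sanity check one would expect of any demosaicing scheme.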

These ideas are simple extensions of what already exists. But something tells me the optimum solution will be something more...organic. A quasi-random (but known to the image processor) distribution of various luminance and colour cells of various sizes would be closer to what nature has developed. And what's good enough for nature....

Oh, by the way - you read all of this stuff first here, right, Mr. Fuji and Mr. Sony. It's all copyright me, 2007, okay?

In response to Robin's comments on mixing pano & Foveon sensors: I'd rather have it the other way - big RGBs (less colour noise) and small panos (good sensitivity anyway). Luminance noise is easier to deal with in processing.
Why not combine the 2 ideas and have a 2-colour Foveon-type sensor site?

Nice post - I don't know whether I would have had access to this kind of thing were it not for TOP.

It's kind of a relief to read about this development. It gives hope that increasing processing power in cameras and increasing pixel density will be used to make better images. That would be a big change from the present, where increasing pixel density leads to worse images, and more processing power is used to try to hide the newly created problems.

I like the idea of using light and dark pixels to increase dynamic range. But I admit I'm no expert. To me any change in this area, if it encourages still more changes, is good.
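The light/dark-pixel idea can be illustrated with a minimal blend: the sensitive ("light") site clips in the highlights, while the insensitive ("dark") site keeps them; below the clipping point we trust the sensitive site, above it we switch to the dark site scaled by the sensitivity ratio. The ratio, clipping level, and sample values below are illustrative assumptions only.

```python
import numpy as np

RATIO = 4.0   # assumed sensitivity ratio between the two kinds of site
CLIP = 1.0    # assumed clipping level of the sensitive site

def merge_exposures(light, dark):
    """Blend a clipping-prone sensitive reading with a scaled
    low-sensitivity reading to extend dynamic range."""
    return np.where(light < CLIP, light, dark * RATIO)

light = np.array([0.2, 0.8, 1.0, 1.0])   # last two values are clipped
dark = np.array([0.05, 0.2, 0.5, 0.9])
out = merge_exposures(light, dark)       # highlights recovered from dark
```

A real pipeline would feather the transition rather than switch abruptly, but the hard threshold keeps the principle visible.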

Interesting story, and being noticed: Rob Galbraith (http://www.robgalbraith.com) has a nice plug for The Online Photographer today.

Thanks for the interview! It is refreshing to hear such things, not intermingled with marketing speak and 'yeah, we can do everything better' answers.



