
Friday, 15 June 2007



What if somebody started making a camera with a monochrome sensor? I think it should be four stops faster than any current sensor. Wouldn't that be cool?

[[What if somebody started making a camera with a monochrome sensor? I think it should be four stops faster than any current sensor. Wouldn't that be cool?]]

Kodak already did, in 2002.


Very interesting concept, but am I the only one who finds the level of noise very high for ISO 200, and the color fringing...? Not to rain on any parade, but the Kodak sensors have not always been top of the game for high ISO...

Nevertheless, very interesting.

[[but the Kodak sensors have not always been top of the game for high ISO...]]

That was my first thought as well. Kodak sensors used in point-and-shoot digital cameras have always had noise problems. Kodak's solution has always been to bring down their giant hammer of a noise-reduction algorithm, smoothing out the noise along with anything resembling fine detail.

However, anything new in the area of sensor design is a Good Thing (tm) and hopefully we'll see a response from Sony's sensor division that is more than just higher pixel counts and recommended pixel binning...

Károly Nikl,
I think the monochrome sensor is what they use in those dinky camcorders featuring "nite shot" mode. Notorious for filming "A Nite In Pairs"... and other luminous green-eyed monster people.

[[Not to rain on any parade but the Kodak sensors have not always been top of the game for high ISO...]]

Hence the suggestion that Kodak will make bank *licensing* the idea, not building it.

I'm curious about the decision to keep the GGRB ratio intact -- is the increased green sensitivity still that important?

This sounds like a creative solution to a limit that engineers are running up against. I like that. It will be interesting to see what, if any, are the drawbacks to using this new sensor pattern. (The engineers in the thousand nerds blog mentioned color bleed was a difficult issue to tackle. Until we get image samples, we can't decide if they were successful.)

Oh, and again from the thousand nerds blog (why was I drawn to that link? I can't imagine.), it sounds like this technology will NOT improve the detail in the image. You still have about 50 percent of the photosites being used for luminance information (Bayer sensors use the green channel). The only real advantage is that the luminance channel has no filter over it so it can gather more light. Unless they do something else fancy with it between now and production, it may not offer any advantage to photographers who have no need for high sensitivity.

I'm glad to see that people are willing to consider new approaches to sensor technology. Sigma gets a thumbs up for this, too. How much impact this will have on the photographer remains to be seen, though.

Brilliant it is, good for Kodak.

We know from school that the human eye's color resolution is about 1/3 to 1/4 of its luminance resolution. That is widely exploited by TV: the broadcast signal contains a third as much color information as brightness (they had to reduce the amount of transmitted information in order to squeeze into the available radio bandwidth).

I just hope that Kodak will do well with this invention. History shows that although they were world leaders in many areas, the cream of the business slipped away through bad marketing planning, or even a lack of persistence.

The human eye's color resolution may be bad, but that doesn't mean we can give up color resolution in a photograph. Do the experiment -- take a good shot at very low ISO and print it. Then print the same shot after putting 2 pixels of chroma blur on it. I guarantee you: your human eye will be able to tell the difference.
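The commenter's experiment can be sketched in code. This is a toy, pure-Python version (not any camera's actual pipeline): convert RGB to YCbCr with the standard BT.601 coefficients, box-blur only the chroma along one row, and leave luminance untouched. On a hard red/green edge the colors visibly bleed while the brightness step survives.

```python
# Toy sketch of the "chroma blur" experiment on one row of synthetic pixels.
# Real testing would use an actual photo in an image editor; the conversion
# coefficients below are the standard ITU-R BT.601 ones.

def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr (BT.601 coefficients)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344 * (cb - 128) - 0.714 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

def chroma_blur_row(pixels, radius=2):
    """Box-blur only Cb/Cr along one row of RGB pixels; Y is untouched."""
    ycc = [rgb_to_ycbcr(*p) for p in pixels]
    out = []
    for i, (y, _, _) in enumerate(ycc):
        lo, hi = max(0, i - radius), min(len(ycc), i + radius + 1)
        cb = sum(p[1] for p in ycc[lo:hi]) / (hi - lo)
        cr = sum(p[2] for p in ycc[lo:hi]) / (hi - lo)
        out.append(ycbcr_to_rgb(y, cb, cr))
    return out

# A hard red/green edge: after the chroma blur, color bleeds across the
# boundary even though every pixel's luminance is unchanged.
row = [(200, 0, 0)] * 4 + [(0, 200, 0)] * 4
blurred = chroma_blur_row(row)
```

The point of the experiment survives even in this toy: the eye is not fooled, because the chroma smear is visible at edges even when luminance is pixel-perfect.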

Not that this is necessarily a bad thing. Variation in sensors is a good idea. I still harbor the idealistic dream of an age of interchangeable sensor modules, swapped in and out of the camera like lenses to take best advantage of a given situation.

I'm not convinced at all. They're trying to get a better noise response from small photosites, but the overall resolution is going to be that of many fewer, larger pixels. How about just using fewer pixels?

More of my thoughts: http://doonster.blogspot.com/2007/06/pushing-noise-envelope.html

Could be interesting for cameras with b&w modes - greater tonal accuracy than simply taking 2.5*R+6*G+1.5*B from an 8-bit file, by far!
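The mixing recipe the commenter mentions normalizes to gray = 0.25·R + 0.60·G + 0.15·B (the weights sum to 10). A minimal sketch of that conversion, assuming 8-bit channel values, shows why it's a compromise: the gray value is derived from three already-quantized channels, whereas an unfiltered "L" photosite would record luminance in a single read.

```python
# The commenter's B&W mixing weights (2.5*R + 6*G + 1.5*B, out of 10),
# applied to 8-bit RGB values. A hypothetical illustration, not any
# camera's actual conversion.

def weighted_gray(r, g, b):
    """Mix an 8-bit RGB triple down to one gray value."""
    return round((2.5 * r + 6.0 * g + 1.5 * b) / 10.0)

# weighted_gray(255, 255, 255) gives 255; a pure green pixel lands at 153.
```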

In a normal Bayer array you lose half the light - one stop - on the green sites. Removing the color filter thus gives you a maximum of one stop improved sensitivity. Used judiciously it means you can smooth out the noise a bit better for the other sites, for a practical gain of somewhere between 1/2 to 1 stop in the shadows, with my guess falling squarely in the middle. Say, 2/3 to 3/4 stop improvement in noise. Forget any dreams about 2-3 stops.

On the other end, half your photo sites are suddenly one stop more light sensitive, meaning you will saturate them one stop earlier than the color sensing sites. So your highlights - the top stop - will have only half the detail of today.
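The highlight argument above is easy to model. This is a deliberately crude sketch with made-up numbers, assuming the color filter passes about half the light (one stop) and every photosite has the same full-well limit:

```python
# Toy model of the highlight trade-off: an unfiltered site collects the
# full signal and so saturates one stop before a filtered site does.
# FULL_WELL and the 0.5 filter factor are arbitrary assumptions.

FULL_WELL = 1000  # same saturation level for every photosite

def filtered_signal(exposure):
    return min(exposure * 0.5, FULL_WELL)  # color filter passes ~half the light

def panchromatic_signal(exposure):
    return min(exposure, FULL_WELL)        # no filter: full light, clips sooner
```

At an exposure of 1000 the filtered site is only at half scale while the clear site is already saturated, so the top stop of highlight detail exists only in the (half of the) sites that carry color filters.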

The drawbacks don't matter all that much for small digicams, which have resolutions outstripping their optics already. But the drawbacks are more serious for big sensors. There's a good reason their press releases are talking about cameraphones, not DSLRs.

Also, Fuji's sensors are basically using the same idea, implemented differently. You haven't seen them take the imaging world by storm, have you?

The theory of Bayer-pattern image sampling allows for a plethora of different patterns, including "white" pixels, so the Kodak pattern by itself should not be patentable, since it should fail the inventive-step test, being an incremental advance over existing techniques and obvious to a person skilled in the state of the art. What may be patentable are the post-processing techniques to extract information and reconstruct an image with minimal artifacts.

I could certainly think of other patterns, such as a CWYW pattern which would probably do the same job with less impact on chroma spatial resolution (don't even need a magenta filter, you can infer the green information from the luminance, cyan and yellow channels). This is just a modification of Nikon's CMYM pattern they used on some of their cameras, and Kodak on the DCS620x for example: http://www.lonestardigital.com/DCS620x.htm

In the end, the Kodak pattern is trading luminance for chroma resolution. It's not a big deal in my book. A three-layer pixel (ideally) would not suffer this trade-off, though current implementations have other problems which are specific to the colour separation technique (silicon absorption profile). Fuji's organic layered pixel technology may be able to overcome Foveon's problems in this regard.

Well, interesting idea. This is essentially a hardware implementation of what amateur astrophotographers using CCDs have been doing for years - full res and unfiltered for luminance, and binned for RGB to reduce the required exposure times. Works extremely well there.
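The astrophotographers' LRGB trick can be sketched in one dimension: shoot luminance at full resolution, shoot color binned 2x, then reattach the upsampled color to the sharp luminance channel. This is a simplified illustration with hypothetical numbers, not anyone's actual stacking software:

```python
# One-dimensional sketch of LRGB combining: full-resolution luminance
# plus 2x-binned color, as amateur astrophotographers do with CCDs.

def bin2(values):
    """2x binning: average adjacent pairs (what the color exposure records)."""
    return [(values[i] + values[i + 1]) / 2 for i in range(0, len(values), 2)]

def lrgb_combine(lum, binned_rgb):
    """Replicate each binned color sample across two pixels, then rescale
    each pixel's color so its implied brightness matches the sharp L value."""
    out = []
    for i, l in enumerate(lum):
        r, g, b = binned_rgb[i // 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 luma of the color sample
        if y:
            s = l / y
            out.append((r * s, g * s, b * s))
        else:
            out.append((l, l, l))  # no color info: fall back to neutral gray
    return out
```

By construction, the luminance of every output pixel matches the full-resolution L channel exactly, while the hue comes from the coarser (but faster-to-expose) color data, which is the whole point of the technique.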

I just wish somebody (Leica? they certainly seem to be willing to build rather low-volume products) would make a modern "B+W only" camera with no filters on the CCD a bit like the old Kodaks - even better sensitivity at low light levels, and real IR into the bargain!

Of course if the M8 is anything to go by everybody would just bitch about needing an IR cut filter all the time, so better make it internal with a little sliding switch or something :)

Enough dreaming.

Thirty years after the introduction of the Bayer pattern, Kodak finally figures out you get better images by imitating the human eye instead of a cheap color TV set ...

We're so enamored with color, we forget that less than 1/20th of our eye cells are dedicated to color reception (cones), the vast majority are dedicated to pure luminosity (rods). (http://en.wikipedia.org/wiki/Rod_cell)

Has anyone looked at the images at Imaging Resource? (link above) On the third set of images (the juggler), I see a lack of detail in both the "current technology" shot and the new-sensor one. Look at the foot of the red pipe and the first button (from the bottom) on the shirt. Granted, it's a prototype, but...

If I'm understanding all this correctly, then the skeptics are right that the Kodak array isn't any better than a Bayer array--but that's not the point. As Mike explains, color sensor resolution and accuracy are products of clever interpretation as well as physical design. Compared to the Bayer pattern, the Kodak array is simply a different method of "cheating"--a matter of priorities and trade-offs. The advantage (as I understand it) is that the new method is a better complement to the state of the art in cheaply producible sensors (resolution-rich, sensitivity-poor). I.e., leverage.

As with the Bayer, it will be up to software to best interpret the data in any given circumstance.

Mike, it is my understanding that the extra G is to mimic the human eye's extra sensitivity to green. I suppose the presence of "L" pixels may allow a slightly different ratio, if that's what you're suggesting, but if so I suspect it's a matter of time and research.

Obviously, I'm no expert on this stuff, but your question leads me to hope that someone who is knowledgeable can comment on whether we can call this an evolutionary step from RGB to LAB for digicam sensors.

I doubt that sensor manufacturers will wait until Kodak provides engineering samples to begin working on this. There's no law against making one of these for yourself and building algorithms to decode it, as long as you don't sell it without a license. If it works out, then it's a licensing fee to Kodak and off to market with the new technology (possibly ahead of Kodak, if they'll allow it). If it doesn't work out, then you're not waiting until next year when Kodak finally sends out the engineering samples to find that out.

"What if somebody started making a camera with a monochrome sensor? I think it should be four stops faster than any current sensor. Wouldn't that be cool?"

http://mega-vision.com/products/Mono/Mono.htm - I'd love to have one of these to play with!

I run the Windows "My Pictures" screensaver - it entertains by bringing up photos I'd long forgotten about, and recently I've been prompted to revisit files made on my first DSLR, a Fuji S1. The more I look, the more I'm convinced that (given the same sensor area) fewer pixels mean nicer photos...

Cheers, Robin

Perhaps I am the only one in North America who finds the news of this new sensor dreadful, but I do. The original Bayer patent was based on a cell of two luminance pixels, and one red and one blue pixel---not two green, one red and one blue. It was a scheme originally developed for single-tube vidicon television cameras. So in essence Kodak is returning to the original Bayer patent, but since they could not get a new patent on old work (and collect royalties), the green element has been retained.

Part monochrome camera, remainder Bayer camera. While the monochrome portion is going to be two stops faster, not so for the Bayer portion. So to suppress noise, the monochrome information will be superimposed on the Bayer info.

All I think it's going to do is create even more artifacts in images.

I can't wait until some company finally comes to the same conclusion that the broadcast industry did a long time ago---make a 3-CCD camera and get beyond matrix-based sensors.

I really think this is more about patent rights and Kodak trying to show something "progressive" in a failing company.


PS In regard to digital monochrome, I shot the DCS 760m for 18 months and it was COOL---base ISO 400. Since then, I have lobbied three of the top camera makers to make a digital monochrome camera or back---including six months of effort with one company in particular. In the end, there seems to be no vision of what digital monochrome can do beyond Bayer from anyone in management. Truly sad.

What's old is new again. Kodak used much the same chroma-subsampling with their PhotoYCC encoding for PhotoCD way back when. The difference is that YCC was 4:1:1 instead of 4:2:1:1 in the new layout, the C1 and C2 components being derived from the primaries with luminance subtracted. Worked pretty well. Chroma-subsampling would be ideal for Photoshop's internal editing space but I'm not holding my breath.
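The 4:1:1-style subsampling described above is simple to illustrate: keep a luma sample for every pixel but only one chroma pair per group of four. A rough sketch (nearest-neighbor chroma replication, not PhotoCD's actual encoder):

```python
# Toy 4:1:1 chroma subsampling in the spirit of PhotoYCC: full-rate Y,
# quarter-rate chroma. The reconstruction simply replicates each chroma
# pair across its group of four pixels.

def subsample_411(ycc_row):
    """ycc_row: list of (Y, C1, C2) tuples. Returns full-rate Y plus
    one chroma pair per four pixels."""
    ys = [p[0] for p in ycc_row]
    chroma = [(ycc_row[i][1], ycc_row[i][2]) for i in range(0, len(ycc_row), 4)]
    return ys, chroma

def reconstruct_411(ys, chroma):
    return [(ys[i], *chroma[i // 4]) for i in range(len(ys))]
```

For eight pixels this stores 8 luma values plus 2 chroma pairs (12 numbers instead of 24), i.e. half the data, with luminance preserved exactly and only color resolution sacrificed.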

These guys' shirts would be ideal for testing.

What if the firmware of any camera manufacturer was changed to deliver luminance from the green channels exclusively?
Just asking.

Hey Guys,
The answer was there all along, look at their shirts, surely the shirt designer should get some credit/royalties!!

What seems to be more interesting to me is a technology that will allow the demosaicing pattern to be:

1. Adaptive (automatically)
2. User selectable
3. Different for different parts of the image
4. Ultimately - adjustable during post processing...

But this is surely a step in the right direction.

What if, in the array, you throw in some neutral gray pixels that are, say, one stop less sensitive to light than the RGB pixels, as well as the clear pixels that are more sensitive. Let's say every fourth clear pixel is a less sensitive neutral gray one. That could lead to an improvement in dynamic range as well as sensitivity.
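The dynamic-range idea in this comment can be sketched with a toy model: pair a clear (fast) site with a neutral-gray site one stop slower, and when the clear site clips, fall back on the gray site's scaled-up reading. All the numbers here are arbitrary assumptions, purely for illustration:

```python
# Toy illustration of mixed-sensitivity photosites extending dynamic
# range: a clear site plus a gray-filtered site one stop slower,
# sharing the same (made-up) full-well limit.

FULL_WELL = 1000

def read_pair(exposure):
    clear = min(exposure, FULL_WELL)        # unfiltered, fast
    gray = min(exposure * 0.5, FULL_WELL)   # gray-filtered, one stop slower
    return clear, gray

def merged_luminance(exposure):
    clear, gray = read_pair(exposure)
    # Trust the clear site until it saturates, then use the gray site,
    # scaled back up by its one-stop handicap.
    return clear if clear < FULL_WELL else gray * 2
```

In this model the merged signal tracks the scene up to twice the clear site's clipping point, i.e. the gray sites buy roughly one extra stop of highlight headroom.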

"What if the firmware of any camera manufacturer was changed to deliver luminance from the green channels exclusively?
Just asking."

You can do it yourself. Find some public domain RAW software for your camera and go wild.

I imagine that Steven Johnson is not pleased with this - he doesn't even like Bayer filtering.

I don't see much discussion about how to format one of these RAW files. The last thing we need is yet another (TIFF-based) proprietary RAW format. So how would one represent such a CFA pattern (especially if it is semi-random) in the supposedly unified Adobe DNG RAW format? The answer, I assume, is to ignore the actual sensor's CFA pattern, which can't be represented directly anyway, and store post-interpolated linear data (IMO the way all DNGs should be written, and without a proprietary maker note). Then it would be up to the licensee of this technology to compete based upon the merits of the photos it takes. And speaking of RAW formats, whatever happened to Kodak's ERI JPEG format?



