John Compton and John Hamilton (photo courtesy Kodak)
Craig Ferguson notwithstanding (see below), you can't believe everything you read on blogs. The following, for example, is a mixture of educated guesses, speculation, analysis, reading between the lines, and forecasting. All or some of it might turn out to be right. You know the alternative. I'm doin' the best I can.
First guess: the smattering of news you've seen about this on the internet recently is because Kodak's attorneys just locked the last patent down and gave PR the go-ahead to spread the news.
Second, starting in 2008, virtually every sensor maker in the world will be paying Rochester's Great Yellow Father a licensing fee (or, more likely, another licensing fee) for every sensor they make.
Third, this is the biggest news in digital photography this season. "Biggest" as in "most significant." The biggest in several seasons, in fact.
Fourth, the idea is simple, and brilliant—brilliantly simple, in fact. One of those smack your forehead and say duh, why'n't I think of that?!? ideas.
What is it? It doesn't have a name yet (I even asked). It was invented by Kodak's John Compton and John Hamilton, although evidently they're too modest to call it the Compton-Hamilton Array. Call it Kodak's Transparent Array. (I just coined that; I'm always doing that. Better not quote me.) The idea is that since digital sensors have so many photosites/pixels now, perhaps not 100% of them need to be used to pick up color information. In the Bayer Array (the industry-wide standard), half the photosite lenses are dyed (filtered) green, and the other half are split between red and blue. But just as with a colored filter on your old black-and-white camera, the color filtering cuts down drastically on light transmission (remember how you used to have to add another two stops to your spotmeter reading whenever you made like Robert Adams and slapped that green filter on the ol' view camera? Sure you do). So, Kodak reasoned, why not use half the pixels to pick up color information and just not filter the other half at all?
Brilliant. Simple. But patentable.
In the Transparent Array, half the photosites/pixels are not filtered/dyed at all, and the other half are split in the old 2-green, 1-red, 1-blue ratio.
(The array isn't fixed, either. Several arrangements are possible.)
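The two layouts can be sketched in a few lines of Python. The panchromatic cell below is one plausible arrangement (as noted, several are possible), not necessarily Kodak's actual pattern:

```python
# Classic Bayer 2x2 cell: every site is color-filtered;
# half green, a quarter red, a quarter blue.
bayer = [
    ["G", "R"],
    ["B", "G"],
]

# One possible half-panchromatic 4x4 cell: half the sites are
# unfiltered ("P"), and the filtered half keeps the 2:1:1 G:R:B ratio.
panchromatic = [
    ["P", "G", "P", "R"],
    ["G", "P", "R", "P"],
    ["P", "B", "P", "G"],
    ["B", "P", "G", "P"],
]

def filter_fraction(cell):
    """Fraction of photosites that carry a color filter."""
    sites = [s for row in cell for s in row]
    return sum(s != "P" for s in sites) / len(sites)

print(filter_fraction(bayer))         # 1.0 -- every Bayer site is filtered
print(filter_fraction(panchromatic))  # 0.5 -- half the sites are clear
```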
The result is an array that is one to two stops more sensitive to light. It will presumably yield less real color information than the old Bayer algorithms, but that's not so important now that almost all sensors have so many millions of photosites (quite a different situation from when the Bayer Array was devised in the mid-1970s, or from early digicams, when "a megapixel" was a huge deal. Ancient history, eh?). (Please note also that this has nothing to do with how "colorful" your images will look. How the information is interpolated is still the determinant of that.) Counterbalancing that loss is a higher capture of real detail, in the form of a true luminance channel, and a true increase in light sensitivity.
That jump in light sensitivity is a big one, too. It will mean that the same quality you get from your digicam or cameraphone at ISO 200 will be available at ISO 800. And it will mean that the quality you get from your DSLR at ISO 400 now, you will be able to achieve at ISO 800, with better real detail. (I'm assuming here that DSLR sensors are already optimized for high sensitivity, and that they represent the "one stop" end of the "one to two stop" range of sensitivity improvements Kodak's press release touts. I could be wrong.)
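The stop arithmetic here is just a base-2 logarithm of the ISO ratio:

```python
import math

def stops_gained(iso_before, iso_after):
    """Sensitivity gain in stops: each stop doubles the usable ISO."""
    return math.log2(iso_after / iso_before)

print(stops_gained(200, 800))  # 2.0 -- the digicam/cameraphone case
print(stops_gained(400, 800))  # 1.0 -- the DSLR case
```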
Possible outcomes of all this: a) All digital imaging devices are in for a quantum jump in performance, and soon. b) Fuji or somebody has already done something similar but didn't protect the idea with patents adequately, and we'll see a roiling of the surface waters as Titanic legal battles go on under the surface. c) Some big company or other that already has good high-ISO-sensitive sensors could decide to soldier on with the Bayer Array and we'll see supremacy battles waged. d) Kodak gets added revenue of the best possible sort: a leetle bitty dose of money for every sensor made anywhere. Could be worse. Could be workin'.
One thing's certain. This is potentially very, very big.
_____________
Mike
Further Reading:
Kodak news release
Mike Tompkins' white paper at Imaging Resource
Kodak's "A Thousand Nerds" blog
Rather superficial PC World article
Associated Press article on CNN
What if somebody started making a camera with a monochrome sensor? I think it should be four stops faster than any current sensor. Wouldn't that be cool?
Posted by: Károly Nikl | Friday, 15 June 2007 at 08:26 AM
[[What if somebody started making a camera with a monochrome sensor? I think it should be four stops faster than any current sensor. Wouldn't that be cool?]]
Kodak already did, in 2002.
http://www.luminous-landscape.com/reviews/cameras/kodak-760m.shtml
Posted by: phule | Friday, 15 June 2007 at 08:48 AM
Very interesting concept, but am I the only one who finds the level of noise very high for ISO 200, and the color fringing...? Not to rain on any parade, but Kodak sensors have not always been top of the game at high ISO...
Nevertheless, very interesting.
Posted by: S Jol. | Friday, 15 June 2007 at 08:51 AM
[[but the kodak sensors have not always been top of the game for high iso...]]
That was my first thought as well. Kodak sensors used in point-and-shoot digital cameras have always had noise problems. Kodak's solution has always been to bring down their giant hammer of a noise-reduction algorithm, smoothing out the noise along with anything resembling fine detail.
However, anything new in the area of sensor design is a Good Thing (tm) and hopefully we'll see a response from Sony's sensor division that is more than just higher pixel counts and recommended pixel binning...
Posted by: phule | Friday, 15 June 2007 at 09:00 AM
Károly Nikl,
I think the monochrome sensor is what they use in those dinky camcorders featuring "nite shot" mode. Notorious for filming "A Nite in Paris"... and other luminous, green-eyed monster people.
Posted by: Kainnon | Friday, 15 June 2007 at 09:00 AM
[[Not to rain on any parade but the kodak sensors have not always been top of the game for high iso...]]
Hence the suggestion that Kodak will make bank *licensing* the idea, not building it.
I'm curious about the decision to keep the GGRB ratio intact -- is the increased green sensitivity still that important?
Posted by: Mike | Friday, 15 June 2007 at 09:01 AM
This sounds like a creative solution to a limit that engineers are running up against. I like that. It will be interesting to see what, if any, are the drawbacks to using this new sensor pattern. (The engineers in the thousand nerds blog mentioned color bleed was a difficult issue to tackle. Until we get image samples, we can't decide if they were successful.)
Oh, and again from the thousand nerds blog (why was I drawn to that link? I can't imagine.), it sounds like this technology will NOT improve the detail in the image. You still have about 50 percent of the photosites being used for luminance information (Bayer sensors use the green channel). The only real advantage is that the luminance channel has no filter over it so it can gather more light. Unless they do something else fancy with it between now and production, it may not offer any advantage to photographers who have no need for high sensitivity.
I'm glad to see that people are willing to consider new approaches to sensor technology. Sigma gets a thumbs up for this, too. How much impact this will have on the photographer remains to be seen, though.
Posted by: Jeromie | Friday, 15 June 2007 at 09:02 AM
Brilliant it is, good for Kodak.
We know from school that the human eye's color resolution is about 1/3 to 1/4 of its luminance resolution. Television makes wide use of that: the broadcast signal contains about a third as much color information as brightness (they had to reduce the amount of transmitted information to squeeze into the width of the radio band).
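The broadcast point can be put in rough numbers. The sketch below assumes 4:2:0-style subsampling (both chroma channels stored at quarter resolution), one common digital-TV scheme, at a standard PAL frame size:

```python
# Bytes per frame (8 bits/sample) with full-resolution color vs.
# 4:2:0 subsampled chroma -- the trick broadcast TV relies on.

def frame_bytes(width, height, chroma_subsampled):
    luma = width * height  # one luma sample per pixel, always
    if chroma_subsampled:
        # Cb and Cr each stored at quarter resolution (half in each axis)
        chroma = 2 * (width // 2) * (height // 2)
    else:
        chroma = 2 * width * height
    return luma + chroma

full = frame_bytes(720, 576, chroma_subsampled=False)
sub  = frame_bytes(720, 576, chroma_subsampled=True)
print(full / sub)  # 2.0 -- half the data, largely invisible to the eye
```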
I just hope that Kodak will do well with this invention. History shows that although they were world leaders in many areas, the cream of the business slipped away through bad marketing planning or simple lack of persistence.
Posted by: Dibutil | Friday, 15 June 2007 at 09:17 AM
The human eye's color resolution may be bad, but that doesn't mean we can give up color resolution in a photograph. Do the experiment -- take a good shot at very low ISO and print it. Then print the same shot after putting 2 pixels of chroma blur on it. I guarantee you: your human eye will be able to tell the difference.
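That experiment can be simulated in miniature. The sketch below uses the standard BT.601 RGB-to-YCbCr conversion, blurs only the chroma of a 1-D row of pixels, and confirms the luma survives untouched:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 conversion (analog form, values in 0..1)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return r, g, b

def blur_chroma(pixels):
    """3-tap box blur on Cb/Cr only; Y passes through untouched."""
    ycc = [rgb_to_ycbcr(*p) for p in pixels]
    out = []
    for i, (y, _, _) in enumerate(ycc):
        lo, hi = max(0, i - 1), min(len(ycc), i + 2)
        cb = sum(p[1] for p in ycc[lo:hi]) / (hi - lo)
        cr = sum(p[2] for p in ycc[lo:hi]) / (hi - lo)
        out.append(ycbcr_to_rgb(y, cb, cr))
    return out

row = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
for before, after in zip(row, blur_chroma(row)):
    # the colors shift, but each pixel's luminance is preserved
    assert abs(rgb_to_ycbcr(*before)[0] - rgb_to_ycbcr(*after)[0]) < 1e-6
```

Whether you can actually see the chroma loss in a print, as dasmb says, is easy to check for yourself.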
Not that this is necessarily a bad thing. Variation in sensors is a good idea. I still harbor the idealistic dream of an age of interchangeable sensor modules, swapped in and out of the camera like lenses to take best advantage of a given situation.
Posted by: dasmb | Friday, 15 June 2007 at 10:16 AM
I'm not convinced at all. They're trying to get a better noise response out of small photosites, but the overall resolution is going to be that of far fewer, larger pixels. How about just using fewer pixels?
More of my thoughts http://doonster.blogspot.com/2007/06/pushing-noise-envelope.html
Posted by: Martin Doonan | Friday, 15 June 2007 at 10:23 AM
Could be interesting for cameras with b&w modes - greater tonal accuracy than simply taking 2.5*R+6*G+1.5*B from an 8-bit file, by far!
Posted by: Tim | Friday, 15 June 2007 at 10:30 AM
In a normal Bayer array you lose half the light - one stop - on the green sites. Removing the color filter thus gives you a maximum of one stop improved sensitivity. Used judiciously it means you can smooth out the noise a bit better for the other sites, for a practical gain of somewhere between 1/2 to 1 stop in the shadows, with my guess falling squarely in the middle. Say, 2/3 to 3/4 stop improvement in noise. Forget any dreams about 2-3 stops.
On the other end, half your photo sites are suddenly one stop more light sensitive, meaning you will saturate them one stop earlier than the color sensing sites. So your highlights - the top stop - will have only half the detail of today.
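Janne's saturation argument in toy numbers; the full-well capacity and the "green filter passes about half the light" figure are assumptions of the model, not measured values:

```python
FULL_WELL = 40_000  # electrons; an assumed, typical-order full-well capacity

def electrons(exposure, filtered):
    """Photoelectrons collected at one site; a green filter passes ~half the light."""
    gathered = exposure * (0.5 if filtered else 1.0)
    return min(gathered, FULL_WELL)

# At this exposure the clear (panchromatic) site has already clipped...
print(electrons(60_000, filtered=False))  # 40000 -- saturated, highlight detail lost
# ...while the green-filtered site still has a stop of headroom.
print(electrons(60_000, filtered=True))   # 30000.0 -- still recording detail
```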
The drawbacks don't matter all that much for small digicams, which have resolutions outstripping their optics already. But they are more serious for big sensors. There's a good reason their press releases are talking about cameraphones, not DSLRs.
Also, Fuji's sensors are basically using the same idea, implemented differently. You haven't seen them take the imaging world by storm, have you?
Posted by: Janne | Friday, 15 June 2007 at 10:52 AM
The theory of Bayer-pattern image sampling allows for a plethora of different patterns, including "white" pixels, so the Kodak pattern by itself should not be patentable: it should fail the inventive-step test, being an incremental advance over existing techniques and obvious to a person skilled in the state of the art. What may be patentable are the post-processing techniques to extract information and reconstruct an image with minimal artifacts.
I could certainly think of other patterns, such as a CWYW pattern which would probably do the same job with less impact on chroma spatial resolution (don't even need a magenta filter, you can infer the green information from the luminance, cyan and yellow channels). This is just a modification of Nikon's CMYM pattern they used on some of their cameras, and Kodak on the DCS620x for example: http://www.lonestardigital.com/DCS620x.htm
In the end, the Kodak pattern is trading luminance for chroma resolution. It's not a big deal in my book. A three-layer pixel (ideally) would not suffer this trade-off, though current implementations have other problems which are specific to the colour separation technique (silicon absorption profile). Fuji's organic layered pixel technology may be able to overcome Foveon's problems in this regard.
Posted by: Daniel | Friday, 15 June 2007 at 11:43 AM
Well, interesting idea. This is essentially a hardware implementation of what amateur astrophotographers using CCDs have been doing for years - full res and unfiltered for luminance, and binned for RGB to reduce the required exposure times. Works extremely well there.
I just wish somebody (Leica? they certainly seem to be willing to build rather low-volume products) would make a modern "B+W only" camera with no filters on the CCD a bit like the old Kodaks - even better sensitivity at low light levels, and real IR into the bargain!
Of course if the M8 is anything to go by everybody would just bitch about needing an IR cut filter all the time, so better make it internal with a little sliding switch or something :)
Enough dreaming.
Posted by: Jonathan Irwin | Friday, 15 June 2007 at 12:19 PM
Thirty years after the introduction of the Bayer pattern, Kodak finally figures out you get better images by imitating the human eye instead of a cheap color TV set ...
We're so enamored with color that we forget less than 1/20th of our eye's receptor cells are dedicated to color reception (cones); the vast majority are dedicated to pure luminosity (rods). (http://en.wikipedia.org/wiki/Rod_cell)
Posted by: cerement | Friday, 15 June 2007 at 12:34 PM
Has anyone looked at the images at Imaging Resource? (link above) On the third set of images (the juggler), I see a lack of detail in both the "current technology" shot and the new-sensor one. Look at the foot of the red pipe and the first button (from the bottom) on the shirt. Granted, it's a prototype, but...
Posted by: Luciano Teghillo | Friday, 15 June 2007 at 12:58 PM
If I'm understanding all this correctly, then the skeptics are right that the Kodak array isn't any better than a Bayer array--but that's not the point. As Mike explains, color sensor resolution and accuracy are products of clever interpretation as well as physical design. Compared to the Bayer pattern, the Kodak array is simply a different method of "cheating"--a matter of priorities and trade-offs. The advantage (as I understand it) is that the new method is a better complement to the state of the art in cheaply producible sensors (resolution-rich, sensitivity-poor). I.e., leverage.
As with the Bayer, it will be up to software to best interpret the data in any given circumstance.
Mike, it is my understanding that the extra G is to mimic the human eye's extra sensitivity to green. I suppose the presence of "L" pixels may allow a slightly different ratio, if that's what you're suggesting, but if so I suspect it's a matter of time and research.
Obviously, I'm no expert on this stuff, but your question leads me to hope that someone who is knowledgeable can comment on whether we can call this an evolutionary step from RGB to LAB for digicam sensors.
Posted by: robert e | Friday, 15 June 2007 at 01:50 PM
I doubt that sensor manufacturers will wait until Kodak provides engineering samples to begin working on this. There's no law against making one of these for yourself and building algorithms to decode it, as long as you don't sell it without a license. If it works out, then it's a licensing fee to Kodak and off to market with the new technology (possibly ahead of Kodak, if they'll allow it). If it doesn't work out, then you're not waiting until next year when Kodak finally sends out the engineering samples to find that out.
Posted by: Dave | Friday, 15 June 2007 at 01:53 PM
"What if somebody started making a camera with a monochrome sensor? I think it should be four stops faster than any current sensor. Wouldn't that be cool?"
http://mega-vision.com/products/Mono/Mono.htm - i'd love to have one of these to play with!
Posted by: Daniel Garcia | Friday, 15 June 2007 at 02:04 PM
I run the Windows "My Pictures" screensaver - it entertains by bringing up photos I'd long forgotten about, and recently I've been prompted to revisit files made on my first DSLR, a Fuji S1. The more I look, the more I'm convinced that (given the same sensor area) fewer pixels mean nicer photos...
Cheers, Robin
Posted by: RobinP | Friday, 15 June 2007 at 03:41 PM
Perhaps I am the only one in North America who finds the news of this new sensor dreadful, but I do. The original Bayer patent was based on a cell of two luminance pixels, one red and one blue pixel--not two green, one red, and one blue. It was a scheme originally developed for single-tube vidicon television cameras. So in essence Kodak is returning to the original Bayer patent, but since they could not get a new patent on old work (and collect royalties), the green element has been retained.
Part monochrome camera, part Bayer camera. The monochrome portion is going to be two stops faster, but not the Bayer portion. So to suppress noise, the monochrome information will be superimposed on the Bayer info.
All I think it's going to do is create even more artifacts in images.
I can't wait until some company finally comes to the same conclusion the broadcast industry did long ago: make a 3-CCD camera and get beyond matrix-based sensors.
I really think this is more about patent rights and Kodak trying to show something "progressive" in a failing company.
Pete
PS In regard to digital monochrome, I shot the DCS 760m for 18 months and it was COOL--base ISO 400. Since then, I have lobbied three of the top camera makers about making a digital monochrome camera or back, including six months of effort with one company in particular. In the end, there seems to be no vision from anyone in management of what digital monochrome can do beyond Bayer. Truly sad.
Posted by: Pete Myers | Friday, 15 June 2007 at 05:17 PM
What's old is new again. Kodak used much the same chroma-subsampling with their PhotoYCC encoding for PhotoCD way back when. The difference is that YCC was 4:1:1 instead of 4:2:1:1 in the new layout, the C1 and C2 components being derived from the primaries with luminance subtracted. Worked pretty well. Chroma-subsampling would be ideal for Photoshop's internal editing space but I'm not holding my breath.
These guys' shirts would be ideal for testing.
Posted by: Stephen Best | Friday, 15 June 2007 at 10:20 PM
What if the firmware of any camera manufacturer was changed to deliver luminance from the green channels exclusively?
Just asking.
Posted by: Rusty Joerin | Saturday, 16 June 2007 at 12:01 AM
Hey Guys,
The answer was there all along: look at their shirts. Surely the shirt designer should get some credit/royalties!!
Ed.
Posted by: Ed O'Mahony | Saturday, 16 June 2007 at 04:28 AM
What seems to be more interesting to me is a technology that will allow the demosaicing pattern to be:
1. Adaptive (automatically)
2. User selectable
3. Different for different parts of the image
4. Ultimately - adjustable during post processing...
But this is surely a step in the right direction.
Posted by: Boris Liberman | Sunday, 17 June 2007 at 03:37 AM
What if, in the array, you throw in some neutral gray pixels that are, say, one stop less sensitive to light than the RGB pixels, as well as the clear pixels that are more sensitive. Let's say every fourth clear pixel is a less sensitive neutral gray one. That could lead to an improvement in dynamic range as well as sensitivity.
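Bruce's scheme can be sketched as a toy two-site model, with all numbers assumed: read the sensitive clear site until it clips, then fall back to the scaled-up reading from the one-stop-slower gray site:

```python
FULL_WELL = 1_000  # assumed full-well capacity, the same for both sites

def combined_reading(light):
    clear = min(light, FULL_WELL)        # sensitive, but clips first
    gray  = min(light * 0.5, FULL_WELL)  # one stop less sensitive
    if clear < FULL_WELL:
        return float(clear)              # clear site still valid: use it
    return gray * 2.0                    # clipped: recover from the gray site

print(combined_reading(800))    # 800.0 -- within the clear site's range
print(combined_reading(1_600))  # 1600.0 -- one extra stop of highlight range
```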
Posted by: Bruce McL | Monday, 18 June 2007 at 12:50 AM
"What if the firmware of any camera manufacturer was changed to deliver luminance from the green channels exclusively?
Just asking."
You can do it yourself. Find some public domain RAW software for your camera and go wild.
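As a starting point, here's a toy version of the green-only luminance idea on an RGGB mosaic: green sites are kept as-is, and red/blue sites are filled by averaging their green neighbors. A deliberately crude demosaic, for illustration only:

```python
def green_luma(raw):
    """Luminance from the green Bayer sites only.
    Assumes an RGGB mosaic, where green sites sit where row+col is odd."""
    h, w = len(raw), len(raw[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if (y + x) % 2 == 1:  # a green site: use its value directly
                out[y][x] = float(raw[y][x])
            else:                 # red/blue site: average the adjacent greens
                nbrs = [raw[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w]
                out[y][x] = sum(nbrs) / len(nbrs)
    return out

# A flat gray patch comes back flat, as it should:
print(green_luma([[7, 7], [7, 7]]))  # [[7.0, 7.0], [7.0, 7.0]]
```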
I imagine that Steven Johnson is not pleased with this - he doesn't even like Bayer filtering.
Posted by: KeithB | Monday, 18 June 2007 at 10:09 AM
I don't see much discussion about how to format one of these RAW files. The last thing we need is yet another (TIFF-based) proprietary RAW format. So how would one represent such a CFA pattern (especially if it is semi-random) in the supposedly unified Adobe DNG RAW format? The answer I assume is to ignore the actual sensor's CFA pattern, which can't be represented directly anyway, and store post-interpolated linear data (IMO the way all DNG's should be written and without a proprietary maker note). Then it would be up to the licensee of this technology to compete based-upon the merits of the photos it takes. And speaking of RAW formats, what ever happened to Kodak's ERI JPEG format?
Posted by: Dennis Walker | Tuesday, 19 June 2007 at 05:49 PM