
Tuesday, 11 September 2007



If you're going to discuss what makes up a pixel, why not discuss explicitly the difference between pixels and photosites? The Foveon isn't the only sensor where the notion is illuminating. It confuses me why people routinely refer to sensor photosites as equivalent to final image pixels.

Oh, and everyone knows you can get dozens more people into the VW, as long as they wear rubber noses. Sheesh.

Pixels vs. photosites? Not sure why that differentiation should fit into the pixels article as, unlike pixel count and bit depth, it has no quantifiable value.

Sure, it has subjective importance such as noise level, but that is not a standard, measurable value as sensor noise varies between manufacturers and even between identical chips.

What about 3-CCD video cameras that may (or may not) have a "pixel shift"? Do the ones with pixel shift really record more information, and/or do they have three times more megapixels? Seems quite tricky to me...

Dear Tonu,

I don't do video. Couldn't tell you if the 'pixel-shift' ones collect more spatial information or not.

pax / Ctein

Dear X,

My columns are limited to 500 words, give or take 20%. If you want additional topics covered in such a short piece, tell me what I should have cut out, because the article ain't gonna get longer!

pax / Ctein

Pixel will always remind me of a certain Jack Russell Terrier.

That's because that's the name our technical librarian gave to her little dog at a software company I worked at back in 1990, when the term was still somewhat unusual.

It might be worth noting (or perhaps not -- this is a little geeky) that a pixel is best thought of as a concept; a point having no real physical dimensions.

This fellow from Microsoft has a real bee in his bonnet about this and wrote about it a while back: ftp://ftp.alvyray.com/Acrobat/6_Pixel.pdf

Yes, all pixels are not equal; some are better than others.

In almost all lenses, the center pixels are of higher quality than those at the edges. This is especially true with extremely wide-angle lenses, fish eyes being the most circumspect. Resolution and light convergence are typically optimized toward the lens's center, and usually at an average f-stop of around f/8. And this also applied to silver halide particles, pixels be damned.

But I quickly tire of these discussions and try to avoid them for the most part, as it really does not matter. A great picture slightly out of focus, or off in exposure, or off in many other ways, is still a great picture. A so-so picture is always just that.

Composition always beats technique, always.


Robert Harshman

Dear Mike S,

Oh, that's a GREAT paper-- thanks for pointing to it.

Alvy's absolutely right. Pixels aren't actually little tiles, they're point samples located at the vertices of a regular lattice that 'grids off' the image area. From an image computation and analysis point of view, this is really important. But I think saying that to the readers of this column would cause most of their eyes to glaze over.

On the other hand, if you're the kind of person who's ever wondered how it's even possible to run a filter in Photoshop with a radius of less than one pixel, there's your answer.

pax / Ctein

Mr. Harshman,

While "circumspect" is a nice pun on (or pithy definition of) the fish eye lens look, I assume you meant something more like "suspect" (as in "fishy").

As to the rest: Agreed!

With respect,
robert e

Mike S -- I echo Ctein's thanks. That paper caused me to slap my forehead and say "duh!" (A metaphorical slap, anyway.) Thinking of pixels as SAMPLES makes it all a lot clearer. I've done some 1-D digital signal processing; funny I never made the logical leap to the 2-D image case. That paper caused the light to come on in the dimensionless point that is my brain.

Photosites have a quantifiable value -- color resolution. A standard Bayer sensor has one photosite per pixel, with twice as many of them capturing green as capture red or blue. A Foveon sensor has three photosites per pixel, one of each color. Each photosite captures information for exactly one color. And so the Foveon technically captures twice as many color samples for green subjects and four times as many for red or blue subjects.
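The photosite arithmetic above can be sketched in a few lines of Python. This is a toy illustration: the `photosites_per_color` helper and the round 10 MP count are invented for the example, and it assumes a standard RGGB Bayer mosaic.

```python
def photosites_per_color(pixels, design):
    """Count the photosites capturing each color channel for a sensor
    with the given pixel count (hypothetical helper, for illustration)."""
    if design == "bayer":
        # One photosite per pixel: half green, a quarter each red and blue.
        return {"r": pixels // 4, "g": pixels // 2, "b": pixels // 4}
    if design == "foveon":
        # Three stacked photosites per pixel, one per color.
        return {"r": pixels, "g": pixels, "b": pixels}
    raise ValueError(design)

mp = 10_000_000
bayer = photosites_per_color(mp, "bayer")
foveon = photosites_per_color(mp, "foveon")
print(foveon["g"] / bayer["g"])  # 2.0 -- twice the green samples
print(foveon["r"] / bayer["r"])  # 4.0 -- four times the red (or blue) samples
```

The ratios fall straight out of the mosaic layout: Bayer devotes half its sites to green and a quarter each to red and blue, while Foveon samples all three colors at every pixel location.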

Due to excellent interpolation routines, this isn't that important for most full color photographs. However, there are times when you're photographing a subject that is almost completely red or blue.


This shot, for example, was taken with a 10 MP Bayer sensor and was murder to process because there was twice as much detail in the white illuminated part of the musician's hair as there was in the red illuminated part. I eventually had to put two pixels worth of chroma blur over the entire image, wiping out the detail but making everything more uniform.

Of course, this shot was also taken at ISO 1600 and is relatively low noise -- a Foveon shot of the same scene would be unusable.

Thanks for the informative post.

So Ctein, I really appreciate your comments and I have read you for years. But for an older photographer such as myself, is there any way to relate the pixel explanation, by analogy, to the old grain-versus-format trade-offs of the 35mm-versus-8x10 film era?
I hope the answer to my question isn't simply "no".

Dear Paul,

Well, the simple answer is "no." The complex answer is "yes." You can relate the two but it's complicated and multifaceted. I've written whole articles on the subject, and I ain't talking 500-word ones.

Grains (or dye clouds) in film are more akin to ink droplets in an inkjet print than they are to pixels.

Film grain is visual noise: the finer the grain, the lower the noise level. Pixels aren't noise (although they can convey noise). You can have a camera with a very low pixel count and very low noise that produces more 'grainless' results than an 8x10 view camera.

pax / Ctein



