All three figures in this illustration have exactly the same number of
pixels (about 2,000). From left to right, they're composed of 1-bit
monochrome, 8-bit monochrome, and 24-bit RGB pixels.
More than a little confusion exists about just what a pixel is. Pixels are so basic to digital imagery that it behooves us to understand them better.
A pixel (short for "picture element") is not a measure of image information. It's actually a dimensional measure: a pixel is the smallest-sized "tile" in a picture. All the pixels laid side by side, row by row, make up the two-dimensional image we look at.
Why does this simple concept cause problems? Because photographs don't just have width and height; they have (bit) depth. The amount of information a pixel contains simply isn't part of the definition of a pixel. Pixels can be single-bit B&W (e.g., faxes). They can be grayscale, RGB, or CMYK, at standard depths from 8 to 64 bits. Less commonly, they can be hyperspectral, with dozens of individual wavelengths represented. In the most extreme case, a pixel can be full-spectral, with a complete photon energy distribution recorded (mucho bits). But whether 1 or 100K bits deep, a pixel is still a pixel; that's inherent in the fundamental definition.
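To make that concrete, here's a minimal sketch (my own illustration, not part of the column; the 50×40 shape is an assumption chosen to give the roughly 2,000 pixels of the figures above). The pixel count fixes the grid; the bit depth fixes how much information each tile holds:

```python
# Storage for the same pixel count at different bit depths.
def image_bits(width, height, bits_per_pixel):
    """Total bits for a width x height image: pixel count times depth."""
    return width * height * bits_per_pixel

# 50 x 40 is an assumed shape yielding ~2,000 pixels.
for label, depth in [("1-bit monochrome", 1),
                     ("8-bit monochrome", 8),
                     ("24-bit RGB", 24)]:
    bits = image_bits(50, 40, depth)
    print(f"{label:>17}: {bits:6d} bits ({bits // 8} bytes)")
```

Same number of tiles every time; only the information per tile changes.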
Instrumentality matters. It's correct for camera manufacturers to talk about the sensors in their cameras having X megapixels, even if each pixel only records one component of RGB. It's also correct for them to talk about synthesizing a full-color RGB image of X megapixels from that data. All the pixel count tells you is how many little tiles the image is subdivided into; it doesn't tell you anything about the content of those tiles.
Popular and reasonable practicalities also confuse. Thermal transfer printers produce real, discrete pixels—four-color (32-bit) ones. Inkjet printers don't. A computer display has discrete red, green, and blue pixels. The convention of giving the number of tri-color pixels is technically wrong, although it's useful. Unfortunately, camera manufacturers have chosen to use the technically correct nomenclature for their LCD displays and count each pixel individually. What a headache!
Sometimes confusion arises unreasonably. Foveon, for example, strongly implies they have three pixels for each real physical pixel, because their pixels capture multiple wavelengths. False! It's misusing the term, and it confuses people. It would be like claiming a Hummer gets better gas mileage than a VW because it can carry more passengers. Yes, passenger-miles per gallon is an important measure of fuel efficiency...but that's not what MPG means, and Hummer isn't foolish enough to claim otherwise.
Keep your terminology straight and you'll be less confused. Object when marketdroids play fast and loose with established vocabulary. They aren't doing it for your benefit.
_____________________
Ctein
Featured Comment by Jim Kofron: "A history of the 'pixel' ['Pixels and Me,' a talk by Richard Lyon of Foveon for the Computer History Museum]."
Mike replies: Dear lord, Jim, that's more than I needed to know about 'pixel' by a factorial, but it's fascinating, if you have an hour and 22 minutes to spare...(those without quite that much time should start at about 0:50:00; he talks about the Foveon pixel count at about 1:05:00).
if you're going to discuss what makes up a pixel, why not discuss explicitly the difference between pixels and photosites? the foveon isn't the only sensor where the notion is illuminating. it confuses me why people routinely refer to sensor photosites as equivalent to final image pixels.
oh, and everyone knows you can get dozens more people into the vw, as long as they wear rubber noses. sheesh.
Posted by: xtoph | Tuesday, 11 September 2007 at 04:17 PM
Pixels vs. photosites? Not sure why that differentiation should fit into the pixels article as, unlike pixel count and bit depth, it has no quantifiable value.
Sure, it has subjective importance such as noise level, but that is not a standard, measurable value as sensor noise varies between manufacturers and even between identical chips.
Posted by: chuck kimmerle | Tuesday, 11 September 2007 at 07:44 PM
What about 3-CCD video cameras that may (or may not) have a "pixel shift"? Do the ones with pixel shift record really more information and/or do they have three times more megapixels? Seems quite tricky to me...
Posted by: Tõnu Tamm | Tuesday, 11 September 2007 at 08:35 PM
Dear Tonu,
I don't do video. Couldn't tell you if the 'pixel-shift' ones collect more spatial information or not.
pax / Ctein
Posted by: Ctein | Tuesday, 11 September 2007 at 08:55 PM
Dear X,
My columns are limited to 500 words, give or take 20%. If you want additional topics covered in such a short piece, tell me what I should have cut out, because the article ain't gonna get longer!
pax / Ctein
Posted by: Ctein | Tuesday, 11 September 2007 at 08:57 PM
Pixel will always remind me of a certain Jack Russel Terrier.
That's because that's the name our technical librarian gave to her little dog at a software company I worked at back in 1990, when the term was still somewhat unusual.
It might be worth noting (or perhaps not -- this is a little geeky) that a pixel is best thought of as a concept: a point having no real physical dimensions.
This fellow from Microsoft has a real bee in his bonnet about this and wrote about it a while back: ftp://ftp.alvyray.com/Acrobat/6_Pixel.pdf
Posted by: Mike Sisk | Tuesday, 11 September 2007 at 10:38 PM
Yes, all pixels are not equal; some are better than others.
In almost all lenses, the center pixels are of higher quality than those at the edges. This is especially true with extremely wide-angle lenses, fish eyes being the most circumspect. Resolution and light convergence are typically optimized toward the lens's center, usually at an average f-stop of 8 or so. And this also applies to silver halide particles, pixels be damned.
But I quickly tire of these discussions and try to avoid them for the most part, as it really does not matter. A great picture slightly out of focus, or off in exposure, or off in many other ways is still a great picture. A so-so picture is always that.
Composition always beats technique, always.
Regards,
Robert Harshman
Posted by: Robert Harshman | Tuesday, 11 September 2007 at 11:04 PM
Dear Mike S,
Oh, that's a GREAT paper-- thanks for pointing to it.
Alvy's absolutely right. Pixels aren't actually little tiles; they're point samples located at the vertices of a regular lattice that 'grids off' the image area. From an image computation and analysis point of view, this is really important. But I think saying that to the readers of this column would cause most of their eyes to glaze over.
On the other hand, if you're the kind of person who's ever wondered how it's even possible to run a filter in Photoshop with a radius of less than one pixel, there's your answer.
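Here's a quick sketch of why (Python; the radius-2 support is an arbitrary choice of mine). Sample a Gaussian with a sigma of half a pixel at the integer lattice positions: the neighboring samples still get nonzero weight, so a "half pixel" blur genuinely changes the image.

```python
import math

def gaussian_kernel(sigma, radius=2):
    """Sample a Gaussian at integer lattice offsets and normalize.
    Because pixels are point samples, a sigma below one pixel still
    produces nonzero weights at the neighboring samples."""
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur(samples, sigma):
    """1-D convolution, clamping indices at the edges."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    n = len(samples)
    return [sum(k[j + r] * samples[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1))
            for i in range(n)]

edge = [0.0] * 5 + [1.0] * 5
softened = blur(edge, 0.5)  # sub-pixel sigma, yet the edge softens
```

On the tile model that result would be paradoxical; on the point-sample model it's obvious.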
pax / Ctein
Posted by: Ctein | Wednesday, 12 September 2007 at 02:01 AM
Mr. Harshman,
While "circumspect" is a nice pun on (or pithy definition of) the fish eye lens look, I assume you meant something more like "suspect" (as in "fishy").
As to the rest: Agreed!
With respect,
robert e
Posted by: robert e | Wednesday, 12 September 2007 at 02:03 AM
Mike S -- I echo Ctein's thanks. That paper caused me to slap my forehead and say "duh!" (A metaphorical slap, anyway.) Thinking of pixels as SAMPLES makes it all a lot clearer. I've done some 1-D digital signal processing; funny I never made the logical leap to the 2-D image case. That paper caused the light to come on in the dimensionless point that is my brain.
Posted by: Jon Bloom | Wednesday, 12 September 2007 at 05:42 AM
Photosites have a quantifiable value -- color resolution. A standard Bayer sensor has one photosite per pixel, with twice as many of them capturing green as capture red or blue. A Foveon sensor has three photosites per pixel, one of each color. Each photosite captures information for exactly one color. And so the Foveon technically has twice the color resolution per pixel for green subjects and four times the resolution for red or blue subjects.
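Counting direct color samples makes that concrete (a rough sketch of mine, assuming an RGGB mosaic for the Bayer side and a Foveon-style stack that captures all three colors at every pixel location):

```python
# Direct color samples per channel for two assumed sensor layouts.
def color_samples(pixels, layout):
    if layout == "bayer":    # RGGB mosaic: 1/4 R, 1/2 G, 1/4 B
        return {"R": pixels // 4, "G": pixels // 2, "B": pixels // 4}
    if layout == "stacked":  # three photosites at every pixel
        return {"R": pixels, "G": pixels, "B": pixels}
    raise ValueError(layout)

MP = 10_000_000  # a hypothetical 10 MP sensor
bayer = color_samples(MP, "bayer")
stacked = color_samples(MP, "stacked")
# stacked gets 2x the direct green samples and 4x the red/blue ones
```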
Due to excellent interpolation routines, this isn't that important for most full color photographs. However, there are times when you're photographing a subject that is almost completely red or blue.
http://megapixelated.com/2007/08-24%20Rustic%20Overtones/slides/IMG_0712.html
This shot, for example, was taken with a 10 MP Bayer sensor and was murder to process because there was twice as much detail in the white-illuminated part of the musician's hair as there was in the red-illuminated part. I eventually had to put two pixels' worth of chroma blur over the entire image, wiping out the detail but making everything more uniform.
Of course, this shot was also taken at ISO 1600 and is relatively low noise -- a Foveon shot of the same scene would be unusable.
Posted by: dasmb | Wednesday, 12 September 2007 at 10:39 AM
Thanks for the informative post.
Posted by: todd | Wednesday, 12 September 2007 at 03:50 PM
So Ctein, I really appreciate your comments and I have read you for years. But for an older photographer such as myself, is there any way to relate the pixel explanation, by analogy, to the old finer-grain thinking of the 35mm vs. 8x10 film era?
I hope the answer to my question isn't simply "no".
Posted by: Paul Bailey | Thursday, 13 September 2007 at 07:15 AM
Dear Paul,
Well, the simple answer is "no." The complex answer is "yes." You can relate the two but it's complicated and multifaceted. I've written whole articles on the subject, and I ain't talking 500-word ones.
Grains (or dye clouds) in film are more akin to ink droplets in an inkjet print than they are to pixels.
Film grain is visual noise-- the finer the grain, the lower the noise level. Pixels aren't noise (although they can convey noise). You can have a very low-pixel-count camera with very low noise that produces more 'grainless' results than an 8x10 view camera.
pax / Ctein
Posted by: Ctein | Thursday, 13 September 2007 at 08:37 PM