
Saturday, 01 November 2008


Comments

You're wrong on this. If you take a 20MP picture and downsize it to 10MP, the downsized image will likely be a lot better than one taken with a 10MP camera. The reason is that you had a lot more information going in, so you'll get a lot more sharpness in the image. When you upsize you don't have any more information than what you started with, so the algorithms can't do a good job. Just try it and you'll see. You also have to look at the size of the sensor as it compares to the size of the image. The P&S ones have a really small sensor, so they produce a lot of noise. You can't just compare pixel by pixel. I like the dpreview charts that show the pixel density:
http://www.dpreview.com/reviews/specs/Canon/
You'll see why some cameras such as the original Canon 5D had such great image quality, and why some cameras like the new Canon 50D aren't as good as the old Canon 40D, which actually has fewer pixels.

I would change your wording slightly, Mike. I think Sony is counting on the Siren Song of a big sensor to cause people to buy the A900. I would wager that a very small percentage of your readers have a printer wider than 13 inches.

"But I keep trying to wrap my mind around why anyone who never prints their pictures would ever need a camera of more than 6 MP—really, 4 MP would be more than enough—and I can't come up with any rational reason—unless, of course, they just love to compare the resolution of their camera with other models in online forums!"

-- That would be because manufacturers won't let me buy a 6MP DSLR. Other than the D40, there is possibly no other DSLR with fewer than 10MP, and even that is becoming a rarity. Seeing how a 6MP APS-C DSLR has almost the same pixel density as a D3 / D700, I wonder why manufacturers don't make a 6MP DSLR with the same low-light capabilities as a D3.

I wonder if behind JAlan's question is something about tonal subtlety, dynamic range and other factors beside pure resolution? No idea what the answer is though.

Hugh

Well, here are some reasons why people who never print might want to take very high resolution images:
1) Technology moves on. Today our computer displays may have 2 Mpix; heaven knows what they will offer in a couple of years. So it might be prudent to record high-resolution files for the future: ultra-detailed (virtual?) displays or 3D glasses.
2) Some people like to magnify their pictures when viewing them on a display. For example, when I show my parents family photos on a 22-inch screen they quite often ask me to zoom in on someone's face, etc.
3) Downsizing pictures to fit them on screen may hide some image imperfections like noise (although lower resolution sensors with larger pixels may generate a cleaner image to begin with).

Don't forget about the difference between Bayer pixels and non-interpolated pixels. If you had a 10-12 MP Foveon-style sensor, you'd probably be getting as much real resolution as you do out of the Sony.

It really is all about the print size; otherwise there's little justification for a high-megapixel D-SLR over one with a lower pixel count. Those big files impose significant processing and storage overhead. You need more cards, more disk space, more ram, and you'll spend a lot more time watching the little hourglass icon in Photoshop. Start stitching those big files and it's easy to get over a gigabyte for one image. This only makes sense if you plan on exploiting all that resolution by printing really big.

Mike's right on this score; if you downsize the file from a (say) ~20 mp camera to ~10 mp, you're throwing out data to make it smaller, and giving up any edge in the process. It's not exactly linear; not all 10 mp files are equal. Pixel pitch, bit depth, color rendition, strength of the anti-aliasing filter, demosaicing algorithms, etc. all have an impact. Ten megapixels from a big D-SLR sensor with big pixels, 14 bits per channel and good noise characteristics, shot through a great prime lens, will beat the tar out of 10 megapixels from a minuscule point & shoot sensor at 12 bits per channel through a cheap built-in zoom lens. But there's nothing otherwise magical about higher-resolution files per se.

Love your stuff and, yes, "seat-o'-the-pants" rules are about pants, not math. But, in this case danger lurks.

For example, my older Pentax DL makes images that are about 2000x3000 pixels. At 300 ppi (one standard for a good print) it should make good unresampled prints 10" wide from its 6 Mpixel image. My brand new Pentax K200D's image size is 2592x3872, yielding a 300 ppi image about 13" wide from its 10 Mpixel sensor.

This shortfall of your "seat-o'-the-pants" rule results from the fact that image-pixel count scales with the AREA of the image and the area of the image scales as the SQUARE of the LENGTH of one side.
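A quick sanity check of that arithmetic in Python, using the pixel counts quoted in this comment (the cameras and figures are the commenter's, not mine):

```python
import math

def print_width_in(long_edge_px, ppi=300.0):
    """Widest unresampled print, in inches, at a given pixels-per-inch."""
    return long_edge_px / ppi

print(print_width_in(3000))  # 6 Mpixel Pentax DL: 10.0 inches
print(print_width_in(3872))  # 10 Mpixel Pentax K200D: ~12.9 inches

# Pixel count scales with print AREA, so printable width grows only as
# the square root of the megapixel ratio:
print(math.sqrt(10 / 6))  # ~1.29: 67% more pixels buys only ~29% more width
```
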

If it's any comfort, my pants don't fit so well either.

The siren is singing about the 5d mark II to me. Somebody tie me to...to...my desk at least.

"I do always wonder about photographers who have to have the latest, biggest and mostest cameras, but who never print. That is, they store, view, and share their pictures digitally. I have a 20" monitor that displays 1680 x 1050 pixels, or about one and three-fourths megapixels. You might want to crop sometimes, so having a few extra megapixels might not hurt."

As pointed out, ideally you want to have somewhat higher resolution (about 1.5x) than your final use to enhance sharpness (it basically allows you to be selective in only sharpening "real" edges). But yes, it doesn't make sense. I was perfectly happy with the 8mp resolution from the Canon 350D I started out with, and the 10mp from my K10D has never, not once, not a single time been a limitation for me since I started using it. And I have on one occasion printed big - B1 size - with files from it with perfectly good results (they are in fact hanging as posters at work right now, and unless you go far enough in to closely scrutinize individual edges you can't see any aliasing or pixellation).

One of my pet peeves right now is in fact that I have no means of choosing a modern lower-resolution camera. I would much, much prefer the better low-light performance and better dynamic range that a modern 8mp sensor would be able to give over a 15mp one.

I've started using film more seriously now, medium format film, and the reason is not resolution. Film and digital all give me all the resolution I would ever need. But MF film gives me wonderful dynamic range - great tones, very forgiving for exposure errors - that digital just doesn't. You want to make me really happy? Release a digital Pentax 645 at 16mp - optimized for the low noise, high sensitivity and dynamic range such large sensor sites would allow. Never going to happen, I know.

Don't forget that printing technology is marching on (look at the Epson Stylus Pro 7900/9900), so with more and better-quality pixels, you'll end up with higher-quality prints at the same size as you're printing today. Not so much more resolution as greater smoothness and a more film-like look.

The output files from a client's Canon 1Ds Mark III were the first from a DSLR I've seen that got me excited. I made some really nice 10.4"x15.5" prints from these. I can't wait until these sensors appear in a full-frame rangefinder with quality glass.

Pat Cooney,
The points you make are very fair. However as with most of my analysis and things like "seat-o'-the-pants" rules, this one takes into account how photographs are actually seen and used.

I don't know about you, but when I look at different sized prints, I look at them from different distances. I have a 33"-wide print on my wall, and I find that the optimum or "natural" viewing distance for me--that is, where I naturally stand when I want to view it best--is about three and a half feet away, and when I "peer" at the details in it I tend to lean in and put my eyes about 18" from the print surface. Such a print "needs" less detail to hold up than, say, a 6"-wide print, which I will normally look at from eight or ten inches away and "peer at" from a distance of perhaps six inches, and my aging eyesight limits my desire to see it even closer than that. The distance at which I tend to "peer" at the details of a very large print seems abnormally distant for viewing a small print.

I'm speaking mainly as a print maker, and more than that as a print *viewer*, but my sense is simply that you need more pixels for fewer inches as the print size gets smaller, and vice versa, and my admittedly rough "rule" takes that somewhat into account, although I'm sure not really very rigorously in any mathematical sense. Plus, I admit that my laboratory is myself, so to speak, and my own visual tastes; so my "rules" aren't for anybody else unless they want them to be.

Mike J.

Going by pixel density we see that the new Canon 5D Mk II comes round to the same as my old 20D. So I just need to walk a little closer to get a cropped view of the same scene with the same pixels.

Regarding the comments about not being able to buy 6 MPix cameras - I have never understood at all why people only think about buying cameras new. Just about all the cameras I've ever bought in my life have been second hand. They have always worked just fine - cameras don't stop working the moment an owner decides to sell them. Granted you can't today buy a 6 MPix with entirely the same features as the newest models. Is that important? Depends. I know for me it's a low priority. I shoot 99% of my photos using almost none of the funky selling features that the 20D once boasted as latest and greatest.

I agree very much with your words about the megapixels. However they don't at all stop me from wanting more square centimeters. I very much like the images rendered by my lenses, and I always lust after sensors that capture a larger portion of them. I can't help but think that part of the reason that sensors continue to pack ever more pixels into a smaller area is that that's the traditional method for improving most all the other kinds of integrated circuits, and the people who make sensors bring that paradigm (and its accompanying economic arguments) with them from their other work.

It's funny how people are always coming up with arguments supporting their need for newer and bigger cameras. Very often this has nothing to do with the actual photographic/image quality.
It's a marketing-driven phenomenon to which xx,xxx% of amateur photographers will gladly subscribe. Since the introduction of mega-megapixel cameras, full-frame sensors and $1,000 L lenses I do see the quantity of images improving but definitely not the quality. I still think it's the atmosphere of an image that counts and not the level of technical perfection thereof... Perfection in my eyes is still boring.

Apart from the ‘how many megapixels do I need’ discussion: a multi-image stitch will often be better than a single shot, for the simple reason that there is less of a lens limitation. If the center part of a lens is better than the edges (which is almost always the case), using this center part several times for the same picture will give a better result. And if you go beyond what is currently available in sensors, this becomes more and more true. It is not impossible to make a 100Mp image in a single go, but it requires a very good lens. Stitching that number of pixels out of, say, 12 shots does not ask much of a lens.
I do also agree that you only need so much resolution for very large prints. And even then: I often print 24”x30” from a 12Mp capture, and I doubt most viewers would notice if I had used 24Mp. I would see it, and like it. But I am not so sure it adds that much; it will depend on the subject. What I mean to say is that I as a maker can really enjoy the finest detail in a print. But if one takes a little more distance (in more than one way), it is only one of many factors, of which most have very little to do with the camera used.

If a given imaging sensor, e.g. Nikon DX format cameras, retains the same overall dimensions, does increasing the number of megapixels mean that the individual photodetectors must get smaller?

What effect does this have on image quality?

Hi Mike,
Thank you for the very thorough answer.
An observation I've made, and I'm sure many on this forum have as well, is that certain large-MP cameras (like a Canon XSi, D700, others I'm sure) create great images even when rezzed down to a fairly small file, shall we say 800x600 pixels at a resolution of 72, viewed on the screen, compared to, let's say, a 6MP camera. All things being equal, if the pixel density is the same from any camera, then a rezzed-down image from any of them, regardless of the native camera sensor size, should be the same.
However this does not seem to be the case in many cases.
We've all seen beautiful small images on the web from very-large-sensored cameras that you don't seem to see from a 6MP camera, or am I missing something? As an example, your shot of the field with the people and dogs, even though a small rezzed-down image, seems to convey what I'm talking about. Would a Minolta 7D do the same? (I use that camera as an example because I know you have used it extensively.)
Could there be something else going on?
Is it possible that when you start with many more pixels (more data and information), even when rezzed down to an equivalent small file from another camera, you now have more detailed data in the image, even though the final small file has the same pixel count from any size sensor?
Or is it something totally different, like the sensor's sophistication in resolving, color management, dynamic range, etc.? Meaning: if two sensors, 6MP vs. 24MP, have the same sensor characteristics, would they, rezzed down to that same small file, look pretty much the same...?
It's definitely not about the sensor size, as proven to me using my older, redundant and obsolete Olympus E-1, a 5MP camera, with a lens adapter to mount Contax manual-focus lenses. Using the 24mm f/2.8 Contax lens, that camera/lens combination creates beautifully colored, high-resolution images with wonderful tonality and dynamic range that have been truly unique among the (considerable number of) DSLRs I've used. Those images just seem to blow most others away, albeit in moderate-sized pictures.
My sense is that ultimately it's as much about the sensor's characteristics with any given lens as it is about the sensor's MP size, when images are viewed or printed at reasonably moderate sizes.
Thanks again Mike for the very thorough answer.
Ciao,
Alan

As a couple of previous comments have mentioned, downsampling will make a much crisper, more detailed image from a Bayer camera.

In truth, a 24MP camera downsized to 6MP will make a really cracking file with much better per-pixel detail than a 6MP camera's native image.

This is mostly because Bayer cameras interpolate (guess) color resolution across 2 to 4 pixels.

Personally, I have no trouble at all telling 6MP, 10MP, and 24MP output at print sizes as small as 11x14. At 13x17 the differences are profound and obvious, at least for "nose in the print" types like myself.
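The noise side of the downsizing argument is easy to demonstrate with a toy model in Python. This is only a sketch (simple 2x2 block averaging on synthetic noise, not real demosaicing or resampling), but it shows why a downsized file looks cleaner per pixel:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy model: a flat mid-gray scene plus independent per-pixel sensor noise.
big = 0.5 + rng.normal(0.0, 0.05, size=(1000, 1000))

# Downsample 2x2 -> 1 by block averaging (a crude stand-in for resampling).
small = big.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(big.std())    # ~0.05
print(small.std())  # ~0.025: averaging four pixels halves the noise
```

Averaging four independent samples cuts the noise standard deviation in half, which is part of why per-pixel quality improves when a big file is rezzed down.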

I'd say it's definitely a good thing to develop larger sensors. As far as I understand it, this is essential for being able to have decent out-of-focus rendering/shallow depth of field? Which, for me, is an important creative option. That's why medium/large format camera pictures look so much different.

Choice, that's what is missing.

I agree with all of Mike's points and I'd like to be able to choose a camera with latest features and less megapixels. I bought a second DSLR for features not resolution, which is more of an annoyance. There is so much great technology being wrapped around sensors to wring more from the pixels - why can't that be put to work on fewer of them?

With film I can pick fine or coarse grain and a bunch of types. I quite happily use ISO400 in daylight because of the image qualities it gives. I print small.

Some photographic subjects don't lend themselves well to closeups (birds and insects come to mind) and make it necessary to severely crop the original image. Needless to say, this lack of detail often makes for poor quality prints and sometimes even poor screen or web images.

If an upgrade to 23 or 50 megapixels from my current 10 were affordable, I would have one on order right now.

From the viewpoint of someone who likes to (try to) take pictures of wretchedly crepuscular songbirds, sensor pixels are cheaper than long glass.

4MP out of the middle of the frame is a perfectly usable web-resolution image; the distance at which I can usefully get that is much further with a 14MP camera compared to an 8MP camera. (Which are my two actual data points, leaving aside the LX2 as unsuited for the purpose.)

People who shoot for stock - photolibraries, in other words - usually need a high pixel count. Most photolibraries these days want files between 48 and 54MB and recommend shooting on at least a 12-Mpixel camera.

There are tens of thousands of stock shooters, worldwide.

Dear Paul,

IF, and I emphasize "IF," everything else were equal, smaller sensor pixels mean higher noise (both statistical noise and amplification noise, two different things) and less uniformity from pixel to pixel because of manufacturing tolerances.

That's the theory. The practice is entirely different. Sensors change and improve from generation to generation; different manufacturers of sensors use different designs that have different characteristics; different camera manufacturers implement the electronics and signal processing algorithms that use those sensors differently, et cetera. It's like trying to decide if apples are better than oranges are better than pears based on the size of the fruit.

I have seen a very few cases where these characteristics of image quality actually do scale with the sensor size. In most cases they don't. A recent example: I compared a Fuji S100fs to a Nikon D200. The pixels in the Fuji are six to seven times smaller (by area) than the ones in the Nikon. But low-light performance was not 2.5 stops worse, it was only one stop worse. Judged solely by image noise and uniformity, the Fuji pixels were acting as if they were three times the area they actually were, relative to Nikon's.

Pixel size is simply not a reliable guide to image quality. Actually, the cost of the camera is a much better rule of thumb (but photo geeks don't want to hear that).

What you really need to do is compare test results for different cameras if you want accurate information about particular cameras.


~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================


This post triggered an idea with me which, if it works, I think could be interesting. Now that we have all these sensor-shifting cameras, what would happen if a manufacturer created a camera where each exposure actually takes 4 exposures, but shifts the CCD one pixel left, right, up and down? After that, the 4 exposures would be combined to create a picture with 4 times the sensor resolution. We would only need a 6Mp sensor (large photosites) to create a 24Mp file. The exposures can be done partly electronically: opening the shutter, taking the 1st exposure, shutting down the sensor quickly, writing the first exposure to memory, shifting the sensor, powering the sensor back on for the 2nd exposure, etc., and only after the last exposure does the physical shutter close. Of course we would need a very fast data pipeline.
Am I right about this or does it only create larger virtual photosites, meaning more dynamics?

Cheers,
Frank
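Frank's combining step can be sketched in a few lines of Python. This is only a toy illustration of the interleaving idea (assuming hypothetical half-photosite offsets; real sensor-shift hardware and readout are far more involved):

```python
import numpy as np

def combine_shifted(exposures):
    """Interleave four (H, W) exposures, taken at hypothetical half-pixel
    offsets (0,0), (0,1), (1,0), (1,1), into one (2H, 2W) image."""
    h, w = exposures[0].shape
    out = np.empty((2 * h, 2 * w), dtype=exposures[0].dtype)
    out[0::2, 0::2] = exposures[0]  # no shift
    out[0::2, 1::2] = exposures[1]  # shifted right
    out[1::2, 0::2] = exposures[2]  # shifted down
    out[1::2, 1::2] = exposures[3]  # shifted down and right
    return out

# Four tiny dummy "exposures" stand in for the real frames.
frames = [np.full((2, 3), float(i)) for i in range(4)]
print(combine_shifted(frames).shape)  # (4, 6): four times the pixel count
```
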

Mike,

been reading your blog for ages but never commented, so here goes, my first time.

One aspect that people tend to forget with multi-image panoramas and 'smaller'-pixel-count sensor cameras (say 10-15 Mpixel SLR bodies) is that the lens on the camera is not being pushed to its resolving limits in order to make this huge image, which would be the case if it were attached to (say) a 24 or 50Mpixel sensor and a super-wide-angle lens in order to achieve the same result in a single, cropped, exposure. Your beach pano of 14,633 pixels wide was taken with a lens that (most likely) comfortably handles the 14.6 Mpixel resolution of the Pentax K20D that you shot with. Phase One have recently released a 60 Mpixel back (the P65+), and I wonder what the resolution would be from any lens, no matter the cost, wide enough to capture the same scene in one exposure. In this case, you would only have an image that was 8984 pixels wide.

Multi-shot panos, even with the negatives of trying to cope with moving objects and variations in light levels, can make huge, high-resolution images with reasonably economical equipment that easily challenge medium-format, large-sensor wide-angle single-shot images cropped to pano dimensions.

Of course, if you want to make panos with mf backs, then that's a different story.

Thanks again for your great web site, Mike.
cheers

For my particular application of wedding photography, I'm now joining the chorus of those saying 'enough megapixels,' as 10 are plenty for me.

It's the new sRAW modes on the latest Canon DSLRs (5DII in particular) that intrigue me now. I would love to shoot most of the day at the 10 Mp size, and then maybe jump up to 20 for the formal session.

The anecdotal evidence that I've seen suggests that the sRAWs have much higher sharpness per pixel than the full resolution RAWs, and an on-screen comparison seems to bear this out. They appear to store a full RGB triplet per pixel, as they are down-rezzed after the Bayer interpolation, and the file size (1/2 the size for 1/4 the pixels) agrees.

I've been meaning to get around to a proper print-based comparison between my 40D's 10 Mp RAW and 2.5 Mp sRAW. Is this something that would interest anyone? Care to suggest a methodology: scene type, print size, etc?

Me again. I have to agree regarding Steve Bruhn's above comment about lens quality as a (newish?) limiting factor.

The dpreview coverage of the 15 Mp 50D states minimal gain in real resolution over the 10 Mp 40D, in spite of using some of Canon's best primes. Add to this the increased ISO noise, and I'd suggest we have hit the wall for APS-C sensor quality.

I'll be moving to 35mm DSLRs as soon as I can justify it. :)

Dear Frank,

It not only works, it's been used in scientific cameras for at least 25 years, at 4x4 (16 sub-pixels) and even higher resolutions. I think there's also been at least one commercial studio back that used this technique (although I wouldn't swear to that). The concept is frequently implemented in one dimension in flatbed scanners (the ones that report optical resolutions that aren't symmetric, like 1200 x 2400 or 1200 x 4800). You've even seen it in some of the Mars lander and rover photos; the ones labeled as "superresolution."

It's also known as "dithering" and "supersampling." It's worked well in cameras, but usually not so well in flatbed scanners, because most of them don't have the accuracy of focus needed to resolve sub-pixel detail-- a failure in implementation, not concept.

There are two disadvantages to this technique for ordinary photography. The minor one is that your bit depth goes down as the logarithm of the number of pixels. If you're quadrupling the number of pixels, you lose two bits of tonal depth. At 16 X scanning, you're losing four bits.
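Ctein's bit-depth figures are just base-2 logarithms; a quick check of the arithmetic in Python:

```python
import math

def bits_lost(sub_positions):
    """Tonal bits given up when N shifted sub-pixel positions are combined,
    per the logarithmic relationship described above."""
    return math.log2(sub_positions)

print(bits_lost(4))   # 2.0: 2x2 shifting (quadrupling pixels) costs two bits
print(bits_lost(16))  # 4.0: 4x4 scanning costs four bits
```
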

The major drawback is that your exposure times are much longer. It takes a short-but-finite time to reposition the sensor, so a 4X sampling takes more than four times as long as a single exposure (even if reading out the data from a single exposure occurred infinitely fast, which it doesn't). The motion blur caused by the longer effective exposures is much less acceptable, because it's a series of sharp displaced images, a quadrupling of the original image rather than a simple less-noticeable smearing. So the technique is only suitable for subjects that don't move.


~ pax \ Ctein

Ctein, you have to compare sensors of the same generation and process. Nobody would argue that a newer sensor will tend to be better than an older one. The argument is that given the same generation and same type of sensor hardware, there is a direct relationship between sensor site size and noise and high-ISO characteristics. This relationship fully holds.

What it means is that if manufacturers would make lower-resolution versions of the high-resolution sensors they sell us, the lower-resolution sensors would be much better in these respects. And if they'd give the cameras attached to those sensors the same amount of processing power, the difference would be even greater (fewer sensor sites means a higher computation budget per site for noise reduction and so on).

Dear Janne,

Unfortunately hardly anyone who talks about big sensors versus small sensors qualifies that by saying they are only talking about the same generation and type of sensor. You and I are very much in the minority. And that's why it's necessary for me to repeat this over and over again, to correct the rampant and dominant misinformation that gets promulgated.

Even then there is not a direct relationship as you assert. How the signal is processed has a huge impact on the quality of the image you see. And that processing is both distinctive and proprietary to each camera manufacturer.

So, even if you find two cameras by two different manufacturers that have exactly the same sensor of the same generation, you don't have any particular assurance that they will perform the same. More often than not, they won't.

The "ifs" you bring up don't mesh with the reality of what's available. That means attempting to define image quality by sensor size for real, existing cameras (as opposed to wish-fulfillment ones) is substantially inaccurate.

~ pax \ Ctein


Paul,

To answer your first question, yes. Increasing the number of megapixels results in smaller photosites. Usually.

The caveat is that a sensor isn't like an image. A sensor isn't a continuous field of pixels/photosites running seamlessly from one to the next. Each photosite is surrounded by its own small supporting electronics, and thus there is something of a buffer between actual pixels. Technology is allowing this buffer to shrink, so that, for example, the actual photosites on the 14MP Pentax K20D aren't that much smaller than those on the 10MP Pentax K10D.

Differences in all sorts of technologies account for different image quality. The Nikon D700 has (according to dpreview) 1.4 MP per square cm. The Nikon D1 had only 0.7 MP per square cm. Those D1 pixels were much bigger, but I don't think anyone is going to argue that the per-pixel image quality of the two is even in the same ballpark. Nine years of technology upgrades will do that.

Differences in sensor technology (CCD, CMOS, nMOS, Foveon), image processing electronics, and a hundred other things will always muddy the water in such cases. That no two cameras with different sensors ever have the same pipeline behind them will always leave questions over where the difference is occurring. But it is a matter of science that if we could assume the same exact technology front-to-back and only vary the megapixel count on two different identically-sized sensors of the same type (CMOS, for example), made in the same way, then the lower megapixel sensor will deliver superior results in terms of noise, dynamic range, and a number of other variables at the cost of absolute resolution.
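The "same technology, bigger photosites" part of this argument comes down to photon shot noise, which grows only as the square root of the signal. A toy Python model (the photon counts are invented for illustration):

```python
import math

photons_per_unit_area = 10_000  # hypothetical photon catch per unit area

# Same total sensor area, same exposure, same imagined technology:
# one quarter the pixel count means four times the area per photosite.
for label, area in [("24 MP photosite", 1.0), ("6 MP photosite", 4.0)]:
    signal = photons_per_unit_area * area
    shot_noise = math.sqrt(signal)  # Poisson shot noise ~ sqrt(signal)
    print(label, "SNR:", signal / shot_noise)
# SNR doubles: it scales with the square root of photosite area.
```

Under these idealized assumptions, quadrupling the photosite area doubles the signal-to-noise ratio, which is the lower-noise, higher-dynamic-range advantage described above.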

Dear Ctein,

I don't understand. Why does the bit depth have to go down when you move the sensor? Surely you're reading off the same sensor pixels with the same image processing hardware.

Dear Frank,

The very simplified explanation is that to compute the value of the subpixels, you need to take the differences of full-pixel readings in different positions. The differences are smaller, but the least-significant-bit error doesn't shrink. So the relative error grows, and that's what defines your real bit depth.

pax / Ctein

Ah, I see - you can extract a lot of extra numbers but the amount of meaningful difference doesn't rise quite as fast. Thank you.

The comments to this entry are closed.