When you're shopping for a camera, you want to know how that particular model performs. Trend lines and averages are useful for winning bar bets or figuring out where the industry might be going. They're a very bad way to buy a camera. People here are smart enough not to buy a camera based solely on technical paper specs, right? Well, deciding that a camera is or isn't for you based entirely on a trend line or the average performance of cameras in its class is even worse. You're not even basing your decision on the paper specs for that specific camera, but on how some hypothetical camera of that general class might perform, if it performed like the average of all the cameras that came before it. Does that make sense?!
What you should be looking at is not averages and trends, but scatter. What you're really interested in is how individual models may differ from that average. If the scatter is very small, then the average or trend is a good indicator. If the scatter is large, it isn't.
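To make the point concrete, here's a minimal sketch with invented numbers (not real camera measurements): two classes of cameras with the identical average noise score, where the average predicts individual models well in one class and poorly in the other.

```python
import statistics

# Hypothetical noise scores (dB) for cameras in two classes.
# All values are invented for illustration -- not real measurements.
class_a = [32.1, 32.3, 31.9, 32.0, 32.2]   # small scatter
class_b = [28.0, 36.5, 30.2, 34.8, 31.0]   # large scatter

for name, scores in [("A", class_a), ("B", class_b)]:
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    print(f"class {name}: mean = {mean:.1f} dB, stdev = {spread:.1f} dB")

# Both classes have exactly the same mean. In class A the standard deviation
# is tiny, so the class average is a fair predictor of any individual model.
# In class B a specific model can sit several dB from the average, so the
# average tells you almost nothing about the camera you'd actually buy.
```

Same average, wildly different usefulness of that average: that's the whole argument in five lines of arithmetic.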
Back when cameras moved from six megapixels to eight (and from eight to ten), I warned people that the performance of cameras varied enough from model to model that the best of the six-megapixel cameras were going to resolve comparably to the worst of the eight-megapixel cameras. Sure, on average eight resolved better than six, but the average was of no help in deciding whether to buy a particular model.
I still read frequent comments in these pages from readers who pronounce that a camera with some spec (megapixels, sensor size, pixel count, bit depth, you name it) is ipso facto inferior to a camera with a different spec solely because of some average performance indicator. That's the same mistake. As I said, it may win bar bets, but it's not a way of ensuring that you're buying the camera that's best for you.
Let's take a look at a couple of the graphs on DxOMark to further illustrate what I'm talking about:
Both of these graphs show trends in noise and pixel count. Note how shallow the trends are compared to the scatter of the individual points (camera models). Even within a single manufacturing year, the variations in camera specs are greater than the trends over four or five years. This year's camera may not only be no better than last year's, it may be worse than one released several years earlier.
Since both plots use introduction date as the horizontal axis, you can match up pixel counts with noise between them. Sometimes there's a good correlation; sometimes it's really lousy. For example, in the first half of 2007, there is a very nice match between more pixels and worse noise. On the other hand, three cameras introduced in the first half of 2004, with a threefold difference in pixel count, all have essentially the same noise level.
The third graph normalizes everything to pixel pitch, which is the new fave metric. And, yes, the data clusters more tightly, but there are still variations of several dB from the average. In fact, if one ignores the two very poorly performing cameras in the early years, there's no obvious trend at all; it's pretty much a flat line with any given model varying by a couple of dB. While useful for analyzing trends, it won't tell you which particular camera model to buy.
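For anyone unsure what pixel pitch is: it's just the sensor's linear dimension divided by the pixel count along that dimension. A quick sketch, using hypothetical round numbers rather than any specific camera's specs:

```python
# Pixel pitch = sensor width / horizontal pixel count.
# The dimensions and resolutions below are illustrative round numbers.

def pixel_pitch_microns(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Return the center-to-center pixel spacing in microns."""
    return sensor_width_mm * 1000.0 / horizontal_pixels

# A full-frame sensor (36 mm wide) vs. an APS-C-sized sensor (~23.6 mm wide),
# both at the same hypothetical horizontal resolution of 4256 pixels:
print(pixel_pitch_microns(36.0, 4256))   # larger pitch
print(pixel_pitch_microns(23.6, 4256))   # smaller pitch

# Same pixel count, very different pitch -- which is why normalizing noise
# to pitch clusters the data more tightly than plotting against megapixels.
```

Note that pitch collapses two specs (sensor size and pixel count) into one number, which is exactly why the scatter shrinks when you plot against it.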
As the folks at DxO said, "Another thing to notice is that for a given release year, the dispersion across cameras is of approximately the same magnitude as the improvement over the past 5 years. This means that ... the differences among current DSLRs can be larger than the overall technology gain over the past five years."
They also note, later on, that Canon, on average, has the best pixel performance, but the single best-performing full-frame DSLR model happens to be a Nikon.
That's it for today's sermon. Go and spend your money wisely!