Mike's post some time back about the deficiencies of CD audio had more to do with photography than you might think. Audiophiles have long been aware that the 44.1 kHz sampling rate for digital sound is not adequate for full fidelity. Mathematically savvy audiophiles and signal specialists even understand why.
Only a few folks realize these problems occur in digital imaging (both in-camera and scanning film) as well.
The problem lies in the popular misunderstanding of the Nyquist-Shannon criterion: the assertion that in order to record frequencies up to X, you must sample at a rate of 2X. It even seems intuitive; you need two pixels to convey a line pair, one pixel for white and one pixel for black.
The problem is that the mathematical theorem doesn't apply to the real world. For the math to work, three conditions must hold. First, there can be no spatial (or audio) frequencies of X or greater in the image (or signal) you're trying to digitize. Second, the samples you take must be instantaneous. Finally, to accurately reconstruct the original signal from the samples, you must do an infinite amount of calculation.
The third problem is the lesser one. The first two are killers. The only frequency-limited signals are near-constant ones. Any signal that varies rapidly in space (or time) has frequency components that exceed the sampling limit, and those create false frequencies when you reconstruct the image. Realistically, you're safe if the signal stays constant for 15 or 20 wave cycles, but most real-world images (and audio) have content that changes a lot faster than that.
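You can watch an over-the-limit frequency turn into a false one with a few lines of code. This is a minimal sketch (the 100-per-second sample rate and the 70-cycle tone are illustrative numbers I've chosen, not anything from audio practice): a 70 Hz tone sampled at 100 samples per second produces exactly the same samples as a 30 Hz tone would.

```python
import math

def sample(freq_hz, rate_hz, n_samples):
    """Sample a unit-amplitude sine of freq_hz at rate_hz samples/sec."""
    return [math.sin(2 * math.pi * freq_hz * n / rate_hz)
            for n in range(n_samples)]

rate = 100.0                      # Nyquist limit is 50 Hz
high  = sample(70.0, rate, 8)     # 70 Hz: above the limit
alias = sample(-30.0, rate, 8)    # the false (phase-inverted) 30 Hz tone it folds down to

# Sample for sample, the two are indistinguishable:
for h, a in zip(high, alias):
    assert abs(h - a) < 1e-9
```

Once the samples are recorded, no reconstruction scheme, however clever, can tell the 70 Hz original from the 30 Hz impostor; the information is gone.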
Audio can approximate instantaneous samples. Photography can't: a single pixel records the average signal strength over the entire sample window. That creates even more problems. I earlier described a line pair represented by one black pixel and one white pixel. Well, suppose you shift the whole scene by one half pixel (see figure 1). Now each pixel sees half a black line and half a white line; every pixel comes out 50% gray. This is not good!
Figure 1. Two pixels can't unambiguously resolve a line pair. When the white and black bars are lined up with the pixels (top figure), the pixels can record the bars with 100% clarity. But, when the bars are 90 degrees out of phase with the pixels, all the pixels will see half a white bar and half a black bar and report a uniform gray.
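The failure in figure 1 is easy to reproduce numerically. This sketch (the function name and details are mine, not from any imaging library) box-averages a black-and-white bar pattern over each pixel's width, the way a photosite integrates the light falling on it:

```python
import math

def pixel_values(phase_shift, n_pixels=8, subsamples=1000):
    """Average a bar pattern (white = 1.0, black = 0.0, period = 2 pixels)
    over each pixel's full width, as a sensor photosite does."""
    out = []
    for p in range(n_pixels):
        total = 0.0
        for s in range(subsamples):
            x = p + (s + 0.5) / subsamples + phase_shift
            total += 1.0 if math.floor(x) % 2 == 0 else 0.0
        out.append(total / subsamples)
    return out

print(pixel_values(0.0))  # aligned: 1.0, 0.0, 1.0, ... -- full contrast
print(pixel_values(0.5))  # shifted half a pixel: 0.5 everywhere -- uniform gray
```

Same scene, same sensor; a half-pixel shift is the difference between perfect bars and featureless gray.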
Most of you have seen this in action; it's the moiré patterns you get when the fine detail you're trying to photograph approaches the pixel size (figure 2). Putting it in mathematical terms, phase differences are getting confused with amplitude differences.
Figure 2. Sampling failure in action. This resolution target was scanned at 300 ppi. The scan indeed "resolves" around 300 lines (150 line pairs) per inch, but there's substantial moiré, inversion, and distortion because of aliasing. It would be a lot cleaner, although not perfect, at half that frequency—150 lines per inch.
You don't notice it much with randomly oriented fine detail, but the distortions still exist. Fortunately our eyes aren't very sensitive to them; unfortunately, our ears are. That is why digital photographs can look so subjectively good and digital audio sound so subjectively bad.
The real-world fix is "oversampling" (really, it's merely sufficient sampling). Sample at four times the frequency you want to record and it's pretty good. Sample at eight times and it's really good. But sample at the Nyquist-Shannon limit of only two times that frequency? Don't expect "high fidelity"; it's simply not possible.
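Here's a quick numeric check of that claim, using a pure sine tone (the frequencies are chosen only for illustration). At exactly twice the signal frequency, the amplitude you measure depends entirely on where the samples happen to fall; at eight times, you capture most of the amplitude no matter the phase.

```python
import math

def measured_peak(freq_hz, rate_hz, phase, n_samples=64):
    """Largest absolute sample value seen for a unit-amplitude sine."""
    return max(abs(math.sin(2 * math.pi * freq_hz * k / rate_hz + phase))
               for k in range(n_samples))

# Sampling at exactly 2x (the Nyquist-Shannon limit):
print(measured_peak(1.0, 2.0, 0.0))           # every sample lands on a zero crossing
print(measured_peak(1.0, 2.0, math.pi / 2))   # every sample lands on a peak
# Sampling at 8x: most of the true amplitude, for any phase
print(measured_peak(1.0, 8.0, 0.0))
print(measured_peak(1.0, 8.0, 1.234))
```

At 8x the worst-case phase still yields cos(π/8), about 92% of the true amplitude; at 2x the worst case is zero—the signal can vanish entirely.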