You think I'd lie about something like that?
Noise is the bête noire of both the idealist and the fetishistic pixel peeper, but in the real world appropriate levels of noise improve signals.
Ears use noise to improve their sensitivity. There's a signal threshold below which a hair cell in the inner ear won't respond (else we'd be deafened by the din of molecular vibration). Yet, those cells respond to audio signals several dB below that threshold, because random noise in the system mixes with the signal and occasionally kicks it up above the cell's threshold. The response isn't anything like clean, but a static-degraded response is better than none at all.
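Here's a little toy model of that effect, if you care to see it in numbers. It's a sketch only; the threshold, signal amplitude, and noise level are all made up, chosen purely so the behavior is visible:

```python
import numpy as np

# A toy "hair cell": it fires only when its input exceeds a fixed threshold.
rng = np.random.default_rng(0)

t = np.linspace(0, 1, 10_000)
threshold = 1.0
signal = 0.7 * np.sin(2 * np.pi * 5 * t)        # always below the threshold

fires_clean = signal > threshold                                  # never fires
fires_noisy = (signal + rng.normal(0, 0.4, t.size)) > threshold

print("crossings without noise:", fires_clean.sum())   # 0
print("crossings with noise:   ", fires_noisy.sum())   # a ragged, nonzero count
# The firing pattern loosely tracks the signal: degraded, but better than nothing.
print("correlation with signal:", round(np.corrcoef(fires_noisy, signal)[0, 1], 3))
```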
Most computer printers produce no more than a handful of gray levels. The way they generate the illusion of continuous tone is by injecting arithmetical noise (random numbers) into the process of deciding whether or not to lay down an ink droplet. The probability that any droplet will be laid down depends on the strength of the signal, but it wouldn't be a probabilistic matter at all were it not for the noise.
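If you want to see how simple that trick really is, here's a rough Python sketch of the idea (the gradient "image" and the bare random-threshold rule are my stand-ins; real printer firmware uses far more sophisticated dithering):

```python
import numpy as np

rng = np.random.default_rng(1)

tone = np.linspace(0.0, 1.0, 256)       # 0 = bare paper, 1 = solid ink
image = np.tile(tone, (64, 1))          # a smooth gradient standing in for a photo

# One bit per pixel: each droplet decision compares the local tone against a
# freshly drawn random number, so the probability of ink tracks the signal.
dots = image > rng.random(image.shape)

# Viewed from a distance (averaged over a neighborhood), the binary dot
# pattern reproduces the continuous tone it came from.
print("ink coverage at mid-gray:", round(dots[:, 128].mean(), 2))   # about 0.5
print("ink coverage near white:", round(dots[:, 10].mean(), 2))     # close to 0
```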
Digital cameras benefit greatly from a little noise. I can illustrate why in the accompanying figure. The top bar is just an ordinary continuous gradient from black to white: this is our stand-in for a real-world scene. I've expanded the shadow detail, bracketed by the blue lines, to full image width in the second row, for scale. Since the tones are all near-black, you probably can't see any difference on your screen, but I'll fix that.
In the bottom three rows, I've expanded the gray scale only in the illustration, so you can see the gradient in the shadows. In other words, the gradient in the third row is really the same as the second row—you're just kind of viewing it through a contrast-magnifying glass.
Now, what happens when you digitize that shadow range with a hypothetical noise-free camera? You get the fourth row in the illustration. (Remember that this is still really covering the same tonal range as the second row.) The continuous grayscale ends up being mapped to a baker's dozen of sharply defined gray levels. Were a photographer trying to enhance the shadow detail in this "scene," they'd be left with spurious banding and very poor tonal rendition.
Suppose the camera were a little noisy, as any real-world camera will be? That's what the bottom row shows: I injected a low level of noise into the original gradient before digitizing it. That is, the fourth and fifth rows have had exactly the same conversions and presentations. The only difference is in the system noise.
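For anyone who'd like to reproduce the experiment, here's a rough simulation in Python; the roughly-13-level quantizer and the noise amplitude are my own guesses, chosen just to mimic the fourth and fifth rows:

```python
import numpy as np

rng = np.random.default_rng(2)

# A narrow, near-black slice of tones, like the expanded shadow band above.
shadow = np.linspace(0.0, 0.05, 2000)
step = 0.05 / 13                        # roughly a baker's dozen of gray levels

def digitize(x):
    # Snap every value to the nearest available gray level.
    return np.round(x / step) * step

banded = digitize(shadow)                                           # noise-free camera
dithered = digitize(shadow + rng.normal(0, step / 2, shadow.size))  # slightly noisy camera

print("distinct levels, noise-free:", np.unique(banded).size)
# Averaging the noisy version over a small neighborhood tracks the original
# gradient, while the noise-free version stays locked to a few hard levels.
smoothed = np.convolve(dithered, np.ones(25) / 25, mode="same")
print("mean error, noise-free:", round(np.abs(banded - shadow).mean(), 5))
print("mean error, noisy then smoothed:", round(np.abs(smoothed - shadow).mean(), 5))
```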
I trust I've made my point.
Too much noise is a bad thing...true of most things in the real world. But a little bit of noise? That's a very, very good thing!
Featured Comment by richardplondon: "I think it sheds more light to talk about quantisation than digitisation. Every aspect of what we do digitally with images is subject to this.
"But: we have no completely clean data, analog or digital, anyway—unless we make it so artificially (through active noise reduction). Photons are not nicely behaved—they are little scamps. Only a very tiny degree of jitter is ever deliberately introduced in digital applications (to reduce quantisation artefacts), and I don't think this is being proposed here; not necessary.
"I read this useful article as signifying that we should not officiously clean up digital source data with noise reduction, to the point where we would start to see these other unpleasant artifacts.
"Shot noise, for example, is a true record of the way that some actual light sprinkled onto the sensor. It is authenticity—reality—just as film grains are. Of course we all hope at the same time that read noise and other processing errors are improvable, just as with film grain size/clumping."