The recent death of Willard Boyle, Nobel Prize winner for the invention of modern digital photography (a.k.a. the CCD), has brought out a certain amount of armchair quarterbacking. Understand that this is entirely normal when Nobel prizes are involved. Sometimes it is just sour grapes and sometimes it is entirely justified. I have reason to think this time it's more the former.
I have some modest knowledge, because I also invented the CCD.
The only minor problem with me laying claim to fame is that I was two years late! Unlike the Olympics, second place in science does not get you a medal. Had I done my homework properly I could've saved myself the trouble of even inventing it.
That's where part of my doubt about the sour graping comes from. I wouldn't have found this in the earlier literature if it weren't being discussed as an imaging device. There were a zillion electronic devices being invented and the term "charge coupled device" wouldn't have meant a thing to me.
The other reason for doubting the sour grapes is that it's a terribly obvious use of the device. Quite possibly Boyle did invent it as a potential memory device, but the imaging and light sensitivity properties of silicon were well known and regularly used in the industry, and I doubt that there would've been more than 10 minutes conversation at lunch the first time he talked about it with his colleagues before somebody, him or someone else at the table, would've said "That could also be a great device for capturing and storing an optical signal." And everyone's reaction would've been, "Of course!" It's a no-brainer.
The consequences are also pretty much no-brainers. I'm sure I wasn't the only one to design an entirely modern digital camera back in 1971. It's a simple intellectual exercise once you imagine the sensor exists. I'd be amazed if similar designs didn't exist at Bell Labs and a number of other places. Now, what all of us did was put those designs aside as being somewhere in the distant future, until someone more obsessed than us (Steve Sasson at Kodak) decided to build a prototype in 1975. For which he deserves full credit—it's easy to scribble cool stuff on paper, it's a lot harder to build. But the concept was neither new nor radical.
Now inventing the CCD itself, that's not a no-brainer, which is why Boyle deserves his prize.
Me, I came to the CCD by an entirely different route. I was actually trying to solve a photography problem. As I've mentioned previously, solar astronomy is always starved for light, because you're throwing away 99.99% of the photons to look at the very few that you care about. In 1970 we were pushing hard up against the limits; it was routine to develop the film to gamma-max to extract both maximum sensitivity and maximum contrast from the image.
I knew that electronics was already much, much more efficient than film at using photons. None of the electronic technologies I knew of, though, were suitable for serious scientific imaging. So I decided to look at the problem sideways: what sort of basic photoelectric devices were there, and could I come up with a new one, since the existing ones were all unsatisfactory?
Okay, voltage/current sources—that's obvious. Solar cells of any kind. Well known, so scratch that. What other basic electrical components are there?
Resistors? Unfortunately, also well-known: the cadmium-sulfide photocell. Nothing helpful there.
Inductors? Hmmm, I'd never heard of a photo-inductor. I also couldn't imagine how to make one.... Or how I would use it if I could (still can't). Table that one.
That leaves capacitors among the basic electrical devices. I'd never heard of a photo-capacitor either. But, wait—I know how to make one, and it's easy! Take a silicon photocell, but instead of hooking up the lower region to an electrode to drain the photocurrent, just let those electrons pile up there. A depleted region will store electrons just like a capacitor, up to a point. Now, make this a potential well, so the electrons will stay trapped there until the well fills up or they get shoved out. Add a gate electrode so that you can change the potential on the well; then, after you've collected the exposure signal, you change the differential voltage on the region and you can dump the electrons out to a signal line.
This was also stuff that everyone already knew how to integrate, as field effect transistors and as CMOS. I could put a whole bunch of these little silicon photocells backed up by collection wells on a single chip. It's simple fabrication. For a modest number of cells, I could have individual readouts, but for large arrays of them it would make more sense to set them up like a clocked shift register so that each batch of electrons went, in bucket-brigade fashion, down a row of cells to the output electrode. That would keep the number of electrodes reasonable even if the cell count got very high.
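The bucket-brigade readout described above is easy to picture as a little simulation. Here's a minimal sketch (my illustration, not Ctein's actual design notes): each cell's well accumulates photoelectrons up to a full-well limit, then a clock shifts every charge packet one cell toward the output electrode per cycle, so a single output line serves the whole row. The cell values and the full-well capacity are made-up numbers for illustration.

```python
FULL_WELL = 50_000  # illustrative full-well capacity, in electrons

def expose(photons_per_cell):
    """Accumulate photoelectrons in each well, clipping at full well."""
    return [min(p, FULL_WELL) for p in photons_per_cell]

def read_out(wells):
    """Clock the row like a shift register: each cycle, every charge
    packet moves one cell toward the output, and the packet that
    reaches the output electrode is measured."""
    wells = list(wells)
    signal = []
    for _ in range(len(wells)):
        signal.append(wells.pop())   # last cell's charge reaches the output
        wells.insert(0, 0)           # an empty well enters at the far end
    return signal

row = expose([120, 60_000, 300, 0])  # the second cell saturates
print(read_out(row))                 # → [0, 300, 50000, 120]
```

The charges emerge output-end first, which is exactly why the electrode count stays small: one readout per row, regardless of how many cells are in it.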
That's the way invention works. Inspiration, followed by the realization that it can be implemented...followed by implementation. I stopped at realization, once I discovered I was two years behind. And as for the digital cameras, everyone stopped at realization, until Sasson at Kodak moved to implementation.
From there it's a mere hop, skip, jump, and another third of the century to get us to where we are today.
Ctein's column appears on TOP on Wednesdays.
Original contents copyright 2011 by Michael C. Johnston and/or the bylined author. All Rights Reserved.