By Ctein
The recent death of Willard Boyle, Nobel Prize winner for the invention of modern digital photography (a.k.a. the CCD), has brought out a certain amount of armchair quarterbacking. Understand that this is entirely normal when Nobel Prizes are involved. Sometimes it is just sour grapes and sometimes it is entirely justified. I have reason to think this time it's more the former.
I have some modest knowledge of the matter, because I also invented the CCD.
The only minor problem with my laying claim to fame is that I was two years late! Unlike the Olympics, second place in science does not get you a medal. Had I done my homework properly, I could've saved myself the trouble of even inventing it.
That's where part of my doubt about the sour-graping comes from. I wouldn't have found it in the earlier literature if it weren't being discussed as an imaging device. There were a zillion electronic devices being invented, and the term "charge coupled device" wouldn't have meant a thing to me.
The other reason for doubting the sour grapes is that it's a terribly obvious use of the device. Quite possibly Boyle did invent it as a potential memory device, but the imaging and light-sensitivity properties of silicon were well known and regularly used in the industry. I doubt there would've been more than ten minutes' conversation at lunch, the first time he talked about it with his colleagues, before somebody, him or someone else at the table, would've said, "That could also be a great device for capturing and storing an optical signal." And everyone's reaction would've been, "Of course!" It's a no-brainer.
The consequences are also pretty much no-brainers. I'm sure I wasn't the only one to design an entirely modern digital camera back in 1971. It's a simple intellectual exercise once you imagine the sensor exists. I'd be amazed if similar designs didn't exist at Bell Labs and a number of other places. Now, what all of us did was put those designs aside as belonging somewhere in the distant future, until someone more obsessed than the rest of us (Steve Sasson at Kodak) decided to build a prototype in 1975. For which he deserves full credit: it's easy to scribble cool stuff on paper, and a lot harder to build it. But the concept was neither new nor radical.
Now inventing the CCD itself, that's not a no-brainer, which is why Boyle deserves his prize.
Me, I came to the CCD by an entirely different route. I was actually trying to solve a photography problem. As I've mentioned previously, solar astronomy is always starved for light, because you're throwing away 99.99% of the photons to look at the very few that you care about. In 1970 we were pushing hard up against the limits; it was routine to develop the film to gamma-max to extract both maximum sensitivity and maximum contrast from the image.
I knew that electronics was already much, much more efficient than film at using photons. None of the electronic technologies I knew of, though, were suitable for serious scientific imaging. So I decided to look at the problem sideways: what sorts of basic photoelectric devices were there, and could I come up with a new one, since the existing ones were all unsatisfactory?
Okay, voltage/current sources—that's obvious. Solar cells of any kind. Well known, so scratch that. What other basic electrical components are there?
Resistors? Unfortunately, also well-known: the cadmium-sulfide photocell. Nothing helpful there.
Inductors? Hmmm, I'd never heard of a photo-inductor. I also couldn't imagine how to make one.... Or how I would use it if I could (still can't). Table that one.
That leaves capacitors among the basic electrical devices. I'd never heard of a photo-capacitor either. But, wait—I know how to make one, and it's easy! Take a silicon photocell, but instead of hooking up the lower region to an electrode to drain the photocurrent, just let those electrons pile up there. A depleted region will store electrons just like a capacitor, up to a point. Now, make this a potential well, so the electrons will stay trapped there until the well fills up or they get shoved out. Add a gate electrode so that you can change the potential on the well; then, after you've collected the exposure signal, you change the differential voltage on the region and you can dump the electrons out to a signal line.
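To make that collect-and-dump cycle concrete, here's a minimal sketch of a single photo-capacitor in Python. The full-well capacity, quantum efficiency, and photon flux are invented numbers, purely for illustration; they're not taken from any real device.

    FULL_WELL = 50_000   # assumed full-well capacity, in electrons (illustrative only)

    def expose(photon_flux, seconds, quantum_efficiency=0.6):
        """Collect photoelectrons in the well; the charge clips at full-well capacity."""
        electrons = int(photon_flux * seconds * quantum_efficiency)
        return min(electrons, FULL_WELL)

    def dump(well_charge):
        """Toggle the gate voltage: stored charge goes to the signal line, the well empties."""
        return well_charge, 0   # (signal sent out, charge left behind in the well)

    charge = expose(photon_flux=100_000, seconds=1.0)   # a bright pixel: 60,000 electrons' worth
    signal, remaining = dump(charge)
    print(signal, remaining)                            # -> 50000 0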
This was also stuff that everyone already knew how to integrate, as field effect transistors and as CMOS. I could put a whole bunch of these little silicon photocells backed up by collection wells on a single chip. It's simple fabrication. For a modest number of cells, I could have individual readouts, but for large arrays of them it would make more sense to set them up like a clocked shift register so that each batch of electrons went, in bucket-brigade fashion, down a row of cells to the output electrode. That would keep the number of electrodes reasonable even if the cell count got very high.
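And the bucket-brigade readout is just a shift register run in slow motion. Here is a toy version, again only a sketch of the general idea rather than a model of how any real CCD is clocked:

    def bucket_brigade_readout(row):
        """Shift a row of collected charge packets toward a single output electrode."""
        signal = []
        wells = list(row)
        for _ in range(len(wells)):
            signal.append(wells[-1])      # the packet at the end falls onto the output line
            wells = [0] + wells[:-1]      # every other packet moves one cell down the row
        return signal

    print(bucket_brigade_readout([120, 30_000, 4_500, 900]))
    # -> [900, 4500, 30000, 120]  (one output connection, no matter how long the row)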
That's the way invention works. Inspiration, followed by the realization that it can be implemented...followed by implementation. I stopped at realization, once I discovered I was two years behind. And as for the digital cameras, everyone stopped at realization, until Sasson at Kodak moved to implementation.
From there it's a mere hop, skip, jump, and another third of the century to get us to where we are today.
Ctein
Ctein's column appears on TOP on Wednesdays.
Original contents copyright 2011 by Michael C. Johnston and/or the bylined author. All Rights Reserved.
Worked on something similar back in the late '60s: one of the first digital light meters that sat in the film plane and measured time + intensity, giving you a good-to-go reading. When one of the first Epson inkjets came out, I put it to work making really bad inkjet prints. Boy, did I want to get out of the darkroom.
At the photo show that year in NY, the Epson guys said, "It can print photos? We never thought of that..."
Did a lot of work in the 3D TV area also; they're just getting there.
Posted by: Carl L | Wednesday, 01 June 2011 at 06:39 PM
Now that was a good read, a larger window into all things Ctein.
Posted by: Steve Weeks | Wednesday, 01 June 2011 at 10:13 PM
I keep toying with how you could make a device on which each sensor receptor site would record the time it took for x photons to strike it, rather than counting the number of photons that strike the site in x amount of time.
Getting a clock register for each site would be tricky, but then you could do all sorts of neat things, like short exposure times in the highlights and long exposures in the shadows with no noise or clipping, and an exposure range that would maybe not be unlimited but would be pretty big.
You could change the f opening during exposure so that the highlights had more depth of field than the shadows. All sorts of neat stuff.
Posted by: hugh crawford | Thursday, 02 June 2011 at 01:36 AM
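To put a rough number on Hugh's scheme, here's a small Python sketch of one receptor site working that way. The threshold and the photon rates are made up for illustration; the point is just that the highlights finish quickly while the shadows keep integrating.

    import random

    THRESHOLD = 1000   # photons collected before a site's clock stops (an assumed figure)

    def time_to_threshold(photon_rate):
        """Sum Poisson inter-arrival times until THRESHOLD photons have landed on the site."""
        t = 0.0
        for _ in range(THRESHOLD):
            t += random.expovariate(photon_rate)   # seconds between successive photon arrivals
        return t

    for rate in (1e7, 1e5, 1e3):                   # highlight, midtone, deep shadow
        t = time_to_threshold(rate)
        print(f"{rate:>10.0f} photons/s -> clock stops at {t:.4f} s, brightness ~ {THRESHOLD / t:.0f}")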
Not intending to rain on Hugh's idea (and certainly don't know anything about the engineering he's supposing), but wouldn't changing the aperture during exposure change the focal plane as well as depth of field?
Of course, since we're talking hypothetically, I suppose that could be accounted for in software.
Patrick
Posted by: Patrick Perez | Thursday, 02 June 2011 at 04:50 PM
Dear Patrick,
The position of the plane of sharpest focus only changes with aperture due to uncorrected spherical aberration. In a high-quality lens this focal plane shift can be ignored.
~~~~~~
Dear Hugh,
That's an interesting idea. NASA has several designs for extreme-range sensor arrays (in excess of 20 stops of exposure range) for use in exoplanet-observing space telescopes. I don't think any of them gate the receptors using count rates rather than integrated flux, but that might be a failure of my recollection.
In any case, I think it's a rather interesting idea. Another application would be a way to build “characteristic curves” into the sensor response. You like a characteristic curve with brilliant midtones but a really long shoulder so that you don't blow out highlights (although they'll be very low in contrast)? Adjust the gating so that it's fairly minimal for lesser exposures and kicks in rapidly in the highlight regime.
Fun and games in the gedanken lab!
pax / Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
======================================
Posted by: ctein | Thursday, 02 June 2011 at 05:14 PM
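For what it's worth, the "characteristic curve built into the sensor" idea from Ctein's reply above is easy to play with on paper. Here's a throwaway sketch; the curve shape and constants are invented, chosen purely for illustration: linear through the midtones, then a soft shoulder that compresses the highlights instead of clipping them.

    import math

    def shouldered_response(exposure, knee=0.7, full_scale=1.0):
        """Linear below the knee; a soft tanh shoulder above it, so highlights compress instead of clipping."""
        if exposure <= knee:
            return exposure
        headroom = full_scale - knee
        return knee + headroom * math.tanh((exposure - knee) / headroom)

    for e in (0.2, 0.6, 0.9, 1.5, 3.0):
        print(f"relative exposure {e:>4} -> recorded value {shouldered_response(e):.3f}")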
Without using a locomotive or a flashlight -
Maybe instead of having a clock register for every photodetector, you could read out the entire array very quickly but only one bit deep, sort of like the way one-bit audio sampling works.
Oh, and before anyone says "Gotcha!", I'm well aware that the quantization error would be the greatest in the highlights and the lowest in the shadows, the opposite of the way it is in current detectors (no pun intended), but you would have a lot of time to oversample the highlights.
Better yet, you could compensate by making the reverse bias higher during the exposure. At the beginning of the exposure the photodetectors would be relatively insensitive, but a few clock ticks into the exposure you would increase the reverse bias, so that by the end of the exposure you would get avalanche breakdown in the photodetectors. (Unless that's what you mean by adjusting the gating.)
Posted by: hugh crawford | Thursday, 02 June 2011 at 10:46 PM
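Hugh's one-bit, heavily oversampled readout is also easy to mock up. In the crude sketch below, the number of sub-reads and the hit rates are invented; each site's final value is simply how many of the fast binary reads registered a photon.

    import random

    SUB_READS = 8192                        # one-bit reads per exposure (an assumed figure)

    def one_bit_exposure(hit_probability):
        """Accumulate a site's value as the count of binary 'did a photon land?' sub-reads."""
        return sum(random.random() < hit_probability for _ in range(SUB_READS))

    for p in (0.9, 0.1, 0.001):             # highlight, midtone, and shadow hit rates per sub-read
        print(f"hit probability {p:>5} -> {one_bit_exposure(p)} of {SUB_READS} reads fired")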
Oh, by the way, the reason I have been thinking about this is that I have always been fascinated by the way some photosensitive materials become less sensitive to light the more exposure they get*, printing-out paper being the best-known example. I tried mimicking that effect with various compensating and exhaustion development schemes when I was using film.
*Yes, at a certain point all photosensitive materials become less sensitive to light the more exposure they get, or suffer reciprocity failure, but not in their normal use.
Posted by: hugh crawford | Thursday, 02 June 2011 at 11:03 PM
Perhaps you (and several others) did come up with the same idea a few years too late, but you have to understand that at this point in time your claim comes off sounding a little like Al Gore's "I invented the internet" claim. I think sometimes it's more prudent to just accept that somebody else got there first and resist that urge to say, "Me too!"
Posted by: John Roberts | Friday, 03 June 2011 at 05:38 AM
Dammit. I thought I invented the digital imaging device in 1977. You could take a 2K-byte dynamic RAM in a ceramic package, (carefully) pop the metal top off it, focus an image on the die, write it with all 1's, stop the dynamic refresh, wait (a really long time), and read what was left. Voilà, you had an (albeit poor) image. Used it for very early home-brew robotic vision.
Posted by: Gregg | Friday, 03 June 2011 at 09:27 AM
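Gregg's decapped-DRAM trick can be faked in a few lines as a thought experiment. The decay rate and the light levels below are invented, not measurements from any real chip: every cell starts at 1, refresh stops, and the cells under the brighter parts of the scene leak to 0 faster.

    import random

    def dram_image(light_levels, wait_seconds, decay_per_lux_second=0.02):
        """Return the one-bit 'picture' left in an unrefreshed DRAM after a long wait."""
        bits = []
        for lux in light_levels:
            p_decayed = min(1.0, lux * wait_seconds * decay_per_lux_second)
            bits.append(0 if random.random() < p_decayed else 1)   # brightly lit cells leak to 0
        return bits

    scene = [0.1, 0.5, 2.0, 10.0, 50.0]          # a dark-to-bright row of patches, in made-up units
    print(dram_image(scene, wait_seconds=5.0))   # typically something like [1, 1, 1, 0, 0]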
Dear John,
A-- Al Gore never claimed that. Context matters.
B-- If that is how my "claim" sounds to you, then you have badly misread what I wrote. Try reading the paragraph following that sentence over again. Very carefully, and with thought.
Again, context matters.
C-- I'm not running for office, and therefore I do not have to worry about every little nuance and turn of phrase, for fear someone (other than me) will decide it isn't exactly what they want to hear.
Y'see, context... oh, you know what I'm gonna say [VBG].
~~~~~~
Dear Gregg,
Heheh... and that's why you ain't rich, either. (Unless you are, for other reasons.)
I think one of the very mildly interesting things about the integrated circuit revolution is that it offered many different technical paths to imaging almost immediately. I know of a couple of other solid-state or hybrid schemes from that era.
pax / Ctein
Posted by: ctein | Friday, 03 June 2011 at 12:51 PM
I understand. A buddy and I "invented" the variable valve timing that's ubiquitous on cars today back around 1986, well before it showed up on production cars, I think, over a couple of lunch breaks while bench racing (the act of talking about how to make a car go faster). Then again, I'm sure VVT was already well underway by others before that.
Posted by: David | Friday, 03 June 2011 at 06:01 PM
Dear David,
Exactly! This column isn't about the primacy of invention; rather, it's about the process.
You don't have to be first... or unique... to go through the process. You just have to be there.
There are a handful of things I've invented first. The process wasn't any different, just the chronology.
pax / Ctein
Posted by: ctein | Saturday, 04 June 2011 at 07:49 PM
Regarding Hugh's comment, do multi-anode microchannel arrays provide a similar capability? I'm thinking in particular that the MAMA detector on the STIS instrument on Hubble is able to image in "time-tag" format. (Perhaps there are many more like it that I'm not aware of; I know the Cosmic Origins Spectrograph on Hubble also has time-tag capability, but it's a pure spectrograph with no imaging mode.)
I've never worked with time-tag data, but from what I've gathered in the data handbooks, a time-tag data product is not an image but an event stream, tagging each photon detected with a timestamp. According to the STIS data handbook, the time resolution is 125 microseconds, or 1/8000 of a second. However, the MAMA detector on STIS is only for fairly faint sources (under 50 cts/s/pixel), and I know next to nothing about the engineering behind it.
But when I first learned that this type of data product existed I was bowled over by the possibilities, as you've both mentioned them!
Posted by: Mark | Wednesday, 08 June 2011 at 01:19 PM
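Since a time-tag product is just a list of photon events, choosing the exposure after the fact amounts to binning whichever slice of events you want. Here's a minimal sketch with made-up events; only the 125-microsecond tick size comes from the handbook Mark cites.

    from collections import Counter

    TIME_RESOLUTION = 125e-6   # seconds per timestamp tick, per the STIS data handbook

    # made-up (tick, x, y) photon events -- a real product would hold millions of these
    events = [(10, 3, 4), (12, 3, 4), (500, 1, 1), (5000, 3, 4), (9000, 2, 2)]

    def build_image(events, t_start, t_stop):
        """Accumulate only the photons whose arrival times fall in [t_start, t_stop) seconds."""
        image = Counter()
        for tick, x, y in events:
            if t_start <= tick * TIME_RESOLUTION < t_stop:
                image[(x, y)] += 1
        return dict(image)

    print(build_image(events, 0.0, 0.1))   # just the first tenth of a second
    print(build_image(events, 0.0, 2.0))   # the whole (made-up) exposure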