
Monday, 13 October 2008


Comments

Am I right that this would amount to a 6- to 9-stop difference, or am I miscalculating? A 6-stop difference would make an ISO 25600 picture as clean and sharp as an ISO 400 picture. A 9-stop difference would make it possible to take an ISO 819200 (!) picture with quality similar to today's ISO 1600 (still very usable). Handheld moon- and starlit landscapes with Formula 1 races in the background, here we come. :-)
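Checking the stop arithmetic in a couple of lines (the 100x and 500x multipliers are the figures quoted for black silicon; a sensitivity factor converts to stops as log base 2):

```python
import math

# A sensitivity multiplier converts to photographic stops as log2(factor).
for factor in (100, 500):
    print(f"{factor}x sensitivity ~= {math.log2(factor):.1f} stops")

# ISO equivalent of a 9-stop gain over ISO 1600:
print(1600 * 2**9)   # 819200
```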

Very interesting. A lesson not just for Government science but all those who fund research. Time was when boffins were employed to be curious - to go find out stuff. Now it all has to have some predetermined purpose, so we're losing a whole lot of basic science as well as the giant leaps. The world will eventually be a poorer place as a result.

Note that the "100 to 500 times more sensitive" statement comes with big fat qualifiers. It doesn't mean that sensors made of the stuff can take photos at ISO 16,000,000. At that exposure you'd be photon-shot-noise limited even if you had perfect quantum efficiency and zero read noise. What SiOnyx actually refer to are two things: increased infrared sensitivity out to 2500 nm (rather than ~1000 nm for conventional silicon), and charge multiplication through a photo-avalanche process at low voltage. Electron-multiplying CCDs (EMCCDs) already do this through a series of impact-ionisation stages just before the sense node. The signal-referred read-out noise does drop considerably (though Poissonian statistics in the multiplication process partially counteract this), but the dynamic range doesn't improve, since that is limited by the charge-voltage amplifier and the maximum charge capacity of the sense node. See http://www.emccd.com/ for more information on EMCCDs.
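To put rough numbers on that read-noise-versus-shot-noise trade-off, here's a sketch; the signal level, read noise, gain, and excess noise factor are illustrative assumptions, not SiOnyx or EMCCD specifications:

```python
import math

def snr(signal_e, read_noise_e, gain=1.0, excess_noise=1.0):
    """Signal-to-noise ratio of a pixel after multiplication gain.

    signal_e:     mean photoelectrons collected
    read_noise_e: RMS read noise in electrons
    gain:         multiplication gain M (1 = no multiplication)
    excess_noise: excess noise factor F (~sqrt(2) for avalanche/EM gain)
    """
    shot = excess_noise * math.sqrt(signal_e)  # shot noise, amplified with the signal
    read = read_noise_e / gain                 # read noise, referred back to the input
    return signal_e / math.sqrt(shot**2 + read**2)

# 10 photoelectrons against 10 e- of read noise:
print(round(snr(10, 10), 2))                                       # 0.95 -- read-noise limited
print(round(snr(10, 10, gain=100, excess_noise=math.sqrt(2)), 2))  # 2.24 -- shot-noise limited
```

The gain rescues the signal from the read noise, but the multiplication statistics mean you never do better than the shot-noise limit.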

The charge multiplication effect may pose a problem in that the charge capacity of small CMOS pixels is limited. The dynamic range would be badly affected. One method to overcome this may be to dynamically control the bias voltage and hence the charge multiplication so that this is only active in very low light (at the expense of headroom).

There may be possible improvements in sensors due to this technology, but not to the extent that the phrase "100 to 500 times more sensitive" might imply. The process of creating the sulphur-incorporated silicon surface would also have to be made compatible with sensor semiconductor processing, so that is a possible hump in the road. I see the greatest gains to be had in IR detectors and sensors rather than in the visible spectrum. Getting silicon to be sensitive beyond 1 micron is the more impressive feat, IMHO.

I know nothing about this kind of thing, so these are actual questions. If one problem with current sensors is that the sensor wells are so small (at the smaller end) that there aren't enough photons coming in to get good color and DR discrimination, and yet smaller wells are needed to get the level of resolution that everybody wants...how does increased sensitivity (whatever that means in this context) supply more photons to the black silicon? Or does this mean that bigger wells (say, the D3's) will get more and more ISO range, while resolution remains lower than with, say, a 1DsIII? Or would black silicon mostly be of use for MF, where the resolution is good enough because of the acreage of the sensor, but they could really use better ISOs?

CCDs already have a quantum efficiency in the tens of percent. I don't see how there's room for 100x improvement. See, for example:

http://www.ccd.com/ccd101.html

The 500x apparently includes ranges where regular silicon isn't sensitive. For the visible range, the NY Times summary says 2x improvement in sensitivity, or about 1 stop, so don't get your hopes too high.

First, for those who care about such trivia, a "femtosecond" laser is one that produces pulses shorter than a picosecond (1 trillionth of a second or 1000 femtoseconds), but it isn't necessarily as short as one femtosecond. 10-100 femtoseconds is more common. Slooooooow [vbg]

From what I can glean from the online articles, I wouldn't expect to see a 100–500x visible-light speed increase with the same noise levels. From my reading there are three effects going on. The first is that the silicon is more efficient at capturing photons. The second is that it captures them well over a wider range of wavelengths (energies). The third is that the structure acts like an avalanche diode when biased with low voltage: a single photoelectron generates several dozen detection electrons.

The third part is where most of the improvement is coming from. Think of it as being like a sensor with a built-in light amplifier. It doesn't necessarily mean that the noise characteristics will be improved; the noise may increase along with the strength of the signal. Or not. I'm just saying, don't make assumptions about that. You may indeed get extremely high sensitivity digital cameras, but it may be kind of like using Tmax P3200 was.

The first two characteristics will give you a real speed increase. I think it's safe to say you can look for a factor of two improvement and maybe even a factor of four. I don't think it's likely to be better than that.

Not that this is anything to complain about! Suddenly getting the same noise at ISO 1600 that you previously got at 400 is nothing that anyone would turn their noses up over... especially if the camera can also be used at extremely high speeds without processing artifacts driving you nuts.

Or another way to look at it: you could have half-scale sensor cameras with as good image quality as full-frame ones or full-frame cameras with the same quality as medium-format digital backs. I don't think any of us would turn up our noses at that, either.


~ pax \ Ctein
[ please excuse any word salad. MacSpeech in training! ]
======================================
-- Ctein's online Gallery http://ctein.com
-- Digital restorations http://photo-repair.com
======================================

So how would you deal with taking pictures in bright light? Either the camera would need a built-in ND filter that you could flip in or out of the optical path, or maybe the sensor could have a matrix of normal pixels interspersed with "black" pixels and the firmware could do some sort of load balancing between the two. Presumably the second idea would be better, since it would give something like the equivalent of, let's say, seven stops of dynamic range from the normal pixels and six additional stops from the "black" pixels, which puts us right up on par with, or better than, B&W film.
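A crude sketch of that second idea; the 64x sensitivity ratio, the 12-bit full well, and the simple "use the sensitive pixel unless it clipped" rule are all hypothetical numbers, just to show the shape of the scheme:

```python
def merge_exposure(normal, black, gain=64.0, full_well=4095):
    """Combine a normal pixel and a co-sited high-sensitivity ('black')
    pixel into one extended-range value.

    gain:      hypothetical sensitivity ratio (64x ~= 6 extra stops)
    full_well: clipping level in raw counts (12-bit here)
    Returns a value on the normal pixel's scale.
    """
    if black < full_well:      # sensitive pixel hasn't clipped:
        return black / gain    # use it, rescaled to the normal pixel's scale
    return float(normal)       # otherwise fall back to the normal pixel

print(merge_exposure(normal=2, black=128))      # dark scene: 2.0, from the black pixel
print(merge_exposure(normal=3000, black=4095))  # bright scene: 3000.0, from the normal pixel
```

In the dark case the rescaled sensitive pixel gives a much less noisy estimate of the same brightness; in the bright case it has saturated and the normal pixel takes over.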

More sensitive does NOT mean larger dynamic range. DR is a measure of the range of intensity a sensor can record. What the SiOnyx sensor does have is a larger spectral range. Something that is more sensitive starts recording meaningful signal sooner, but may (though does not have to) saturate sooner.

Unless I am badly misreading the press release, this will not do much for us visible-light photographers, though it may do wonders for the IR gang, IR vision, heat-seeking missiles, IR telescopes, etc., etc.

I've been doing the same thing in my basement for the last two years using a turkey baster powered by a 3-pound sledge. I'm surprised they've taken such a high-tech approach.

Dear John,

The answer to your question is that when people talk about "not enough photons coming in," what they really mean is not enough light coming in. They aren't really talking about photons or counting statistics; they're just talking about the amount of light the sensor intercepts, combined with its efficiency at using that light. In other words, it's a correct but not scientifically precise statement.

We are quite a ways from running into fundamental photon counting statistical limits. That's why you can still see successive generations of cameras that have higher ISOs and/or smaller pixels and have no worse noise nor poorer color than their predecessors.
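For the curious, the fundamental limit in question is Poisson counting noise: collect N photons and the best possible signal-to-noise ratio is the square root of N, no matter how good the sensor. A tiny illustration:

```python
import math

# Photon shot noise: even a perfect sensor sees sqrt(N) noise on N photons.
def shot_noise_snr(photons):
    return math.sqrt(photons)

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} photons -> best-case SNR {shot_noise_snr(n):.0f}")
```

Real pixels at real exposures still collect enough photons that engineering noise, not counting statistics, dominates, which is why each sensor generation can keep improving.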


Dear Jerry,

One possible solution is that the bias voltage on the detector would be dropped to reduce the avalanche effect. Most of the increase in sensitivity that's being reported is due to internal amplification of the photo signal. It's not much different from a conventional camera, where the way you get higher ISOs is to turn up the gain on the amplifier. In this case the amplification is internal to the sensor, but the principle is the same.

Internal amplification can have some substantial advantages over ordinary external amplifiers. Aside from better noise characteristics, there already exist scientific research chips that have on-chip gain that is adjustable on a pixel by pixel basis. The pixels that receive a great deal of light reduce their sensitivity automatically. Experimental sensors of this design exhibit considerably more than a 20 stop exposure range.
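For scale, here's what a 20-stop range works out to; the 20-stop figure is from the research chips mentioned above, and the rest is just unit conversion:

```python
import math

# Dynamic range conversions: stops -> contrast ratio -> decibels.
def stops_to_ratio(stops):
    return 2.0 ** stops

def ratio_to_db(ratio):
    return 20.0 * math.log10(ratio)

r = stops_to_ratio(20)
print(f"20 stops = {r:,.0f}:1 = {ratio_to_db(r):.0f} dB")  # 1,048,576:1 = 120 dB
```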


~ pax \ Ctein

So — save your money and wait for this stuff to reach the market. When will that be?

Well, since I used to work with the things and I need to do something with my useless knowledge, here's a little more trivia about the femtosecond (fs) pulses mentioned; perhaps this will give a little more physical insight into how short these pulses are... As many of you know, light travels about 300 million meters in one second (in a vacuum); that's around 186,000 miles for the Imperial unit crowd. To put that in more understandable terms, that's about 7.5 times around the earth at the equator.

In 0.5 seconds, light travels, well, half that far, or 3.75 times around the earth. We can keep dividing up that second and looking at how far light travels in each of those fractions-of-a-second; we'll do this until we've divided the second into chunks 100 fs long. How far does light travel in 100 fs? About 30 microns, or roughly the width of a hair.

Put another way: to get an idea of how many 100 fs divisions there are in one second, head to the equator. Once you're there, pull a hair out of your head and lay it on the ground with the width running east-west. Now pull another one out and lay it on the west side of the first. Pull another one out and lay it just to the west of the second. Continue this procedure until you've circled the globe 7.5 times... At this point you will be very, very bald, but the number of hairs you've laid down is equal to the number of 100-fs chunks of time there are in one second.

Now, didn't that help clear things up? :)
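The numbers above check out in a few lines (using the exact value of c rather than the rounded 300 million m/s):

```python
c = 299_792_458              # speed of light in vacuum, m/s (exact)

print(c * 1.0)               # meters traveled in one second, ~3.0e8
print(c * 100e-15 * 1e6)     # microns traveled in 100 fs, ~30 (a hair's width)
print(1 / 100e-15)           # number of 100-fs chunks in one second, ~1e13 hairs
```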

--Jerry: if I recall correctly, the two-well approach you mention is basically how Fuji increases the dynamic range of their sensors.

Dark current a bigger problem in black silicon?

Derek -

That's all very well to say, but if we were to try that it would be really hard to do; the hairs would doubtless get blown all over the place by wind. Also, I think the bodies of water lying along the equator would pose insurmountable obstacles to your ill-thought-out experiment.

Robert

I liked the "very bald" part. Maybe you could cut the hairs into extremely short slices and make them last longer that way....

Femtosecond: length of time your 15-year-old remembers what you just asked him to do.

Mike J.

I thought a femtosecond was a sacrifice bunt in softball?

;-)
Adam

Dear John,

I'm going to go out on a limb and *guess* the dark current might even be less of a problem. Unless the material has an inherently higher dark current, which is something I don't know.

The reason is the internal amplification. The avalanche effect works kind of like a photomultiplier tube; there is an increasing cascade of electrons that is distributed in space. So if a thermal electron appears right at the top of the silicon, where a photoelectron would appear, it gets amplified the same as the photoelectron would. But if it appears deeper in the bulk material, it's further down the cascade and it doesn't get amplified as much. So, on average, thermally generated electrons don't get amplified as much as light-generated ones.

That's my theory, anyway, and I'm sticking to it [ ignorant grin ].


~ pax \ Ctein


The comments to this entry are closed.