Written by Ctein, TOP Technical Editor
Happy holidays to one and all! As long-time readers know, I'm inclined to write an X-mas column with the emphasis on the “X.” That is, the essential but unknown part of the equation. The stuff we don't know is as interesting as the stuff we do.
(I just like this photo. A whole lot.)
This year, I'm going in a slightly different direction. The answers are fundamentally unknown, because they are speculations about the future. I can make well-educated guesses. My crystal ball is, though, unreliable.
The inspiration for this column is a thread I read not too long back on another site I frequent, which was discussing whether the new 20-megapixel Micro 4/3 cameras were a waste of time. I'm not going to tell you which site or what thread, because within a page it had degenerated into the unfortunately common ignorant blather that pollutes such discussions. I was mentally composing a rebuttal when I realized it would make a nice X-mas column. I will say one thing about that other thread: I think it should be against the law for people to invoke “laws of physics” unless they, in fact, hold degrees in physics! You can be pretty sure that anyone who invokes them without one is going to be wrong.
All right, then. How far might image quality be pushed, without resorting to nonstandard technologies? Let's just stick with the usual Bayer array cameras for the moment and get to the more exotic later. I'll be talking about Micro 4/3 because that's what I currently use and care about. We could have this conversation about any format, and a lot of what I say will apply across formats.
One last thing before I get started: in the interests of holiday harmony, please suppress the impulse to write comments in the vein of “I don't need X; I don't see why anyone would need X.” Take my word for it that someone out there needs X, even if it's not you. In the interests of peace on Earth, goodwill towards bloggers, let's not go there, okay? Thank you!
Pixel count seems to be where everyone goes first. How much can that get pushed before we really do run up against physical limits, where we're sacrificing quality for quantity? My best estimate, at the moment, is that Micro 4/3 sensors can pick up another two stops in sensitivity before they start to approach real physical limits. Maybe three, but I'm not sure of that.
In practice, that means you could ultimately quadruple the pixel count of the OM-D cameras to 64 megapixels with no loss of sensitivity or increase in noise compared to today's 16-megapixel models (actually, the perceived noise would be a little better, because it would be finer). Another way to parse it would be that if you keep the pixel count where it is currently, you can expect to see half the noise at any given ISO, or a 4X speed increase for the same noise level.
Or you could split the difference, at 32 megapixels and twice the ISO, with modestly lower noise.
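If you want to see the arithmetic behind those three options, here's a quick sketch. The photon counts are made up for illustration; the only physics assumed is that shot noise dominates, so signal-to-noise goes as the square root of the photons collected.

```python
import math

def relative_noise(photons):
    # Shot noise: SNR = sqrt(N), so relative noise = 1 / sqrt(N)
    return 1.0 / math.sqrt(photons)

baseline = 10_000  # hypothetical photons per pixel on today's 16 MP sensor
gain = 4           # two stops of added sensitivity = 4x the photons captured

# Option 1: keep 16 MP; 4x the signal per pixel means half the noise
print(relative_noise(baseline * gain) / relative_noise(baseline))      # 0.5

# Option 2: go to 64 MP; each pixel has 1/4 the area, which exactly
# cancels the sensitivity gain, so per-pixel noise is unchanged
# (though the perceived noise is a little better, being finer)
print(relative_noise(baseline * gain / 4) / relative_noise(baseline))  # 1.0

# Option 3: split the difference at 32 MP and twice the ISO; the factor
# of 4 / (2 x 2) = 1 leaves per-pixel noise the same, with the modest
# perceptual win of finer grain
print(relative_noise(baseline * gain / (2 * 2)) / relative_noise(baseline))  # 1.0
```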
We've got a way to go in terms of pixel counts before we are forced into unpleasant tradeoffs. Understand, the engineering and technology have to advance to allow this, but there's nothing in the laws of physics that prevents it. It's just steady progress.
Well then, how about exposure range? Currently, we are running around 12 stops and change. That's pretty good! Exposure range isn't directly a function of pixel count, but there are some connections. Again, my best guess would be that along with upping the pixel count, you can probably squeeze another two stops of exposure range out of the conventional technologies. Call it 14 stops.
The bit depth? That's really a function of the readout electronics. We are nowhere near close to theoretical limits. It's what's cost-effective to make. You want true, clean 14-bit output? You'll get it. Sixteen bit? Hang in there.
Okay, so pretty damned impressive sensors are in our future. Can we really make use of them? Are lenses good enough? Oh yeah, really, they are! Too long to discuss here, but you can read this column and the links back from there and learn more about this subject than you ever really wanted to know.
I can tell you for a fact that the 45mm and 75mm Olympus lenses will hold up just fine, corner-to-corner, when a 32-megapixel camera comes along. I can't swear to it for a 64-megapixel camera; I can't reliably extrapolate my existing test data that far. It wouldn't surprise me. In any case, we are future-proofed for at least a factor of two improvement in resolution, and better lenses come down the pike every day.
Of course, sane people don't demand corner-to-corner pixel-level sharpness as a criterion for a useful camera system. I'm just saying that even if you do, I've got two proofs of existence in my kit, and I know there are others out there.
To infinity...and beyond!
What happens if we don't restrict ourselves to the current conventional camera technology? In that case, there are another two stops of sensitivity and efficiency waiting for you before we hit physical limits. Those Bayer arrays we use today are only about 25% efficient, at best. Those colored filters toss out three-fourths of the light before it even gets to the sensor. At some point we're going to have commercially feasible detectors that are truly panchromatic: they'll detect photons of any color and measure their energy as they detect them. That stuff exists in the lab; it's not yet practical for commercial sensors. (No, Foveon sensors are not the answer. They're inefficient, and the other technical problems that they have aren't going to be solved with the level of industrial investment and interest out there. They'll go on as a niche product but, basically, they're the Wankel rotary engines in the world of digital cameras.)
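That "two more stops" figure, by the way, is nothing more than the logarithm of the efficiency gap. Here's the one-line arithmetic, using only the numbers above (an idealized 100%-efficient panchromatic detector versus the 25%-efficient Bayer array):

```python
import math

bayer_efficiency = 0.25        # filters discard roughly three-fourths of the light
panchromatic_efficiency = 1.0  # idealized detector that counts every photon

print(math.log2(panchromatic_efficiency / bayer_efficiency), "stops")  # 2.0 stops
```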
As for exposure range, the sky is the limit. There already exist logarithmic-response sensors in industrial cameras that have over a 20-stop exposure range. There's a whole bunch of reasons why they're not likely to appear in conventional cameras anytime soon, not the least of which is that you need a huge bit depth in the readout and conversion electronics to make them useful for pictorial photography. That's not a physics limit; it's just my opinion that this isn't the development path we'll see. But, we might!
Some clever folks at MIT have demonstrated a different approach. They've built a system where, when a pixel gets saturated with light, it dumps its charge and starts filling up again. Meanwhile, a counter records how many times the pixel dumps its charge during the course of the single exposure. Hence, instead of each pixel being limited to a charge range of 0.0 to 0.9999, it can, in effect, catch a range from 0.0 to N.9999, where N can be a fairly large number. Three or four stops of additional range is no problem.
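Here's a toy model of that scheme, so you can see why the extra range comes cheaply. The full-well capacity and counter widths are numbers I made up for illustration; the real MIT circuit is an analog design and considerably cleverer.

```python
import math

FULL_WELL = 10_000  # hypothetical full-well capacity, in photoelectrons

def expose(incident_electrons):
    # Each time the well fills, it dumps its charge and the counter ticks up.
    resets = incident_electrons // FULL_WELL
    residual = incident_electrons % FULL_WELL
    return resets, residual

def reconstruct(resets, residual):
    # Effective signal = N full wells plus the leftover charge: "N.9999"
    return resets * FULL_WELL + residual

resets, residual = expose(123_456)  # a highlight 12x over saturation
print(resets, residual, reconstruct(resets, residual))  # 12 3456 123456

# Added highlight headroom for a counter of a given width:
for counter_bits in (2, 4):
    max_resets = 2 ** counter_bits - 1          # the "N" in 0.0 to N.9999
    print(counter_bits, "bits:", math.log2(max_resets + 1), "extra stops")
```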
Further down the pike, it's going to get a lot more interesting. The past year has seen the development of electronics that can operate at optical frequencies. Currently they work in the infrared, but I have no doubt they'll push into the visible within a year or so. That's a monstrous game-changer; it's like going from crystal radio sets to high-frequency vacuum tubes that let you build REAL radio electronics. The same thing's going to happen in the optical part of the electromagnetic spectrum. It'll be unimaginable.
That's your really, really big X for the holidays.
Now, go do whatever it is you do on December 25 and have fun doing it! That is an order!
UPDATE 12:26 p.m., December 25, 2015: I just gotta share this! My Xmas gift from my Other Significant Other, Laura.
Paula and I both agree this wins the Gift Of The Day (and there were some strong contenders).
Ctein
©2015 by Ctein, all rights reserved
Featured Comments from:
John Camp: "One thing that Ctein somewhat brushed past is the issue of 'human use.' Possibly the most revolutionary and fast-developing device of our time is the computer—but in terms of mass/commercial/consumer computing, there really hasn't been much in the way of advancement in the past five years, because the very large majority of people don't need and won't use those advancements. In fact, we seem to be going somewhat backwards, with the widespread adoption of less capable machines that happen to have the added capability of making phone calls and taking low-res photos. There has been some advancement in peripherals, like displays and linking cables (Thunderbolt) and flash drive speed, but those improvements really couldn't be classified as essential for most people. For 99% of users (the number pulled from between my buttocks), a 2010 screen provides about as much function as a 2015 screen. The potential for much faster products is there, but 99% of computer users don't need it, so they won't buy it unless the cost is low enough and they're replacing a computer that no longer serves their purposes.
"I think the same is true of cameras and printers—the high-end cameras now serve almost all purposes that cameras need to serve for 99% even of high-end users. Sure, there'll be 1% who want that OM 64-MP sensor...but it's the 99% who'll have to finance that research and development, and I have some reason to think that they won't be interested in doing that. In other words, the stuff we have now is good enough for 99% of photo purposes, and I think that 99% will increase to 99.1, 99.2 and so on because display is shifting more and more to digital, and useful digital displays don't require extreme resolution. Will we get the 64-MP OM? Maybe, but if so, I think it'll be a side effect of other advances, with the other advances (whatever they are) providing the financing for the spinoff."
Ctein replies: John, I think you're mostly correct, but...the effect of horsepower races and prestige and bragging rights on product development is not ignorable. I thought digital printer improvements would pretty much grind to a halt when they equaled (and surpassed) chromogenic prints by pretty much every meaningful measure, seeing as chromogenics satisfied 99.999% (conservatively) of photographic needs. That happened a good decade ago. There's no real consumer-driven need for printers to have gotten better (folks like me and Charlie Cramer are an insignificant market share). That hasn't stopped steady improvements—it's a horsepower race. Similarly, the market for 120 format cameras and films collapsed before the turn of the century. Film and camera makers continued to churn out new medium-format product because it was considered the prestige market.
Wayne Fox: "I enjoyed the article.
"Regarding the idea of most people not needing it (I assume we are discussing buyers of photo gear which means they are after more than their very capable smart phone can deliver), I have a slightly different perspective. To me it’s about most captures not needing it—but then some do. As one who produces prints for photographers, at least 20–30 large prints or canvas wraps a month leave my store which suffer from serious quality issues because someone wants a bigger print than the detail in the image can pleasantly render. Many small prints also suffer because they have been cropped and not enough data is left to render the image at the desired size.
"So to me it's not 'if' we need need, it, but 'when' we need it. When I'm out shooting, most of the things I shoot could have been easy handled by a less capable device, but then a few times a year I get an image that looks best printed large (which happens to be what 'floats my boat' so to speak). I've had discussions with many customers who say they are completely happy with their current gear, because they never print larger than 13x19, then they show up and want a 40x60. We never know when we are going to capture that special image or need to do some serious cropping. Ironically this means our gear is good enough for most of what we shoot, but perhaps isn’t good enough for our best work."
"As far as the computer analogy, I agree for most needs were met by computers a few generations ago. But I’m not sure we are using less capable computing devices, it's more about a new class of device supplying what we need in a more convenient form factor. A current iPad Pro has more computing and graphics power than most computers of just a few years ago, so the device is still continuing to get more powerful, but the form factor has changed the game. To compare that to cameras, perhaps the 99% number regarding photography works if you consider everyone who takes pictures (which is most of the human population).
"But just as computers can be broken down based on needs, there are niche markets of people who need more computing power. And fortunately there are enough photographers who want more than a cell phone can deliver so we have a niche market size large enough to appeal to several companies to supply those needs. Once we narrow our field to this group I don’t think the 99% number is accurate (again because I think at that point we are talking about captures which need it, not people).
"I don't know whether a game changing new form factor is in store for photographers (wouldn't surprise me) but as the equipment becomes more capable through progression as outlined by Ctein, many who don't think they need it will find on occasion they wish they had it. The good news is if you don’t want or can’t afford to be on the bleeding edge in gear and don’t rush out and buy the latest and greatest, at least there are enough photographers who do to keep driving the technology forward. This means eventually almost everyone will find occasion to be using better gear than they have today, and will find on some occasions, some captures benefit."
So does this mean there won't be a third edition of Digital Restoration from Start to Finish?
Too bad, because I use my copy a lot, but a lot of new stuff has happened since it was printed.
[Ctein replies: Mark- no! I am hard at work on the 3rd edition. I've got a bunch of cool new tricks. I just won't be selling copies of the book, myself.
In truth, the manuscript was due two months ago (sigh). My editor is being very understanding. Publication will be sometime next year. It is unlikely there will ever be a 4th edition.]
Posted by: MarkR | Friday, 25 December 2015 at 11:13 AM
I'm looking forward to reading about what your eyes (and tests) say about the new Canon printer compared to the Epson P800 for bringing some of these technical camera and lens improvements to life.
And beyond that, do you have any thoughts on the direction printers (and print materials) are likely to go (especially given the proliferation of online viewing at perhaps the expense of printing)?
[Ctein replies: Jeff- at this point I have no plans to test the Canon printer. Print quality will improve slowly and incrementally in the future because all the low-hanging technological fruit has been picked. There are no fundamental limits to the sharpness, color gamut or density range, just a lot of hard work to make it better. There is definitely room for perceptual improvement, though.
I haven't looked into the demand for printing lately. As of two years ago, the demand for photographic prints (using the term inclusively) of all types was still steadily growing. Which came as a big surprise to me. It may have finally leveled off.]
Posted by: Jeff | Friday, 25 December 2015 at 11:34 AM
Two things strike me as odd -
First, I thought that lately the "X" in "X-Mas" stood for Fuji sensors.
Second, I loved my Wankel rotary engines! :)
Good column, very informative - thank you!
[Ctein replies: Earl- "X-TransMas?" OK, sure, why not! Folks love the Foveon sensors, too. Unfortunately, the collective industry does not share the affection.]
Posted by: Earl Dunbar | Friday, 25 December 2015 at 12:52 PM
Great column debunking the faux physicist trolls.
Recent Olympus product developments also suggest an additional parallel approach that sidesteps current engineering and cost limitations.
The E-M1's new in-camera focus-stacking mode and the E-M5 II's high-resolution sensor-shift mode both indicate that hand-held deep-focus and high-resolution modes are imminent, thanks to faster sensor readout speeds and more sophisticated in-camera algorithms.
Posted by: Joe Kashi | Friday, 25 December 2015 at 03:59 PM
Ctein, I wonder if you have seen Bill Claff's dynamic range charts, based on DxO data and his own file measurements. He includes "ideal" performance for each sensor size (which I'm guessing does not account for any major technology change).
http://www.photonstophotos.net/Charts/PDR.htm
[Ctein replies: John- That data is wildly at odds with any measurements I've made, in which I am quite confident, as well as those of sites I have reason to trust technically (like DxO). His values for exposure range are way low. Now, it's easy to get erroneously low values for exposure range; it's very hard to get erroneously high ones. So, I believe me. I don't know what his assumptions are, nor what his methodology is. I don't really care; it's not my problem to figure out why his results are wrong. ]
Posted by: John Krumm | Friday, 25 December 2015 at 04:01 PM
I got a copy of Saturn Run for Christmas. That picture of the mug isn't a plot spoiler, is it? : /
[Ctein replies: Roger- Oh, totally. That ruins the book. Don't waste your time. Get the receipt and exchange it for something worth your while, like a nice Lee Child novel.]
Posted by: Roger Bradbury | Friday, 25 December 2015 at 06:13 PM
It's my impression (based on far from rigorous noodling around) that even some of the better old Takumars from the early '60s seem to out-resolve the current m4/3 sensors, at least in the centre of the frame, so I'm quite happy to trust Ctein's argument.
Posted by: Nigel | Friday, 25 December 2015 at 08:10 PM
My real X is: Exactly how many significant others do you have, Ctein???
[Ctein replies: Miserere- Exactly?! That would depend on how significant the significance would have to be. There's kind of a descending scale, there. Where does one draw the line?]
Posted by: Miserere | Friday, 25 December 2015 at 09:30 PM
"in terms of mass/commercial/consumer computing, there really hasn't been much in the way of advancement in the past five years, because the very large majority of people don't need and won't use those advancements. In fact, we seem to be going somewhat backwards..."
If a smartphone is a personal computer, then people are still demanding, and buying, faster computers. In the last five years, there has been a big advance in processor speeds and screen densities in the computers that most people buy and use.
I agree with the larger point though: "Good enough" has arrived for the majority of people using desktop and laptop computers. I think it may be only a few years off in smartphones. I think phone and tablet processor speeds could level off at around the same speeds that legacy computers are at now.
I think it is worth contemplating when and how "good enough" will happen with digital cameras for the majority of users. When that happens, will some manufacturers offer "halo" camera equipment like the LaFerrari or Porsche 918? Maybe so.
Posted by: Bruce McL | Friday, 25 December 2015 at 11:05 PM
Computers are far from "good enough". However, there have been major shifts in how people measure "goodness". Compared to a three-year-old laptop, my phone runs a general-purpose benchmark about as quickly, but has more radios (cellular and NFC in addition to Bluetooth and wi-fi), many more sensors (GPS, accelerometer, compass, fingerprint sensor, a rear-facing camera), and it's much more power efficient.
To say that computers aren't advancing, you have to define either "computer" or "advancement" excessively narrowly.
Posted by: Ben Rosengart | Saturday, 26 December 2015 at 12:09 AM
The paragraph that starts with "Further down the pike...": I have no idea what it means. A detailed explanation would be helpful.
[Ctein replies: Dauga- I imagine it would, but a detailed explanation would run many, many pages. Instead, let me make another broad comparison and that will have to suffice. Currently, electronics by and large can't operate at optical frequencies. It's limited to clever versions of “Oh, look, there's a bunch of photons. Oh, look, there isn't.” It's like the optical equivalent of Morse code—dots and dashes. Compare Morse code to what you can do with real radio communications. That's what you can do with light when you have electronics that can work at optical frequencies. It's going to be an incredible game changer, and we are almost there.]
Posted by: daugav369pils | Saturday, 26 December 2015 at 12:17 AM
I'm currently recovering from shock at the performance levels of my Xmas present iPad Pro. That thing has a multi-core (2) Geekbench 3 processor/memory rating of 5,498, which roughly converts to 71,474 VAX-11/780 million instructions per second (1978 MIPS). It drives a 2732 by 2048 pixel display of 264 pixels per inch, achieving frame rates of 80+ per second. It weighs 1.57 pounds and is a smidge over 1/4 inch thick. An introduced-in-2011 iPad 2 has a Geekbench 3 rating of 263 and a 1024 x 768 pixel display of 132 pixels per inch. In the last 4 years that's a factor of 20 increase in processor performance and a factor of 7 increase in display resolution.
Closer to home, a late 2015 iMac 5K (5120 x 2880 display resolution) appropriately configured for Photoshop at around $4K will easily outperform a 2013 Mac Pro at around $8K without display. All the better to support 83MB raw files from a Sony A7RII.
Believe me, mass consumer/commercial computing is getting a lot faster. Huge gains are also being made in overall as well as wireless and home delivery of internet bandwidth and latency, and server cloud-based scalability of software as a service. Is a 2015 iPhone 6s faster and more capable than a 2010 iPhone 4? (That's rhetorical.) The vast majority of people do indeed use these advancements, and just accept them.
Getting a new computer is a bit like having cataract surgery - you just don't notice how much you are accommodating lens flare and yellow shift until you get a new lens. Similarly, when reboot times drop from minutes to seconds because of solid state memory, and backups happen automatically overnight - then you notice how much you were accommodating that old machine.
Posted by: Don Craig | Saturday, 26 December 2015 at 01:56 AM
Being mostly concerned with printing up to 12x18" to a "high enough" standard, I am very happy with my clutch of ageing but capable EM5 mk1's.
I just left (today) my long-term job in a bricks-and-mortar camera shop, and one thing I won't miss is the constant myth-busting required to stop people buying huge, over-pixelled SLR cameras with a steep learning curve just to take snaps of their kids.
Glad we have more to come; I'm happy with now.
(Last sale on my last day was a D750 to a couple who, knowing nothing, googled "best SLR" and came up with that, but at least they went out with a 50 f1.8 and not a monster zoom!).
Great and timely article, as usual.
Posted by: Rod Thompson | Saturday, 26 December 2015 at 03:19 AM
Those colored filters toss out three-fourths of the light before it even gets to the sensor.
Nothing to do with the subject, but why do Americans have a 25 cent coin called a quarter but refer to 3/4 as three fourths? We would say three quarters.
Just another one of life's irrelevant mysteries which I think about!
[Ctein replies: Steve- Americans say it both ways: "three-fourths" and "three-quarters" are synonyms. I don't know if there are local regional or cultural preferences for one or the other, but I hear both all the time. I don't use one consistently over the other; it's whatever pops out of my brain at the moment of utterance.]
Posted by: Steve Smith | Saturday, 26 December 2015 at 05:27 AM
The Fuji/Panasonic organic sensor is claiming very high DR, despite the colour filter. This seems to be the result of a much higher saturation capacity, about 4X that of a conventional photodiode (and presumably very low read noise).
A typical 16MP 4/3" photodiode would have a capacity of around 25,000 photoelectrons and a theoretical DR of around 14.6EV, so 100,000 would be two stops more or 16.6EV, which sounds reasonable (ignoring read noise).
This would be in line with the best current FF sensors. Using the same tech would push them up to around 18.6EV.
A 20bit ADC seems a bit of a reach, but assuming there is some read noise, 16bits would still be a major improvement.
4X the capacity would also indicate roughly double the SNR and double the tonal range (using the DxO definition of tonal range).
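For anyone who wants to check those numbers, the idealization being used is simply DR = log2(saturation capacity), with read noise ignored as noted; the capacity figure is the rough estimate from above, not a measured value:

```python
import math

capacity = 25_000                    # typical 16 MP 4/3" photodiode, per above
print(math.log2(capacity))           # ~14.6 EV
print(math.log2(capacity * 4))       # ~16.6 EV with 4x the saturation capacity

# Shot-noise-limited SNR at saturation scales as sqrt(capacity),
# so 4x the capacity also means roughly 2x the SNR:
print(math.sqrt(capacity * 4) / math.sqrt(capacity))  # 2.0
```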
Expected production in 2-3 years.
(Note, rumours quoting 120dB DR from Fujirumours are not so far substantiated, and the 88dB DR quoted on the link does not specify the sensor size).
[Ctein replies: Steve- forgive me, but I am going to use you as the trigger for a rant. FERGODSAKES, PEOPLE, DON'T SAY “DYNAMIC RANGE” IF YOU'RE TALKING ABOUT EXPOSURE RANGE! This is a perfect example of why. You are sometimes talking correctly about sensor dynamic range and other times not. Dynamic range does not convert to exposure range. I wrote two columns explaining why, which people should reread, including the comments, before they want to argue with me (and I will be able to tell if you try to fake it, trust me).
An increased sensor dynamic range may or may not result in an increased exposure range. Example: as you know (but most readers won't), most sensor noise is a bulk phenomenon, so increasing the depth of the pixel well to accommodate more photoelectrons, without doing anything to improve the noise characteristics, means that the noise floor will increase (almost) in proportion to the dynamic range. There will be little or no improved exposure range. In practice, there would be little point in engaging in such a redesign, except as an intermediate step. I'm just providing it as a really simple example. Really, folks, go read the previous two columns.
In the meantime, everyone, JUST STOP IT!
Whew. I feel better.
You mentioned the readout issues, which are indeed significant for very-long-exposure-range systems. As I've explained before, exposure range and bit depth are independent metrics. Exposure range is like the total height of the staircase, and bit depth the number of steps in that staircase. If you don't have enough bit depth, the individual steps become inconveniently or even unworkably high. You'll wind up with problems similar to what happens when you work in a very large color space, like Wide Gamut or ProPhoto RGB, when you've only got eight bits per color channel: visible quantization. Also, adding more bits to your readout electronics doesn't help unless they're clean bits. A 16-bit readout with the four lowest-order bits being noisy isn't better than a clean 14-bit readout.
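If you want to see the staircase in numbers, here's a sketch with a made-up 14-stop scene and a simple linear encoding; it shows why more bits add steps but never height:

```python
exposure_range = 14  # stops: the total height of the staircase

for bits in (8, 12, 14, 16):
    # With linear encoding, the darkest of the 14 stops gets only
    # 2**(bits - exposure_range) code values to itself.
    codes_in_darkest_stop = 2 ** (bits - exposure_range)
    print(bits, "bits:", codes_in_darkest_stop, "code value(s) in the deepest stop")

# 8 bits:  0.015625 -> hopelessly posterized shadows
# 14 bits: 1        -> barely enough
# 16 bits: 4        -> some working room, but only if the low-order bits are clean
```
]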
Posted by: Steve Jacob | Saturday, 26 December 2015 at 10:22 AM
"No, Foveon sensors are not the answer. They're inefficient, and the other technical problems . . . they're the Wankel rotary engines in the world of digital cameras."
I remember naively looking under a Mazda hood and being disappointed there was no Wankel engine there. Here's why:
http://www.autoblog.com/2013/11/19/mazda-ceo-says-no-more-rotary-wankel-engines/
"While the Wankel rotary engine does indeed make a lot of power in a small, lightweight package, it does so while burning lots of fuel and emitting lots of noxious gases into the atmosphere, at least when running on gasoline . . . it is possible that we'll see a rotary return on an alternate fuel sometime in the not-too-near future."
But the advantages have to be so great that the market will be there. Research continues.
Posted by: wts | Saturday, 26 December 2015 at 03:13 PM
Ctein, it's a bit late to suggest that now, I'm three quarters/three fourths of the way through the book! I've already got a nice Lee Child novel this Xmas from the same person who gave me a copy of Saturn Run. : ]
Posted by: Roger Bradbury | Saturday, 26 December 2015 at 03:44 PM
"I can tell you for a fact that the 45mm and 75mm Olympus lenses will hold up just fine, corner-to-corner, when a 32-megapixel camera comes along."
It's already come along, with the High Res Mode of the E-M5 II. While not ready for prime time with subjects that aren't static, it certainly delivers higher than 32 MP resolution, and is useful for many subjects.
From tests, it's clear that the 64 MP raw files aren't quite the overall equivalent in actual resolution of detail to the Pentax 645Z's 50 MP Bayer-array sensor. It does nicely out-resolve the 645Z on small, fine, repetitive detail.
OTOH, the HR Mode is far superior to the 36 MP Nikon D800 for small and/or repetitive detail, as the D800 has terrible moire effects, and the Oly in HR Mode does not.
The test shots on Imaging Resource appear to have been shot with the older, 4/3 50/2 macro lens. While it seems up to the job, we can't really know if a sharper lens might make the sensor look a little better.
While I imagine that at least the 45/1.8, 75/1.8 and 60/2.8 Macro are fully ready for at least 32 MP, how may one be sure? I have all three lenses and the E-M5 II, but wouldn't know what to compare them to.
Posted by: Moose | Saturday, 26 December 2015 at 05:37 PM
Well put. I feel that a higher-resolution Micro 4/3 sensor is inevitable; the technology has been maturing, and the advantages of higher resolutions are obvious even if the final image will be downsampled.
However, I've been bothered for a while that the discussion about cameras is still largely about resolution, which I feel is increasingly reaching a point of sufficiency for the vast majority of users. I would like to see more focus on the ability to reproduce color and capture demanding exposure ranges; when using a small-sensor camera, the limitations in those areas are the ones that bother me most.
Posted by: Oskar Ojala | Saturday, 26 December 2015 at 05:59 PM
I'm not sure if daugav369pils wanted further explanation of the paragraph or the idiom. "Further down the pike" can mean 'further down the road' - pike being short for turnpike (a toll road). Merry Christmas!
Posted by: Mike G | Saturday, 26 December 2015 at 07:23 PM
Hey! I shoot with Foveon sensors and drive a Mazda RX-8. What are you saying? A happy X to you.
[Ctein replies: Michael - That you are a member of an extraordinarily select group. ]
Posted by: Michael Bearman | Saturday, 26 December 2015 at 09:08 PM
Dear Ctein (happy to continue offline if needed, because this subject is very interesting...)
FERGODSAKES, PEOPLE, DON'T SAY “DYNAMIC RANGE” IF YOU'RE TALKING ABOUT EXPOSURE RANGE!
I'm not sure where I did.
I accept that exposure range is not the same thing as DR for all the reasons you explained, but what you describe seems to relate to tonal range as much as dynamic range (both as defined by DxO), i.e., the ability to detect tonal changes at signal levels less than the noise level, given more than one pixel to look at. It's an interesting perspective, and I would like to know how it relates to DxO's definition of tonal range at much higher signal and noise levels.
For my part I was just trying to gauge roughly what the potential improvement could be in terms of engineering DR, SNR etc. since this is what Fuji (and DxO et al.) keep talking about, right or wrong.
20EV capacity with a noise floor of 12 electrons (read and thermal noise) yields around 16.4 EV of DR, so a 16 bit ADC would cause very little quantisation. Of course, 20 e of downstream noise may make the improvement even less noticeable, around 15.4 EV, but this is still higher than the results obtained for a Nikon D4 (for instance).
The other issue is tonal range (defined by DxO as the number of noise-limited tonal gradations between noise floor and saturation). This is mainly shot/PRNU noise limited and likely to be much less than 16 bits. Probably slightly less than 10 bits, although that is still usefully more than the 8 bits or so quoted for a D4 (for instance), given the prevalence of 16-bit editors and high-bit-depth printers (even if you may still see issues on an 8-bit display).
However, you have made me wonder if this definition of tonal range is also a nonsense, in the sense that if you can distinguish tones at separations less than noise, then tonal range is also less useful as a measurement.
Nevertheless, as a yardstick, I would assume that improved DR and tonal range would yield a likely improvement in exposure range.
Please feel free to set me straight (by email) if I made an incorrect implication or if this assumption is wrong.
PS. I have read Emil Martinec's excellent paper many times and it is indeed very useful. The maths is happily not too strenuous.
[Ctein replies: Steve- I think we can leave this be, because we're in substantial technical agreement. One minor note and one correction. The minor note– when you talk about dynamic range in "EV," you are conflating dynamic range and exposure range even if you don't mean to be, because EV is a measure of exposure. It perpetuates the confusion.
The minor correction– if "tonal range," as used by DxO, is as you say (I haven't looked into it), then it's another way of measuring how many stairs there are in the staircase; whereas exposure range is about the total height of the staircase. So, they'd be different things.
Fun, as always, Steve!]
Posted by: Steve Jacob | Saturday, 26 December 2015 at 10:27 PM
The minor note– when you talk about dynamic range in "EV," you are conflating dynamic range and exposure range even if you don't mean to be, because EV is a measure of exposure. It perpetuates the confusion.
OK, I can give you that one ;-)
Unfortunately this has been more or less fixed in people's minds by DxOMark. Nevertheless, it's useful in the sense that DR in EV can never exceed the bit-depth, both being Log2(signal).
Thanks for the reply, and still interested in your assessment of DxO 'tonal range' if you ever take a look at it at some point.
Posted by: Steve Jacob | Sunday, 27 December 2015 at 01:13 AM
"The test shots on Imaging Resource appear to have been shot with the older, 4/3 50/2 macro lens. While it seems up to the job, we can't really know if a sharper lens might make the sensor look a little better."
Ahem - there is no sharper lens.
Posted by: Adrian | Sunday, 27 December 2015 at 04:57 AM
Any digital camera developed after 2011 is capable of outperforming its chemical counterpart 2 to 1: in resolution, speed, editability (is that a word?), ISO and what not. A humble 550D with some good glass can do tricks these days for which you would have been burned for witchcraft back in the '80s. I predict (with a trusted palantir at my side) that in the future cameras will be seen more and more as an input device. (Yeah, I updated the BIOS of my brand new motherboard; end of info and much rejoicing.)
My cameras are used for:
1) Creating panoramas of gigapixel dimensions using Kolor AutoPano Pro.
2) Creating 3D scans of anything ranging from a 4 cm matchbox car from the '60s up to the facade of Xanten cathedral, using Agisoft.
3) Being integrated into my DAVID SLS scanner.
4) Creating HDR images and panoramas using Photomatix and Magic Lantern (and a 1965 Gitzo tripod).
And about the 20 Mpixel sensor...why not? We don't have to shoot at ISO 3200 when shooting landscapes!
Greets, Ed.
Posted by: Ed | Sunday, 27 December 2015 at 01:44 PM
A comment on John Camp's comment...
The big advancement in "mass/commercial/consumer computing" in the last 5 years has been in power consumption. The major manufacturers are responding to the consumer. The mass consumer doesn't want more computing power (currently) and instead wants a smaller form factor (requiring lower power consumption). Many of these tablets have more computing power than desktop computers of 10 years ago while drawing a very small fraction of the power (single Watts versus hundreds of Watts).
Another funny note...
One of the more interesting things about the consumer market (although not the "mass" consumer market) has been the quest to satisfy "gamers". Gamers are hard-core computer consumers who demand very high computer graphics rendering performance. The insanity of these people's demands has driven the performance of graphics cards into the stratosphere. Now, the research community is using high-end consumer graphics cards to study neural networks (a class of programming processes that has allowed for the rapid improvement in voice recognition, image identification, the Google car, etc.). Interestingly, Adobe has taken note of the improved power of graphics cards as well and, if available, Photoshop will use the graphics card to perform some of its image manipulation processes.
It turns out that graphics cards fit into the "lower power consumption" paradigm. High-end graphics cards typically have thousands of processing cores and yet draw only hundreds of watts of power (as compared to hundreds of kilowatts for a system with an equivalent number of CPU cores).
Posted by: jeffrey K Hartge | Sunday, 27 December 2015 at 03:21 PM
Granting that it may be technically possible to increase MFT sensor pixel counts considerably, I think it is important to bear in mind that if you want to make use of these higher resolutions and take pictures that are uniformly sharp when viewed at the level of individual pixels, then several other factors come into play that can make this impractical or difficult.
1) Depth of field, when calculated with a circle of confusion commensurate with the inter-pixel spacing, becomes very narrow. Not a problem for photographing flat objects with a macro lens, but a very real problem trying to take landscapes with sharp foregrounds and backgrounds. Focus stacking can help in some situations.
2) Images at higher resolutions are more sensitive to blurring from subject and/or camera motion. This requires a very solid tripod, avoiding shutter shock, and being very conservative about shutter speeds to stop motion.
3) Any flaws in your optics will be magnified so you need to use your best lenses.
4) Blurring due to diffraction becomes visible at the pixel level at lower f-stops as you increase the sensor resolution. For the current 16MP sensors, diffraction already produces noticeable blur above f/5.6. By forcing you to shoot wider open to avoid diffraction blur, you further reduce DOF and make it hard to take sharp landscapes. (The rough calculation below illustrates this.)
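A rough version of that calculation, using the standard Airy-disk diameter of about 2.44 x wavelength x f-number; the pixel pitches are my estimates for a 17.3 x 13.0 mm Four Thirds sensor, not spec-sheet figures:

```python
import math

WAVELENGTH_UM = 0.55                   # green light, in microns
SENSOR_W_MM, SENSOR_H_MM = 17.3, 13.0  # Four Thirds sensor dimensions

def pixel_pitch_um(megapixels):
    pixels_wide = math.sqrt(megapixels * 1e6 * SENSOR_W_MM / SENSOR_H_MM)
    return SENSOR_W_MM * 1000 / pixels_wide

for mp in (16, 32, 64):
    pitch = pixel_pitch_um(mp)
    # f-number at which the Airy disk spans roughly two pixels, a common
    # (and arguable) threshold for diffraction blur becoming visible:
    f_limit = 2 * pitch / (2.44 * WAVELENGTH_UM)
    print(f"{mp} MP: pitch {pitch:.2f} um, diffraction visible past ~f/{f_limit:.1f}")

# 16 MP -> ~f/5.6, 32 MP -> ~f/4, 64 MP -> ~f/2.8
```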
Jonathan Sachs
Digital Light & Color
[Ctein replies: Jonathan- how nice to hear from you! (Jonathan and I are old friends.) For folks who are new to the photography game, I'll point out that Jonathan's admonitions are the same ones given to folks trying to do really sharp film-based photography. They are neither pros nor cons; they are just the way optical photography works. I'd also like to add that there are lots of good reasons beyond resolution why you want a whole lot of pixels, considerably more than your lens's blur circle commands.
For folks who don't know, Jonathan created Picture Window, which is my favorite non-Photoshop image processing program. People who find Photoshop's interface and design logic opaque and not particularly congruent with the way they've thought about printing should check out Picture Window at the http://www.dl-c.com website. ]
Posted by: Jonathan Sachs | Thursday, 31 December 2015 at 12:00 PM
John Camp's comments about computers miss an important point. The computer, for most people, is simply a device to perform various tasks. The power for performing those tasks has migrated increasingly to datacenters such as those owned by Google, Facebook, and Amazon. Datacenter-based computing has definitely increased in both the kinds and complexity of tasks as well as overall capacity, and continues to do so. It allows us to utilize thousands or even tens of thousands of powerful server computers for brief tasks like locating the nearest pizza restaurant. The benefits of datacenters are less obvious to photographers, perhaps, because network bandwidth limits how much image processing we can do in the cloud. For practical purposes, we need image storage to be local, with the cloud as one possible back-up destination. The analogy with computers breaks down with cameras precisely because a camera, to capture images, needs to be physically located very near the subject, at least near as compared to a datacenter that might be 10 states away or even at the antipodes. So to improve our uses of computers, we can shift much of the burden away from the device and into datacenters, whereas to improve cameras, the improvements need to be in the physical device in the hands of the photographer.
Posted by: Bill Tyler | Thursday, 31 December 2015 at 02:16 PM