
Friday, 25 December 2015

Comments


So does this mean there won't be a third edition of Digital Restoration from Start to Finish?

Too bad, because I use my copy a lot, but a lot of new stuff has happened since it was printed.

[Ctein replies: Mark- no! I am hard at work on the 3rd edition. I've got a bunch of cool new tricks. I just won't be selling copies of the book, myself.

In truth, the manuscript was due in two months ago (sigh). My editor is being very understanding. Publication will be sometime next year. It is unlikely there will ever be a 4th edition. ]

I'm looking forward to reading about what your eyes (and tests) say about the new Canon printer compared to the Epson P800 for bringing some of these technical camera and lens improvements to life.

And beyond that, do you have any thoughts on the direction printers (and print materials) are likely to go (especially given the proliferation of online viewing at perhaps the expense of printing)?

[Ctein replies: Jeff- at this point I have no plans to test the Canon printer. Print quality will improve slowly and incrementally in the future because all the low-hanging technological fruit has been picked. There are no fundamental limits to the sharpness, color gamut or density range, just a lot of hard work to make it better. There is definitely room for perceptual improvement, though.

I haven't looked into the demand for printing, lately. As of two years ago, the demand for photographic prints (using the term inclusively) of all types was still steadily growing. Which came as a big surprise to me. It may have finally leveled off.]

Two things strike me as odd -

First, I thought that lately the "X" in "X-Mas" stood for Fuji sensors.

Second, I loved my Wankel rotary engines! :)


Good column, very informative - thank you!

[Ctein replies: Earl- "X-TransMas?" OK, sure, why not! Folks love the Foveon sensors, too. Unfortunately, the collective industry does not share the affection.]

Great column debunking the faux physicist trolls.

Recent Olympus product developments also suggest an additional parallel approach that sidesteps current engineering and cost limitations.

The E-M1's new in-camera focus-stacking mode and the E-M5 II's high-resolution sensor-shift mode both indicate that hand-held deep-focus and high-resolution modes are imminent, thanks to faster sensor read-out speeds and more sophisticated in-camera algorithms.

Ctein, I wonder if you have seen Bill Claff's dynamic range charts based on DxO data and his own file measurements. He includes "ideal" performance for each sensor size (which I'm guessing does not account for any major technology change).

http://www.photonstophotos.net/Charts/PDR.htm

[Ctein replies: John- That data is wildly at odds with any measurements I've made, in which I am quite confident, as well as those of sites I have reason to trust technically (like DxO). His values for exposure range are way low. Now, it's easy to get erroneously low values for exposure range; it's very hard to get erroneously high ones. So, I believe me. I don't know what his assumptions are, nor what his methodology is. I don't really care; it's not my problem to figure out why his results are wrong. ]

I got a copy of Saturn Run for Christmas. That picture of the mug isn't a plot spoiler, is it? : /

[Ctein replies: Roger- Oh, totally. That ruins the book. Don't waste your time. Get the receipt and exchange it for something worth your while, like a nice Lee Child novel.]

It's my impression (based on far from rigorous noodling around) that even some of the better old Takumars from the early '60s seem to out-resolve the current m4/3 sensors, at least in the centre of the frame, so I'm quite happy to trust Ctein's argument.

My real X is: Exactly how many significant others do you have, Ctein???

[Ctein replies: Miserere- Exactly?! That would depend on how significant the significance would have to be. There's kind of a descending scale, there. Where does one draw the line?]

"in terms of mass/commercial/consumer computing, there really hasn't been much in the way of advancement in the past five years, because the very large majority of people don't need and won't use those advancements. In fact, we seem to be going somewhat backwards..."

If a smartphone is a personal computer, then people are still demanding, and buying, faster computers. In the last five years, there has been a big advance in processor speeds and screen densities in the computers that most people buy and use.

I agree with the larger point though: "Good enough" has arrived for the majority of people using desktop and laptop computers. I think it may be only a few years off in smartphones. I think phone and tablet processor speeds could level off at around the same speeds that legacy computers are at now.

I think it is worth contemplating when and how "good enough" will happen with digital cameras for the majority of users. When that happens, will some manufacturers offer "halo" camera equipment like the LaFerrari or Porsche 918? Maybe so.

Computers are far from "good enough". However, there have been major shifts in how people measure "goodness". Compared to a three-year-old laptop, my phone runs a general-purpose benchmark about as quickly, but has more radios (cellular and NFC in addition to Bluetooth and wi-fi), many more sensors (GPS, accelerometer, compass, fingerprint sensor, a rear-facing camera), and it's much more power-efficient.

To say that computers aren't advancing, you have to define either "computer" or "advancement" excessively narrowly.

The paragraph that starts with "Further down the pike......": I have no idea what it means. A detailed explanation would be helpful.

[Ctein replies: Dauga- I imagine it would, but a detailed explanation would run many, many pages. Instead, let me make another broad comparison and that will have to suffice. Currently, photonics by and large can't operate at optical frequencies. It's limited to clever versions of “Oh, look, there's a bunch of photons. Oh, look, there isn't.” It's like the optical equivalent of Morse code—dots and dashes. Compare Morse code to what you can do with real radio communications. That's what you can do with light when you have electronics that can work at optical frequencies. It's going to be an incredible game changer, and we are almost there.]

I'm currently recovering from shock at the performance levels of my Xmas-present iPad Pro. That thing has a multi-core (2) Geekbench 3 processor/memory rating of 5,498, which roughly converts to 71,474 VAX-11/780 MIPS (1978-vintage millions of instructions per second). It drives a 2732 by 2048 pixel display of 264 pixels per inch, achieving frame rates of 80+ per second. It weighs 1.57 pounds and is a smidge over 1/4 inch thick. An introduced-in-2011 iPad 2 has a Geekbench 3 rating of 263 and a 1024 x 768 pixel display of 132 pixels per inch. In the last 4 years that's a factor of 20 increase in processor performance and a factor of 7 increase in display resolution.

Closer to home, a late 2015 iMac 5K (5120 x 2880 display resolution) appropriately configured for Photoshop at around $4K will easily outperform a 2013 Mac Pro at around $8K without display. All the better to support 83MB raw files from a Sony A7RII.

Believe me, mass consumer/commercial computing is getting a lot faster. Huge gains are also being made in internet bandwidth and latency, both wireless and home delivery, and in the cloud-based scalability of software as a service. Is a 2015 iPhone 6s faster and more capable than a 2010 iPhone 4? (That's rhetorical.) The vast majority of people do indeed use these advancements, and just accept them.

Getting a new computer is a bit like having cataract surgery - you just don't notice how much you are accommodating lens flare and yellow shift until you get the new lenses. Similarly, when reboot times drop from minutes to seconds because of solid-state memory, and backups happen automatically overnight - then you notice how much you were accommodating that old machine.

Being mostly concerned with printing up to 12x18" to a "high enough" standard, I am very happy with my clutch of ageing but capable E-M5 Mk I's.
I just left (today) my long-term job in a bricks-and-mortar camera shop, and one thing I won't miss is the constant myth-busting required to stop people buying huge, over-pixelled SLR cameras with a steep learning curve, just to take snaps of their kids.
Glad we have more to come; I'm happy with now.
(Last sale on my last day was a D750 to a couple who, knowing nothing, googled "best SLR" and came up with that, but at least they went out with a 50mm f/1.8 and not a monster zoom!).
Great and timely article, as usual.

Those colored filters toss out three-fourths of the light before it even gets to the sensor.

Nothing to do with the subject, but why do Americans have a 25 cent coin called a quarter but refer to 3/4 as three fourths? We would say three quarters.

Just another one of life's irrelevant mysteries which I think about!

[Ctein replies: Steve- Americans say it both ways: "three-fourths" and "three-quarters" are synonyms. I don't know if there are local regional or cultural preferences for one or the other, but I hear both all the time. I don't use one consistently over the other; it's whatever pops out of my brain at the moment of utterance.]

The Fuji/Panasonic organic sensor is claiming very high DR, despite the colour filter. This seems to be the result of a much higher saturation capacity, around 4X that of a conventional photodiode (and presumably very low read noise).

A typical 16MP 4/3" photodiode would have a capacity of around 25,000 photoelectrons and a theoretical DR of around 14.6EV, so 100,000 would be two stops more or 16.6EV, which sounds reasonable (ignoring read noise).

This would be in line with the best current FF sensors. Using the same tech would push them up to around 18.6EV.

A 20-bit ADC seems a bit of a reach, but assuming there is some read noise, 16 bits would still be a major improvement.

4X the capacity would also indicate roughly double the SNR and double the tonal range (using the DxO definition of tonal range).

Expected production in 2-3 years.

(Note: rumours quoting 120dB DR from Fujirumours are so far unsubstantiated, and the 88dB DR quoted on the link does not specify the sensor size.)
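To make the capacity arithmetic above easy to check, here is a minimal sketch (my own illustration, using the common idealization that theoretical dynamic range is simply log2 of the saturation capacity, ignoring read noise):

    import math

    def theoretical_dr_ev(full_well_electrons):
        # Idealized dynamic range in EV (stops): log2(saturation capacity / 1 electron)
        return math.log2(full_well_electrons)

    print(round(theoretical_dr_ev(25_000), 1))   # 14.6 EV for a ~25,000 e- 4/3" photodiode
    print(round(theoretical_dr_ev(100_000), 1))  # 16.6 EV with 4X the capacity, i.e. two stops more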

[Ctein replies: Steve- forgive me, but I am going to use you as the trigger for a rant. FERGODSAKES, PEOPLE, DON'T SAY “DYNAMIC RANGE” IF YOU'RE TALKING ABOUT EXPOSURE RANGE! This is a perfect example of why. You are sometimes talking correctly about sensor dynamic range and other times not. Dynamic range does not convert to exposure range. I wrote two columns explaining why, which people should reread, comments included, before they want to argue with me (and I will be able to tell if you try to fake it, trust me).

An increased sensor dynamic range may or may not result in an increased exposure range. Example: as you know (but most readers won't), most sensor noise is a bulk phenomenon, so increasing the depth of the pixel well to accommodate more photoelectrons without doing anything to improve the noise characteristics means that the noise floor will increase (almost) in proportion to the dynamic range. There will be little or no improved exposure range. In practice, there would be little point in engaging in such a redesign, except as an intermediate step. I'm just providing it as a really simple example. Really, folks, go read the previous two columns.

In the meantime, everyone, JUST STOP IT!

Whew. I feel better.

You mentioned the readout issues, which are indeed significant for very-long exposure range systems. As I've explained before, exposure range and bit depth are independent metrics. Exposure range is like the total height of the staircase and bit depth the number of steps in that staircase. If you don't have enough bit depth, the individual steps become inconveniently or even unworkably high. You'll wind up with problems similar to what happens when you work in a very large color space, like Wide Gamut or ProPhoto RGB when you've only got eight bits per color channel: Visible quantization. Also, adding more bits to your readout electronics doesn't help unless they're clean bits. A 16-bit readout with the four lowest order bits being noisy isn't better than a clean 14-bit readout.]
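To make the staircase point concrete, here is a tiny sketch (mine, not Ctein's; it simply counts the levels that carry signal rather than noise):

    def clean_levels(total_bits, noisy_low_bits=0):
        # Usable tonal levels from a readout whose lowest-order bits are noise.
        return 2 ** (total_bits - noisy_low_bits)

    print(clean_levels(16, noisy_low_bits=4))  # 4096 -> effectively a 12-bit readout
    print(clean_levels(14))                    # 16384 -> a clean 14-bit readout does better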

"No, Foveon sensors are not the answer. They're inefficient, and the other technical problems . . . they're the Wankel rotary engines in the world of digital cameras."

I remember naively looking under a Mazda hood and being disappointed there was no Wankel engine there. Here's why:

http://www.autoblog.com/2013/11/19/mazda-ceo-says-no-more-rotary-wankel-engines/

"While the Wankel rotary engine does indeed make a lot of power in a small, lightweight package, it does so while burning lots of fuel and emitting lots of noxious gases into the atmosphere, at least when running on gasoline . . . it is possible that we'll see a rotary return on an alternate fuel sometime in the not-too-near future."

But the advantages have to be so great that the market will be there. Research continues.

Ctein, it's a bit late to suggest that now, I'm three quarters/three fourths of the way through the book! I've already got a nice Lee Child novel this Xmas from the same person who gave me a copy of Saturn Run. : ]

"I can tell you for a fact that the 45mm and 75mm Olympus lenses will hold up just fine, corner-to-corner, when a 32-megapixel camera comes along."

It's already come along, with the High Res Mode of the E-M5 II. While not ready for prime time with subjects that aren't static, it certainly is higher than 32 MP resolution, and useful for many subjects.

From tests, it's clear that the 64 MP Raw files aren't quite the overall equivalent in actual resolution of detail to the Pentax 645Z's 50 MP Bayer-array sensor. It does nicely out-resolve the 645Z on small, fine, repetitive detail.

OTOH, the HR Mode is far superior to the 36 MP Nikon D800 for small and/or repetitive detail, as the D800 has terrible moiré effects, and the Oly in HR Mode does not.

The test shots on Imaging Resource appear to have been shot with the older, 4/3 50/2 macro lens. While it seems up to the job, we can't really know if a sharper lens might make the sensor look a little better.

While I imagine that at least the 45/1.8, 75/1.8 and 60/2.8 Macro are fully ready for at least 32 MP, how may one be sure? I have all three lenses and the E-M5 II, but wouldn't know what to compare them to.

Well put. I feel that a higher-resolution micro 4/3 sensor is inevitable; the technology has been maturing, and the advantages of higher resolutions are obvious even if the final image will be downsampled.

However, I've been bothered for a while that the discussion about cameras is still largely about resolution, which I feel is increasingly reaching a point of sufficiency for the vast majority of users. I would like to see more focus on the ability to reproduce color and capture demanding exposure ranges; when using a small-sensor camera, the limitations in those areas are the ones that bother me most.

I'm not sure if daugav369pils wanted further explanation of the paragraph or the idiom. "Further down the pike" can mean 'further down the road' - pike being short for turnpike (a toll road). Merry Christmas!

Hey! I shoot with Foveon sensors and drive a Mazda RX-8. What are you saying? A happy X to you.

[Ctein replies: Michael - That you are a member of an extraordinarily select group. ]

Dear Ctein (happy to continue offline if needed, because this subject is very interesting...)

FERGODSAKES, PEOPLE, DON'T SAY “DYNAMIC RANGE” IF YOU'RE TALKING ABOUT EXPOSURE RANGE!

I'm not sure where I did.

I accept that exposure range is not the same thing as DR for all the reasons you explained, but what you describe seems to relate to tonal range as much as dynamic range (both as defined by DxO).

I.e., the ability to detect tonal changes at signal levels less than the noise level, given more than one pixel to look at. It's an interesting perspective, and I would like to know how it relates to DxO's definition of tonal range at much higher signal and noise levels.

For my part I was just trying to gauge roughly what the potential improvement could be in terms of engineering DR, SNR etc. since this is what Fuji (and DxO et al.) keep talking about, right or wrong.

20EV capacity with a noise floor of 12 electrons (read and thermal noise) yields around 16.4 EV of DR, so a 16 bit ADC would cause very little quantisation. Of course, 20 e of downstream noise may make the improvement even less noticeable, around 15.4 EV, but this is still higher than the results obtained for a Nikon D4 (for instance).

The other issue is tonal range (defined by DxO as the number of noise-limited tonal gradations between noise floor and saturation). This is mainly shot/PRNU noise limited and likely to be much less than 16 bits. Probably slightly less than 10 bits, although that is still usefully more than the 8 bits or so quoted for a D4 (for instance), given the prevalence of 16-bit editors and high bit-depth printers (even if you may still see issues on an 8-bit display).

However, you have made me wonder if this definition of tonal range is also a nonsense, in the sense that if you can distinguish tones at separations less than noise, then tonal range is also less useful as a measurement.

Nevertheless, as a yardstick, I would assume that improved DR and tonal range would yield a likely improvement in exposure range.

Please feel free to set me straight (by email) if I made an incorrect implication or if this assumption is wrong.

PS. I have read Emil Martinec's excellent paper many times and it is indeed very useful. The maths is happily not too strenuous.
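Putting Steve's figures into a quick sketch (my assumptions: "20EV capacity" means 2^20, roughly a million electrons, and independent noise sources add in quadrature):

    import math

    def engineering_dr_ev(full_well_electrons, *noise_sources_electrons):
        # Engineering dynamic range in EV: log2(full well / total input-referred noise).
        total_noise = math.sqrt(sum(n * n for n in noise_sources_electrons))
        return math.log2(full_well_electrons / total_noise)

    fwc = 2 ** 20  # ~1,000,000 electrons
    print(round(engineering_dr_ev(fwc, 12), 1))      # 16.4 EV with a 12 e- read/thermal noise floor
    print(round(engineering_dr_ev(fwc, 12, 20), 1))  # ~15.5 EV (close to the 15.4 figure above) with 20 e- of downstream noise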

[Ctein replies: Steve- I think we can leave this be, because we're in substantial technical agreement. One minor note and one correction. The minor note– when you talk about dynamic range in "EV," you are conflating dynamic range and exposure range even if you don't mean to be, because EV is a measure of exposure. It perpetuates the confusion.

The minor correction– if "tonal range," as used by DxO, is as you say (I haven't looked into it), then it's another way of measuring how many stairs there are in the staircase; whereas exposure range is about the total height of the staircase. So, they'd be different things.

Fun, as always, Steve!]

The minor note– when you talk about dynamic range in "EV," you are conflating dynamic range and exposure range even if you don't mean to be, because EV is a measure of exposure. It perpetuates the confusion.

OK, I can give you that one ;-)

Unfortunately this has been more or less fixed in people's minds by DxOMark. Nevertheless, it's useful in the sense that DR in EV can never exceed the bit-depth, both being Log2(signal).

Thanks for the reply, and still interested in your assessment of DxO 'tonal range' if you ever take a look at it at some point.

"The test shots on Imaging Resource appear to have been shot with the older, 4/3 50/2 macro lens. While it seems up to the job, we can't really know if a sharper lens might make the sensor look a little better."

Ahem - there is no sharper lens.

Any digital camera developed after 2011 is capable of outperforming its chemical counterpart 2 to 1 in resolution, speed, editability (is that a word?), ISO and what not. A humble 550D with some good glass can do tricks these days for which you would have been burned for witchcraft back in the '80s. I predict (with a trusted palantir at my side) that in the future cameras will be seen more and more as an input device (yeah, I updated the BIOS of my brand new motherboard; end of info and much rejoicing).

My cameras are used for:

1) Creating panoramas of Gpixel dimensions using Kolor AutoPano Pro.

2) Creating 3D scans of anything ranging from a 4 cm matchbox car from the '60s up to the facade of the Xanten cathedral, using Agisoft.

3) Being integrated in my SLS David Scanner

4) Creating HDR images and panoramas using Photomatix and Magic Lantern (and a 1965 Gitzo tripod).

And about the 20 Mpixel sensor...why not? We don't have to shoot at ISO 3200 when shooting landscapes!

Greets, Ed.

A comment on John Camp's comment...

The big advancement in "mass/commercial/consumer computing" in the last 5 years has been in power consumption. The major manufacturers are responding to the consumer. The mass consumer doesn't want more computing power (currently) and instead wants a smaller form factor (requiring lower power consumption). Many of these tablets have more computing power than desktop computers of 10 years ago while drawing a very small fraction of the power (single watts versus hundreds of watts).

Another funny note...
One of the more interesting things about the consumer market (although not the "mass" consumer market) has been the quest to satisfy "gamers". Gamers are hard-core computer consumers who demand very high computer-graphics rendering performance. The insanity of the demands of these people has driven the performance of graphics cards into the stratosphere. Now, the research community is using high-end consumer graphics cards to study neural networks (a class of programming processes that have allowed for the rapid improvement in voice recognition, image identification, the Google car, etc.). Interestingly, Adobe has taken note of the improved power of graphics cards as well and, if available, Photoshop will use the graphics card to perform some of its image-manipulation processes.

It turns out that graphics cards fit into the "lower power consumption" paradigm. High-end graphics cards typically have thousands of processing cores and yet draw only hundreds of watts of power (as compared to hundreds of kilowatts for a system with an equivalent number of CPU cores).

Granting that it may be technically possible to increase MFT sensor pixel counts considerably, I think it is important to bear in mind that if you want to make use of these higher resolutions and take pictures that are uniformly sharp when viewed at the level of individual pixels, then several other factors come into play that can make this impractical or difficult.

1) Depth of field, when calculated with a circle of confusion commensurate with the inter-pixel spacing, becomes very narrow. Not a problem for photographing flat objects with a macro lens, but a very real problem trying to take landscapes with sharp foregrounds and backgrounds. Focus stacking can help in some situations.

2) Images at higher resolutions are more sensitive to blurring from subject and/or camera motion. This requires a very solid tripod, avoiding shutter shock, and being very conservative about shutter speeds to stop motion.

3) Any flaws in your optics will be magnified so you need to use your best lenses.

4) Blurring due to diffraction becomes visible at the pixel level at lower f-stops as you increase the sensor resolution. For the current 16MP sensors, diffraction already produces noticeable blur above f/5.6 (see the rough calculation sketched below). By forcing you to shoot wider open to avoid diffraction blur, you further reduce DOF and make it hard to take sharp landscapes.

Jonathan Sachs
Digital Light & Color

[Ctein replies: Jonathan- how nice to hear from you! (Jonathan and I are old friends.) For folks who are new to the photography game, I'll point out that Jonathan's admonitions are the same ones given to folks trying to do really sharp film-based photography. They are neither pros nor cons; they are just the way optical photography works. I'd also like to add that there are lots of good reasons beyond resolution why you want a whole lot of pixels, considerably more than your lens' blur circle commands.

For folks who don't know, Jonathan created Picture Window, which is my favorite non-Photoshop image processing program. People who find Photoshop's interface and design logic opaque and not particularly congruent with the way they've thought about printing should check out Picture Window at the http://www.dl-c.com website. ]
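To put rough numbers on Jonathan's diffraction point, here is a small sketch (my own figures: a 16MP Four Thirds sensor has a pixel pitch of roughly 3.75 µm, and I use the common 2.44·λ·N approximation for the Airy-disk diameter at λ = 0.55 µm):

    def airy_disk_diameter_um(f_number, wavelength_um=0.55):
        # Diameter of the Airy disk (first minimum), ~2.44 * wavelength * f-number.
        return 2.44 * wavelength_um * f_number

    pixel_pitch_um = 3.75  # approximate pitch of a 16MP Four Thirds sensor
    for f in (2.8, 4, 5.6, 8):
        d = airy_disk_diameter_um(f)
        print(f"f/{f}: Airy disk ~{d:.1f} um, ~{d / pixel_pitch_um:.1f} pixels")

At f/5.6 the Airy disk already spans about two pixels on the current 16MP sensors, which matches the "noticeable blur above f/5.6" observation; shrink the pitch for a hypothetical 32MP sensor and the same disk spans nearly three.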

John Camp's comments about computers miss an important point. The computer, for most people, is simply a device to perform various tasks. The power for performing those tasks has migrated increasingly to datacenters such as those owned by Google, Facebook, and Amazon. Datacenter-based computing has definitely increased in both the kinds and complexity of tasks as well as overall capacity, and continues to do so. It allows us to utilize thousands or even tens of thousands of powerful server computers for brief tasks like locating the nearest pizza restaurant. The benefits of datacenters are less obvious to photographers, perhaps, because network bandwidth limits how much image processing we can do in the cloud. For practical purposes, we need image storage to be local, with the cloud as one possible back-up destination. The analogy with computers breaks down with cameras precisely because a camera, to capture images, needs to be physically located very near the subject, at least near as compared to a datacenter that might be 10 states away or even at the antipodes. So to improve our uses of computers, we can shift much of the burden away from the device and into datacenters, whereas to improve cameras, the improvements need to be in the physical device in the hands of the photographer.

