
Tuesday, 13 October 2009


More than tricks for changing the way a shot is taken, what interests me in this approach is that you could change the user interface of your camera. Recently I looked at the CHDK firmware hack for my Canon G9, hoping to find a way to change the behaviour of the buttons and dials (e.g., swap the ISO dial for an aperture dial). No way.

No no no. I don't want my toaster or my car to be open-source, infinitely programmable, either. Sure, they might all have microprocessors, but it's painful enough looking after one computer. And mine's an Apple, not an Ubuntu box, for a reason! The other things are tools, appliances, and must kindly do what they're supposed to do and shut up.

OK, more seriously, what app exactly would you like to have on your camera, which you might reasonably need someone to program for you, rather than it just being built in? We have auto-bracketing, that's their example. I don't want a web browser, nor games, thanks.

If you build it, will they (Canikon et al) come?
Look at what we have today - the big boys are still using their own proprietary software even after the introduction of DNG.
The only way this guy is going to revolutionize the field is by getting one of the giants to adopt his concept or by making his own camera to take on the world. Does anyone see this happening in our lifetime?

It's quietly amusing that the big feature they talk about, in-camera HDR, is already available on the Pentax K-7 and K-x. I do wonder what would happen if one of the manufacturers released a software kit for people to write their own firmware. Maybe the work required to make it impossible to break the camera would make it impossible to do anything useful, but if possible it would certainly make the camera more attractive to the digital-DIY inclined.

The object of this exercise is not to "take on the big boys", or to provide the average consumer with a programmable box. The point is to provide digital/computational photography researchers and enthusiasts with a portable, affordable platform on which they can test new algorithms. This is currently hindered by the absence of an open camera platform. The hope is that this will lead to many new techniques for creating images, which in turn may one day feed into industry.

The idea of open source operating system software is enticing. The reality is that companies often surround the open source core with software that may be called open source but is essentially tweaked into a more or less closed system. Sure, the software licensing fees are lower, but support, if you can find it, is generally worse than the alternative. Sure, you have the source code and you can tailor it to your exact needs. Good luck with that one.

I'd be much happier if some camera company would provide an API or plug-in interface that would allow modules to be integrated with the camera firmware to provide enhanced functionality. I'm not talking about APIs for remote control and downloading images but interfaces that allow the basic functionality of the camera to be accessed. Don't hold your breath on that one either.

The negative comments could equally apply to mobile phones (I just want to make calls), yet without doubt, downloadable applications from 3rd party sources are a key to the success of the iPhone.

So third party lens makers could supply correction modules. Maybe someone could write an app to turn the Canon direct print button into mirror lock up :).

Perhaps you could buy modules from Adobe, Capture One et al which would read and manage the data from the sensor so that camera and post-processing software were perfectly matched.

Looking at it the other way round, this is also a way to achieve a minimalist camera - make video, picture styles etc. separate apps, and you get a camera with everything you need and nothing that you don't use.



The idea sounds good, as does Linux.

Yet I remain skeptical; for a camera to actually work in the hands of John Smith, just like for a computer to work in the hands of John Smith, I guess you need a stronger focus than is usually found in open initiatives.

But that's just me, and I wish them the best of luck with this project.

As expiring_frog points out, it's a research platform, not a consumer product. If nothing else, that should be clear once you look at the specs and the price.

With that said, as a sometime programmer and open source user I would have a blast if I could add and edit code to my DSLR in a supported manner. For an easy example, you could add a real interval timer for all kinds of stop-motion or multiple exposure effects.
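As a sketch of that interval-timer idea: the loop below schedules shots against absolute deadlines, so the interval doesn't drift by however long each capture takes. The `capture` callback stands in for a hypothetical firmware shutter call; nothing like it exists in any shipping camera's public API.

```python
import time

def run_intervalometer(capture, interval_s, frames):
    """Fire `capture` every `interval_s` seconds, `frames` times.

    Scheduling against absolute deadlines keeps the interval from
    drifting by however long each capture takes. `capture` is a
    placeholder for a hypothetical firmware shutter-release call.
    """
    start = time.monotonic()
    for i in range(frames):
        deadline = start + i * interval_s
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        capture()
```

On an open camera you would pass the firmware's shutter routine as `capture`; here any callable works, which is the point of a supported scripting interface.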

But beyond that, realize that you'd have access to all kinds of interesting data. You can read and write to various ports, so you could implement sound-triggered strobe firing for catching fast events, for instance. And you'd have access to rough distance estimation from all focus points via the focus subsystem, and perhaps also relative orientation of the camera from the anti-shake gyros. Combine with live-view processing and you could create a function to, say, shoot an image as soon as something small is moving within an area of the image - catch insects or birds in flight, or a runner crossing the finish line, with the winning runner in focus no matter what lane.
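The "shoot when something small moves within an area" trigger could start as plain frame differencing on the live-view feed. A minimal sketch, assuming the firmware hands us successive grayscale frames as arrays (the function name, parameters, and thresholds are all invented for illustration):

```python
import numpy as np

def motion_in_region(prev, curr, region, threshold=10.0, min_fraction=0.02):
    """Return True if enough pixels changed inside `region` of the frame.

    prev/curr: 2-D grayscale arrays from a hypothetical live-view feed.
    region: (row0, row1, col0, col1) window of the frame to watch.
    A pixel counts as 'moving' if its absolute difference exceeds
    `threshold`; we trigger when more than `min_fraction` of the
    region's pixels are moving.
    """
    r0, r1, c0, c1 = region
    diff = np.abs(curr[r0:r1, c0:c1].astype(float) -
                  prev[r0:r1, c0:c1].astype(float))
    moving_fraction = (diff > threshold).mean()
    return moving_fraction > min_fraction
```

A real in-flight-bird trigger would add the focus-point distance data the comment mentions, but the core loop is just this comparison run on every live-view frame.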

Frankly, I'd probably have as much fun programming my camera as using it.

I wrote software for 25 years in my previous life, and I never bought into the open-source movement. I may not have read or understood enough about it, but it always sounded more like religion to me than anything else. I never thought of writing programs as anything other than the 21st century equivalent of blacksmiths and machinists.

I have always thought that the people of my father's generation, machine-press and lathe operators and the like, watched me work till midnight writing code WHILE NOT BEING PAID OVERTIME and simply shook their heads at how stupid I could be.

Sorry for the rant.

My main complaint with ALL the home computers I have ever owned is that they WEREN'T appliances. I don't want a work-in-progress on my desktop when all I want to do is look at web sites and send email, so I certainly don't want an infinitely modifiable camera, refrigerator, or anything else. Aside from that, when the thing breaks, who fixes it? And besides that, how many photographers want to spend their days learning how to write software? Most people can't figure out how to use all the bloated features in MS Word.

Don't get me wrong, it's an interesting exercise, and were I younger with more time on my hands, I might be curious to fiddle with it, but that's about it.

Levoy uses some pretty ordinary HDR as his example for the TV viewers, but the computational vision work that goes on at Stanford and elsewhere is much more far-reaching. It is showing up in dedicated vision systems for cars, for example, not just smile-detection for P&S cameras. There is work on ways of storing data for 3D representation of images that allow rapid reconstruction of views from all angles. These folks need such a camera, and don't care what it looks like.

HP tried to sell an open operating system for small digital cameras about eight years ago, but the computing power for such a general purpose system didn't match the imaging power. Applications such as

* read a sign
* translate its text into your language
* advise on what to do next, or at least construct a metadata label

could have been done with the image quality of a P&S in 2002, but you couldn't write or run the software. Now we see these on the iPhone or Android. Even though cell-phone cameras with >2MP resolution and autofocus have been available for over four years, only these two platforms have reduced the programming complexity to the point where we will see a rich portfolio of downloadable consumer applications hit the market.
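That sign-reading application is three stages glued together. The sketch below wires them up with placeholder `ocr` and `translate` engines; both are assumptions standing in for whatever the platform provides, not any real camera or phone API:

```python
def annotate_sign(image, ocr, translate, target_lang="en"):
    """Sketch of the 2002-era 'read a sign' app: OCR the frame,
    translate the text, and construct a metadata label.

    `ocr` and `translate` are injected placeholders for real engines
    (hypothetical; no specific library is implied).
    """
    text = ocr(image)
    if not text:
        return {}  # nothing readable in frame, no label to attach
    translated = translate(text, target_lang)
    return {"sign_text": text,
            "sign_text_translated": translated,
            "label": f"{text} ({translated})"}
```

The interesting part historically is not the glue but that, as the comment says, in 2002 there was nowhere to run it.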

The real question for Levoy, et al., is why they aren't just doing their experiments on an Android (the Google-sponsored open-source programmable phone with camera, accelerometers, and GPS).


In the example of the child near a bright window given in the original article, maybe the photographer just needs to learn about fill flash. People would rather push a computer button than learn a skill.

It seems to me that intelligent people, inventors and the like, will always be trying to relieve the human burden, wearisome as it is.

I'm sure a computer programme has been invented to replicate the brush stroke of Da Vinci or Van Gogh, and I'm all for better exposed pictures of cats and dogs on the web.

It'll keep me coming back to the web every time. It will also inspire me to make more and more pictures of brick walls. My collection isn't complete as I haven't an M9 and associated lens to shoot my favourite walls with.

With a camera like this I could reach new HDR peaks and push on to the dark side of the moon, cool...

I will be thrilled when I can switch out the manufacturer's in-camera processing software with Adobe. If I could use my own ACR presets I'd be much happier with in-camera jpegs.

Doesn't it look remarkably like a large-format camera? Given how some photographers love to tinker, I expect this will have a fine future.

All this would be a non-issue if the manufacturers made their SDKs available. Yes, folks, they DO have an SDK [software development kit —Ed.] for each model of digital camera.

Wouldn't it be nice to be able to program the "Got Print?" button into something actually usable?

Maybe someday there will be a class-action lawsuit to gain access to these SDKs.

I much prefer a "closed", well-designed, well-supported system that's been carefully manufactured. When it fails I want to know who's responsible for fixing it. And I'm willing to pay, indeed I DO pay, a significant premium to be served with such products and services.

"Open" stuff is the wave of the future...and it always will be.

I think that the programmable camera is a really cool idea. Will I use it? Probably not, but other people who find it cool will make some great apps that I will use (read: Netscape, Napster, etc.). Things will come out of the tinkerers that we will all use and wonder how we ever lived without. It will be just like film. How could I ever have lived without Tri-X, Kodachrome 25, and HP5? My wife, on the other hand, just needs some 200-speed print film and she is happy with the results. And really, why would anyone want to carry around all those extra lenses in their camera bag, I mean, all that extra weight? ;-)

I think the point of this project is to provide a platform so that researchers don't have to re-invent the wheel just to try out some hubcap ideas. There is no reason to have to design an entire camera if you just want to try out a prototype sensor, for instance.

The HDR application in the article is sort of a lame example. If I were working on a computational photography project such as a flutter shutter or a synthetic aperture like in the camera array in the background, I'd want to start with something like this.

Speaking as someone who used to build cameras for fun, just to see what image some random piece of glass made, or because no one was selling stereo Polaroid ultra-wide-angle pinhole cameras, the CHDK stuff is fun. I've been doing some fun motion-detection timelapse stuff with it. It's more like having a box of busted Speed Graphics than like having a milling machine and a block of Delrin, though. The iPhone and Android apps are pretty cool; the ones that use the motion/vector sensors in particular are very interesting.

Speaking of camera hacking and hotrodding, I wish there were a way to reprogram the Pentax K10d to not let you release the shutter at the moment that the anti-shake system wakes up and shifts the sensor.

WOW! Open Source Photoshop and Lightroom in your camera! WiFi to your printer, display (live wallpaper) and on-line storage. No more computer!!!!! That's a good thing.

I find it incredible how people diss cool new technology.

Why learn to photograph at all, when you could just learn the skill of painting? The act of making a photograph using _any_ camera implies building on many years of research into physical, chemical and mathematical processes, designed to make the act of freezing a moment that much simpler. Did we protest as much when the rangefinder came along so we didn't have to estimate distance, or when the SLR came along so we could see exactly what we were framing, or when film got less and less grainy and finally progressed beyond 25 ASA, or when someone had the bright idea of inventing that very fill flash that folks are so fond of?

Indisputably, the difference between a good and a bad photograph lies 9 inches behind the lens (as Marc points out in one of those articles). But better photographic equipment does not somehow destroy artistic purity -- if so then it's been destroying purity from the day photography was invented.

The point is not that you write the firmware. The point is that anybody can write the firmware. See the combination of CHDK and Canon P&S cameras as an example.


@scott kirkpatrick: That's because they already do work on a Nokia N95. http://graphics.stanford.edu/projects/camera-2.0

Well, I remember posting pretty much the same idea here some time ago, on a post about camera control layout and such. It wasn't that ridiculous after all.

Hi Mike,

Here is a creative use of commonly available hardware (the Wii controller) to help researchers develop software and concepts.


Just for fun, here're a couple of little things you could put onto an open camera, given enough processing horsepower.

For folks complaining about the lack of shake reduction in their cameras:

And this for dialing down the flash:

I'd sooner have a digital Pentax MX and leave the innards to my imagination.

If this project does nothing other than show Canon and Nikon how much they have yet to optimize their opportunity to capture market share with useful features for every niche market, then it will have served its purpose.

Computational photography is the future of imaging, like it or not. To date, neither Canon nor Nikon has really even begun to realize just how far they could take their computers with a lens attached.

Just the announced threat of a Red One camera forced them to add video to current SLRs; hopefully this project will force them to open up their code with at least a plug-in architecture and start to realize the market-share gains possible with such a flexible system.

Digital anything is fundamentally about raw data and how you can capture/view and alter it to see what you need/want. Once you've opened that box, there is no going back. At some point in the future you will see an "Adams"-look filter built into cameras. And don't whine about that--if you really want to be pure, dust off an old film camera with manual everything--no light meters allowed--and while you're at it, make your own film, developer and glass plates too;)

HDR is hardly supported by the Pentax K-7; to say so is to not understand HDR, or to not have read the specifications in detail. It's kind of there, but not really even close. Nikon and Canon don't seem to care or have a clue about HDR, with the exception of the 1D series, which can do a reasonable bracket without any external control.

But HDR is just one small step in the computational photography made possible by attaching a lens to a computer. Bracketing of all sorts--focus, WB, ISO, multi-shot for resolution enhancement, and so on--could and should be in current-model SLRs, but the big players don't seem to really understand what they have built.

Hopefully this project will wake them up!



However, it's good for engineering students to learn.

Sounds to me like a solution in search of a problem. You can do the combining in Photoshop and achieve the same results. Or, you can learn how to take photos and correctly expose the photo in the first place.

The idea of open source firmware for the camera is interesting, though. This way we wouldn't be dependent on Nikon or Canon for the latest upgrades, and we could get the functionality we want. There would be a nice community of developers working on some neat functions if the firmware were open source.

The article makes it sound as if camera manufacturers aren't already making the effort to do all these things. Especially the part about trying to "push traditional camera makers to incorporate more of these flexible ways of producing images in their cameras". Sounds great except that the traditional camera makers are probably three steps ahead already.

The real value and hope of this is that it takes the initiative away from Japanese consumer electronics by offering the possibility of an alternative.

At the moment these companies totally control the digital hand-camera market and are very conservative, in that they all make more or less the same product, namely the black plastic SLR of the last 40 years with a sensor inside.

In this way, Japanese manufacturers simply make what they want, and it's for us to desire it. With this move, for the first time there is the hope that people in the West can use "off the peg" components to make an alternative with a European cultural bias, something that at present is represented only by the digital Leica.

"The point is not that you write the firmware. The point is that anybody can write the firmware."

That's what I took it to mean. People would share apps between them, and you could just find the app you want, download it, try it, and keep it or discard it depending on how you liked it. I suppose programmers could write their own, but I wasn't thinking that most photographers would do so. I was thinking more along the lines of plugins for Photoshop.


I don't understand why there are so many negative comments on Open Source.

I have a Mac for convenience (it's good-looking and well made, and I love the metal finish!) and... Linux too, with an Ubuntu sticker near the apple!
Basically, your OS X is Linux-based. For some people that's enough. They can make what they want. Okay...

But think about open source as a new possibility: spend your money on support, not on licenses! A chance to pay people, not licenses. For research and education it's clearly a better approach.
Programmers have to be paid, and they are paid for the work they do and the support they give customers. Nobody said they didn't have to be paid!

For simple things like e-mail, web browsers, and media players, you have the best in open source! For networking too!

For photographers, GIMP is a "Photoshop" clone, and you don't have to crack an "illegal" version put on the web only to make customers addicted to "one way of thinking":

"Learn on our cracked version and then you will never want to use anything else, because you are... lazy. You will promote our software free of charge... even in art school, hey hey."

There are photo browsers to organize your work, too. And if you want to shoot raw, buy a Pentax K-7, K20D, K10D, K200D, K-x, or K2000, even secondhand. They write DNG, fully supported by open raw-processing software.

If lots of people begin to pay 10 to 20 dollars for a version of a basically good piece of software, with the possibility of asking the developers what THEY need/want in the program, that is a different way of thinking.
YOU improve the software: the ergonomics, design, and functionality of the program.

It's like the 1,000-friends approach for artworks.
It lets developers do what they want and what they learned to do. What they like, too.

Stop thinking egoistically. "Bigger, fatter, and a lot more is better"...

"You know, the pros are using Photoshop; that's why I need Photoshop..."

If only art and photography schools taught how to use GIMP and co...
There are a lot more arguments, but I'll stop here. It's your turn to find information and stop being... lazy.

Please... I just want to press the button and take a picture........ (sob)

Dear Mike:

The computer scientists at Stanford University are working (in this example, on merging multiple frames for HDR and increased DOF) on a quest that today's hardware cannot support in all photographic situations (low light, moving subjects). Merging several frames into one is not a novel concept. Digital camera manufacturers have been heading in this direction, incorporating various image-editing options into the onboard camera processor. If it were feasible to produce reliable and consistent output now, the camera manufacturers would already have integrated the features into their offerings.

It is undeniable that having an open standard and access to cameras' firmware could set off an avalanche of digital camera innovation in many aspects of photography: the sky is the limit! However, I cannot see the camera manufacturers allowing access to their products. Tinkering with the software would certainly allow low-end cameras to perform on par with, or even better than, their flagship models. Features blocked by the manufacturers in entry-level cameras would become unlocked and compete against their advanced cameras… I do not see it happening any time soon.

Thank you,
Brian Stone

It's pretty clear to me. The main reason for this camera is for somebody to do something that existing cameras don't do. Something radically new. Something that you and I don't need to do, have never thought to do, but somebody really smart is going to think of something marvelous, and make this camera do it.

I write software all day at work. I do photography to relax and stop writing software.

If I had to fix bugs in my camera software, I think I'd just go back to film full time; I still use my ZI as much as my Canon 5D Mark II :-)

People on another list I'm on have been asking for LARGER exposure bracketing steps (for shooting for HDR) for literally years, yet camera manufacturers haven't done it yet. And people ask why we might want control of our own cameras? That's one trivial example -- but it's been a festering sore spot for a long time now, and it's trivial to fix; it's just that the owners of the closed platforms don't care enough to bother.
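For what it's worth, the arithmetic behind wider bracketing really is trivial, which supports the point that this is an easy fix. A sketch (each +1 EV doubles the exposure time; the function and parameter names are made up for illustration):

```python
def bracket_exposures(base_shutter_s, step_ev=2.0, frames=5):
    """Exposure times for an HDR bracket centered on `base_shutter_s`.

    Larger steps (e.g. 2-3 EV) cover a wide dynamic range in few
    frames; each +1 EV doubles the exposure time. `frames` should be
    odd so the metered exposure sits in the middle of the sequence.
    """
    half = frames // 2
    return [base_shutter_s * 2.0 ** (step_ev * (i - half))
            for i in range(frames)]
```

A 3-frame, 2-EV bracket around 1 second is just [0.25 s, 1 s, 4 s]; the camera already knows how to do everything except accept the larger `step_ev`.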

It's unlikely that the hardware will be competitive with the high-end of commercial DSLRs (or digital medium format) ever; it's unlikely that anything directly descended from this Stanford camera will be used by large numbers of people who are primarily photographers. But it will very probably be widely used by people doing unexpected and interesting things, and it will probably be used by a good chunk of the next wave of innovators in photographic hardware.

Ken: My experience with "closed" "well-supported" systems is that I get much less good support on them than I do on "open" systems. Nearly always, my best advice doesn't come from the company, it comes from some other user of the product. The more open the product is, the more the other users know about it. And "closed" product support people know you're a captive audience, and give the support you'd expect, in my experience. Especially these days. Also, I've had several products hit an untimely "end of life" because they were closed systems that the manufacturer chose to cease supporting. Nothing I could do about it!

Since I spent more time as a programmer than as a photographer, it sounds interesting. One alternative would be to hack into the Nikon, Canon, et al. camera firmware and rewrite it. Another alternative would be to continue using film.

I would think that the Chinese knockoff artists will soon come up with cheap firmware improvements for all of us.

One or two posts here are based on a misunderstanding of what camera makers' SDKs are for. I have used two, from Olympus and Nikon, and both are for writing apps that run on PCs (Windows only in the case of Olympus; Windows/OS X for Nikon). They definitely are not for reprogramming the camera itself.

That's where the research camera described here is completely different.

The Nikon SDK seems to be freely available to anyone; Olympus charges $30; Canon requires approval, which I didn't attempt to get; others may or may not have SDKs. Note that you have to be an advanced programmer to use these SDKs. It's not like writing HTML or Excel macros. The SDK docs are terrible (as we always expect with an SDK). Nikon offers zero support; Olympus has a forum which is somewhat helpful.


Mr Frog,
That motion deblurring stuff looks like fun. I'll have to try the non-blind deconvolution. Too bad the blind deconvolution executable is not in a public directory, but I think I can come up with a blur kernel from the image.

Thanks for the link.

Of course you could capture the pitch-yaw-roll information from the accelerometers or MEMS gyros or whatever Pentax and Sony use and write that and the movement of the sensor to the raw file data. Then the raw converter would not need to derive the blur kernel from the image itself but could build the blur kernel from the captured motion vector data, much like the camera makers are doing now with distortion and chromatic aberrations stored in a lookup table for each lens.

On second thought, you would probably want to capture light intensity over time, so that flickering light sources didn't mess up your blur kernel. And if you were going to do that, you might as well just design the sensor to continuously read out very short exposures and then combine them, perhaps obviating the need for the blur kernel in the first place.
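The gyro-to-blur-kernel idea, including the per-sample intensity weighting for flickering light, might be sketched like this. Purely illustrative: no current camera exposes this data, and the names and sizes are invented.

```python
import numpy as np

def blur_kernel_from_motion(xs, ys, intensities=None, size=15):
    """Rasterize sampled camera motion into a blur kernel (PSF).

    xs, ys: per-sample sensor displacement in pixels, e.g. integrated
    from gyro data during the exposure (hypothetical data source).
    intensities: optional per-sample scene brightness, so a flickering
    light source weights the motion path correctly.
    Returns a normalized `size` x `size` kernel suitable for
    non-blind deconvolution.
    """
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    if intensities is None:
        intensities = np.ones_like(xs)
    kernel = np.zeros((size, size))
    center = size // 2
    for x, y, w in zip(xs, ys, np.asarray(intensities, float)):
        i, j = int(round(center + y)), int(round(center + x))
        if 0 <= i < size and 0 <= j < size:
            kernel[i, j] += w  # splat this instant of the motion path
    total = kernel.sum()
    return kernel / total if total > 0 else kernel
```

This is exactly the lookup-table trick mentioned above, applied to motion instead of distortion: the raw converter gets the kernel from measured data rather than having to guess it from the image.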

So I guess what I'd want is to be able to capture the motion data from the anti-shake system, and none of the camera manufacturers are exposing that (in the software-development sense of the word).

Naw, what I _really_ want is a big dumb array of cheap cameras and some software to do some synthetic-aperture stuff, so that I could design the camera and determine the shutter speed(s), f/stop(s), plane(s) of focus, and aperture shape (or shapes, if I decide to make a stereo image from the data) after the moment of exposure. Yeah, that's what I want for Christmas.

Having recently updated my iPod Touch it was amazing to me that a two-year old product could suddenly become "new again". That said, I'm constantly dismayed to find that a site like CHDK doesn't offer any hacks for my "aging" 5D. Can you imagine being able to turn your still camera into a still/video camera [supposedly someone did this with a 40D] or any of the other myriad possibilities? The one big "hack" I'd like to see is interchangeable sensors. Now that would be a step in the right direction!

OS X is Unix-based, not Linux-based. In fact, Linux founder Linus Torvalds met with Jobs and rejected his pitch. Still, OS X is largely built on open-source software; e.g., Safari uses the open-source WebKit engine.

Open source on the server is mainstream now.

Open source started when one guy couldn't find a driver for an old printer and couldn't get help (imagine an older Epson printer under OS X 10.5: you can get the cartridge but not the driver).

It is a good strategy, as Apple has demonstrated.

The programmable camera is not a new idea. HP tried it, and there are hacking frameworks. The issue is whether open source can add some value. On the single-chip front, the Leica M9 tells us that you don't need much for a fully functional camera. The baseline for a camera is low on features (look at an 8x10). The middle is simply like current cameras, i.e., making common features like an intervalometer and raw capture available. Higher-end features like autofocus, ISO, etc. seem to be tied to hardware.

Multiple chip or new type of camera/camcorder would be interesting.

It seems to me that the best use of this kind of technology, besides interface modification, is to allow users to script their own automatic exposure modes to do exactly what they want. I shoot manual in large part because I often have a complex mental picture of my priorities: which of my preferred values for the various settings I'm willing to trade away for flexibility. I generally don't trust any of the off-the-shelf automatic modes of the cameras I buy to capture my preferences here, but I suspect I could script something pretty close to them with a little introspection.

Ideally, a camera would let me quickly install new scripts for this off of my PC, and maybe even make minor edits to installed scripts on the fly through the camera's controls, and it looks like the folks at Stanford are getting us closer to a world where this is possible. The business about HDR is a distraction.
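A scripted exposure mode of the kind described could be a small pure function over the metered value. Here is a sketch with invented priority rules (hold a preferred aperture, keep shutter at or above a handholdable floor, and only then raise ISO), assuming the camera reports scene EV at ISO 100; every name and parameter here is hypothetical:

```python
def my_auto_mode(scene_ev100, min_shutter=1/60, max_iso=1600,
                 preferred_aperture=2.8):
    """A personal auto-exposure 'script': aperture is held fixed,
    shutter is computed from the meter, and ISO is raised only when
    the shutter would drop below the handholdable floor.

    Uses EV = log2(N^2 / t) at ISO 100, so t = N^2 / 2^EV; each ISO
    doubling buys one stop of shutter speed.
    """
    n = preferred_aperture
    iso = 100
    t = n * n / 2.0 ** scene_ev100  # shutter time at base ISO
    while t > min_shutter and iso * 2 <= max_iso:
        iso *= 2   # one stop more sensitivity...
        t /= 2     # ...buys one stop of shutter speed
    return {"aperture": n, "shutter_s": t, "iso": iso}
```

The camera-side plumbing (loading the script, wiring it to the meter) is exactly what an open platform would have to supply; the photographer's part is this small.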

I find it ironic that the very people who are complaining that they want their camera to "just be a camera" are exactly the same people who always complain that the new camera from Big Camera Company Inc. wasn't "designed by photographers". Want your new camera to fit to a specific photographic workflow? A reprogrammable camera gives you exactly that.

@Robert Roaldi

>>>when the thing breaks, who fixes it?<<<

When something breaks, you do what one does with everything these days: you RECYCLE it.

Let's define it more clearly: "Recycling" an electronic device means shipping one that's outlived its useful lifespan (12-24 months) to a "developing nation" where small children with bloody fingers will strip the copper wires from the device to pay for their school books.

Then, an intrepid journalist with the latest version of a similar device will take perfectly exposed pictures of the children so we can feel sorry for them. But not so sorry that we stop recycling.

It's the circle of, ummm, life.

Hooray!

Herman wrote:
"Since I spent more time as a programmer than as a photographer, it sounds interesting."
Herman, since I spend more time as a programmer than as a photographer, it sounds absolutely dreadful!!! :)

@hugh crawford: Good point, that solution seems to require just an accelerometer, not the anti-shake actuators themselves. As you observe, non-blind deconvolution is likely to yield significantly better results. This will not handle _subject_ motion, though. You might be able to use tracking data from the AF sensors for that.

A sequence of very short exposures (without motion data) does handle that issue as well. The problem could be that the images are not bright enough to be registered accurately against each other. Perhaps you could jack up the sensitivity and remove noise in the merge process.
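The merge step itself is cheap: averaging N aligned frames keeps the signal while shrinking independent sensor noise by roughly sqrt(N), which is what makes "jack up the sensitivity and remove noise in the merge" plausible. A sketch, with registration assumed already done:

```python
import numpy as np

def merge_short_exposures(frames):
    """Average a burst of aligned short exposures.

    For N frames with independent sensor noise, the per-pixel mean
    keeps the signal but reduces the noise standard deviation by a
    factor of sqrt(N). Alignment/registration of the frames is
    assumed to have happened beforehand.
    """
    stack = np.stack([np.asarray(f, float) for f in frames])
    return stack.mean(axis=0)
```

The hard part, as noted above, is registering frames that are individually too dark; the averaging itself is one line.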

BTW, as far as the synthetic aperture stuff goes, note that you can put a microlens array into the frankencamera. Not the same baseline as the camera array (and currently a really small sensor as well), but you can probably do some of the after-the-exposure stuff.

"People would rather push a computer button then learn a skill. "

That's because people, by and large, are extremely sensible.

The overwhelming majority of people don't want to be skilled photographers, they simply want nice photographs.

I have only 24 hours in a day, not to mention a life. I have no interest nor desire to learn a new skill if I can have a button do it for me.

There is no sin in learning a new skill, if that's what floats your boat. But it is not a virtue, either. It is only a tool.

pax / Ctein

Mike, that's exactly the way I understood it, too. Plugins for Photoshop or extensions for Firefox. Or, indeed, apps for iPhone but without the manufacturer's approval. :-)

It would not be necessary for the big companies to open their code. If one small company (say Sigma) did, it would show the big companies what was possible and what people want. Further, it would allow smaller companies to spend $$ improving hardware issues, and not as much on software/firmware issues.

Mac OS X is Unix-based (not Linux-based)... a distinction that really is without a difference. IIRC, OS X is based on BSD Unix, just as NeXTSTEP was. Hmmm... Apple, NeXT, Steve Jobs... could there be any connection? Naw....

I don't see the utility of this camera for general image processing. Just bring the raw file into a desktop and do whatever. I can see the utility of it for trying out ideas that may make it into firmware one day, e.g., better (more natural) in-camera HDR. Doesn't the Ricoh CX1 digicam use multiple image grabs off of the live sensor to generate its HDR images? Why wouldn't the same thing work with a dSLR and its much larger sensor?

I echo another commenter's suggestion of using an Android mobile phone w/camera for experimentation. Everything is available and open... and the fact that you can either make your own OS image or download others' images to your mobile phone is pretty durn neat.
