
Wednesday, 19 January 2011

Comments

I started to wonder: how do you calibrate and set the aim of all those tiny little cameras in the array (RAIC?). Then I realised: you don't. You move a target around in front of the camera, watching where it appears in each camera's individual output, and mark each one with an angle offset.

Then, if you are feeling very frisky and slightly dissatisfied with what ain't broke, you do this at two measurement distances. Then you use the differential information to track the incoming light rays and make a refocusable (plenoptic) image [grin].
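A minimal sketch of that calibration idea (my own toy functions, not anything from the post): one measurement distance gives each camera's angular aim offset, and the same target located at two known distances gives two points on the same incoming ray, hence its direction — exactly the per-ray information a plenoptic reconstruction consumes.

```python
import math

def aim_offset(lateral_error, distance):
    """Single-distance calibration: the angular aim offset (radians) of one
    camera, from how far laterally the target appeared from where it
    should have, at a known distance."""
    return math.atan2(lateral_error, distance)

def ray_from_two_points(p_near, p_far):
    """Two-distance calibration: the target positions measured at the near
    and far distances are two points on the same incoming light ray, so
    their difference gives the ray's direction (normalized here)."""
    d = [f - n for n, f in zip(p_near, p_far)]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]
```

A camera that sees the target one unit off-axis at one unit of distance is aimed 45° off; a ray through (0, 0, 1) and (0, 0, 2) points straight down the z-axis.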

Cool, I would never have thought of that. I was just thinking of an iPad in a wooden box (with a focusing cloth!) to fool your subjects, and possibly yourself, into a different way of working... but this is much more interesting.

And is f/2 really like f/2 in all the scenarios? In terms of noise, I think yes. In terms of DOF, well, now I need to figure it out, but for the first scenario, if you spaced the grid out more widely you'd get less of it, so there must be at least one more variable. I think the % overlap would matter too, which suggests that you could have a DOF slider in Lightroom, trading off against noise in post...

Fascinating. Technological evolution does not stop; usually, about the time people think we've hit a dead end or plateau, someone rewrites the rules. This could be what the next generation of ultra-high-resolution photography looks like.

My mind is already buzzing with thoughts of sensor arrays: using commodity 3 MP sensors with single color filters instead of Bayer arrays, then putting whole sensors in a Bayer pattern with enough overlap to do multi-image noise reduction.... many interesting possibilities. Differential or zone focus would be like a schizophrenic 3D tilt-shift mechanism limited only by processing power and software algorithms.

The relative inexpensiveness of each sensor unit brings these possibilities easily within competitive range of current high-end DSLRs or digital medium format.

Fascinating concepts, but correct me if I'm wrong: this offers no capability for multiple focal lengths and no capability for traditional view camera movements. It is basically a very high resolution fixed-lens camera in a tablet form factor, right?

I could see myself buying one, just making sure I'm understanding what we're looking at.

How might one expect the final resolution to scale with the number of input images? Naively, I might expect the number of resolved pixels to scale with the square root of the number of independent images, because it seems like every "add a bunch of data sets together to get a better data set" procedure scales like that. I ask, of course, because, if my wild-assed guess is right, then Method 1, where the number of resolved pixels self-evidently scales linearly with the number of sub-images, will get significantly better final resolution. (N.B. I won't exactly fall off my chair in shock if I'm wrong.)

I can't picture this. Am I the only one here who needs a picture? We are photo fans after all.... ;)

This is a really interesting idea. I can also imagine a version 2.0 of this idea where the direction each camera is pointed is controlled by a motor, so that the FOV can be modified on the fly like a zoom lens. This would probably have to go with #3 for the processing.

With separate lenses, the possibility also exists for capturing some depth information to create a 2.5D scene. That information could then be used to set a DOF after the fact and even apply a blurring algorithm to match the bokeh from the lens of your choice.
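As a toy illustration of that after-the-fact DOF idea (my own sketch, not anything proposed in the post): given a per-pixel depth map, blur each pixel in proportion to its distance from a chosen focal depth. A real pipeline would use a lens-shaped kernel to imitate a particular lens's bokeh; a simple box blur stands in for it here.

```python
import numpy as np

def synthetic_dof(image, depth, focus_depth, strength=2.0):
    """Blur each pixel of a grayscale image by a radius proportional to how
    far its depth lies from the chosen focal depth (simple box blur)."""
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(strength * abs(depth[y, x] - focus_depth))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out
```

Pixels at the chosen focal depth get a blur radius of zero and pass through untouched; everything nearer or farther melts progressively.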

As far as processing goes, who says the camera has to do the processing? Send the image data over the air to a supercomputer that will have the image ready for you to download from anywhere (even back onto your camera).

Current cameras have only just begun to scratch the surface of the possibilities of computational imaging. Kudos to Sony for their multi-shot HDR, noise reduction, and panorama sweep, but there is so much more that can be done (beyond Fart Ilters).

I'll confess I'm merely a rank amateur hobbyist with a low end DSLR, so perhaps I can ask a couple of stupid questions?

Isn't a view camera something with adjustable bellows to work with plane-of-focus issues? The tilt/shift/swing movements allow light to fall on a flat sensor plane at varying distances from the lens. I can't quite see how your design allows for that -- aren't all of the lens/sensor distances the same? I can only imagine the product of such an integrated array of multiple cameras with such short focal lengths would be an image with literally nothing out of focus.

Such an image could be great for many scientific applications - and I know you are very expert in astro-photography - but surely it wouldn't work in the marketplace as a general camera. More to the point, could you not achieve the same end - perfect focus through all planes of the final image - using different techniques such as stacking images made from a single high-end camera?

Dumb Question:
So how would this work with technical movements? I'm presuming two reasons for a view camera -- a bigger capture surface, or the use of shifts, tilts, etc. Would your proposed sensor matrix replace the film plane (much as a digiscoping adapter does), or is it just a big "synthetic" camera?

Not sure if my previous question came through:

How would this matrix of sensors support technical movements (tilts, shifts)?

For me the big attraction of a view camera comes from the way these movements alter the image-- i.e. the view camera as a kind of primordial optical workbench.

And this is why we'll never actually find out whether our DSLRs last 20 years!

Well, maybe we will anyway; no doubt there will be people nostalgic for the pristine directness of capturing an image on one single sensor chip all at once, and repulsed by the idea of a photograph assembled computationally from splotches of light, who will want to resurrect the old hardware to use.

I love this idea. The possibilities of stitching together many low-quality images into something of high quality. It sort of evokes what RAID stands for (redundant array of inexpensive disks): using commodity hardware to create something that is much more powerful and fault tolerant than its single components. Love it!

Don't options 1 and 2 have difficulties with parallax? When super-resolution images are captured, you try to have a common nodal point, as I understand it; that might be tricky in a flat panel.

I'd really like practical large format sized sensors. Perhaps there's some flat panel display technology that can be run "in reverse", in the way a CRT can be used as a scanning tube?

- Steve

As a GigaPan user since the beta, I'd recommend that. The longer focal length gives you much better control over depth of field, something that the other solutions don't give you.

I like your thinking. It would indeed be a marvelous tool and solution.

My dream is smaller. I like the 8x10 ground glass and hence do not mind having a large screen like the iPad's. But I also want lens movements. A couple of ideas:

---- small technical camera module first.

What one needs is to start small -- have a (replaceable) module, but with computer-controlled lens movements built in. A lot of mini-Tachiharas and Deardorffs. Just not the flat-plane thing you assume above. A few microlenses over the sensor wells (like the M8/Kodak one, with or without infrared, a b/w sensor, etc.).

The challenge is how to control the perspective easily, and how small the housing for the micro motors can be. The adjustment would be more like touching a few points to indicate the focal plane and letting the system align it for you. (One current company does that for its large format system, as shown in its brochure.)

Modularity means that you may start with a plug-in module for a tight technical camera, but later change to Micro 4/3 ... whatever, so that the lens-to-sensor distance is adjustable for any lens.

The selling point is the big iPad screen: modular, but not much weight. And the iPad with iOS v118.3, or the Android v217.5 LGpad, would be useful on its own.

The screen may need a bit of shade ... I think I might use my dark cloth, but not every potential buyer has one. iOS 118.3 shall by then allow us to use the hardware switch (if there is still one) so that it can reverse everything (or just lock the orientation for you to rotate it yourself so that everything is in reverse -- it helps composition surprisingly, and I am not joking). Just please make the focal-plane adjustment totally automated.

----

For an "8x10" system along the lines of your thinking, I wonder what the weight would be, beyond the heat/power questions.

If it is quite heavy, then a modular system, so that the sensor can fit in a 4x5 or 8x10 camera, might be another way to go. A separate unit would hang on your Chinese copy of an XXX tripod (to save some bucks ...) to provide the battery and processing power, and keep that heat source away from the sensor.

For the video feed, I think this time I'd ask for something not reversed, as the technical movements are now done by me. Maybe a kind of Wi-Fi video link, like the one I got with my LG 3D TV, with a strange LG box that connects to it over Wi-Fi. It seems to work even for 3D.

----

Come to that, how about 3D? In fact, "4D," as 3D + time = video?

----

It is 8 a.m. here. I had a dream last night: my Tachihara was fighting with my Deardorff, trying to marry my old girl the Nikon D300 to a Canon S95 concubine. I told them the girls do not mix very well.


There was a demo earlier this year of a plenoptic lens solution. Is this basically the same as what you are proposing?

http://www.youtube.com/watch?v=jS7usnHmNZ0

Dear folks,

Many questions about swings and tilts and depth of field, so I'm just going to address them generally:

1) Trading swings and tilts for portability is normal in the view camera business. The more portable and compact a field view camera is, the more restricted those movements are. In this case we're talking about an extremely compact device that weighs less than 2 pounds; that might be an entirely acceptable trade-off. Although it may very well not even be necessary (see point 3).

2) Only a very small fraction of view camera photographers use swings and tilts to minimize depth of field; those controls exist primarily because otherwise the depth of field can be unacceptably shallow with large-format cameras. If adequate depth of field exists without those movements, most view camera photographers will be quite happy.

If you're not one of them, well, sorry.

Again, though, see point 3.

3) Computational photography allows for a myriad of software ways to control depth of field ("plenoptic" imaging is just one flavor of computational photography). What's optimal for this camera is, as I said, unclear, but I wouldn't write off the possibility that you might have great control over depth of field regardless of movements.

4) The other reason for swings and tilts is controlling perspective distortion. Quite unnecessary if you have megapixels to burn and the basic tools of Photoshop.

On a related point, a couple of people have asked about the relationship between f-number and depth of field in this camera. Except for my first design (the image stitching), it's a very complicated analysis. The camera would have very high inherent depth of field, though, regardless of the f-number of the lens. Again, subject to point 3.


pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

One of the strengths of view camera work, for me, is viewing and composing with an inverted image. Now how in the world are you going to do that with an iPad?

But seriously, some time ago (maybe two years?) I read that one of the sensor manufacturers had tested or conceived a 4x5 stitched array that was do-able at the time, albeit cost prohibitive. But now, why can't one of the manufacturers make a 4x5 array?

Do the camera modules focus separately?

If so, you could probably implement more or less proper view camera wiggles with some combination of whatever orientation sensor is built in to your iPad and some UI on the touch-screen side. Then you calculate where the desired synthetic plane of focus should be relative to the tablet, focus various modules to approximate that (you get a set of in-focus "tiles" all parallel to the tablet body, but arranged in space to approximate the desired focal plane) and synthesize from there.
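The geometry of that tiling scheme is simple enough to sketch (hypothetical function names; a 1-D slice of the array for brevity): a tilted synthetic focal plane z = z0 + slope·x is approximated by asking each module, at lateral position x, to focus at the plane's depth there, snapped to whatever fixed focus steps the modules actually offer.

```python
def module_focus_distances(xs, z0, slope):
    """Depth of the desired synthetic focal plane z = z0 + slope * x at each
    module's lateral position x; each module's in-focus "tile" should sit
    at that depth."""
    return [z0 + slope * x for x in xs]

def snap_to_steps(distance, steps):
    """Modules may only come in a handful of fixed best-focus distances;
    pick the nearest available one for the requested tile depth."""
    return min(steps, key=lambda s: abs(s - distance))
```

For three modules at x = 0, 1, 2 and a plane z = 1 + 0.5x, the requested tile depths come out 1.0, 1.5, and 2.0; with only 1 m / 2 m / 4 m focus steps available, the 1.4 m request snaps to the 1 m module.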

So to do this you'd probably want some compromise between models 1 and 2, with substantial but not complete overlap between all the camera modules.

You could probably in fact just use a "front standard wiggles only" user-experience model, and use perspective correction in post to synthesize the rear standard wiggles.

Dear Karl,

Personally I always found the inverted image in a view camera to be immensely irritating, but it's the easiest thing in the world to have a software switch that presents you with the image right side up or upside down. Remember, there's no direct connection between the cameras and the screen.

~~~~~~

Dear Bryan & Andrew,

No, the camera modules don't focus or move separately. In fact, they don't focus at all. Each and every one of them is fixed-focus, fixed-pointing. That's what makes them small and inexpensive. A tablet view camera that had independent controls for each camera would be prohibitively expensive and difficult to build with today's technology.

That isn't to say you couldn't have modules set for different best-focus distances in the array. That's one of the many questions that would be investigated while optimizing a design.

~~~~~~

Dear Steve,

Stitching (option 1) doesn't have a big problem with parallax, unless it's extreme. In this design each camera would be imaging only a small part of the field; the parallax shift from adjacent cameras, which would be imaging adjacent parts of the viewing field, would be negligible.

You are correct that it's messier for superresolution reconstruction (option 2), but it's doable. In field reconstructions (as opposed to controlled laboratory conditions), one of the things the software is responsible for is matching up subject features in the images to allow the sub-pixel differential analysis.
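The feature-matching step described here is commonly bootstrapped with phase correlation; a stripped-down version (integer-pixel shifts only, using NumPy — real superresolution code refines the correlation peak to sub-pixel precision) might look like:

```python
import numpy as np

def register_shift(ref, moved):
    """Estimate the translation between two images via phase correlation:
    normalize the cross-power spectrum, and the inverse FFT peaks at the
    shift. Returns (dy, dx) in signed pixels."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    f /= np.abs(f) + 1e-12          # keep only phase information
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap shifts into the signed range (-h/2, h/2], (-w/2, w/2]
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

Rolling a test image by (3, 5) pixels and registering it against the original recovers exactly that shift, which is the kind of differential information the superresolution reconstruction then exploits.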

~~~~~~

Dear James and Jeremy,

Those are not dumb questions at all. I think they've been mostly addressed by my other answers to people.

If you like manipulating the physical camera, as opposed to working with software, then this would very much not be the camera for you.

BTW, not all view cameras have bellows. Or swings, shifts, and tilts. In this case, software and image synthesis obviates the need for those.

~~~~~~

Dear Arg,

No, you really don't need a picture. Visualize an iPad. Alter it by adding tripod socket(s) to the edge, and on the backside put a square array of little lenses peeping out, spaced a few millimeters apart -- hundreds of them.

It's that simple!

~~~~~~

Dear Nicholas,

Oh, that's a really good question. For stitching, yeah, the number of output pixels scales roughly linearly with the number of input pixels. There's a bit of wastage from overlap between tiles, but that's it.

For superresolution imaging, it's more complicated. It doesn't go quite as badly as the square root of the number of cameras. If I had to make a completely wild-ass guess, without sitting down and doing the math, I'd use a scaling factor of the two-thirds power of the number of cameras. In other words, a 100-camera array would provide about 20 cameras' worth of resolution, and a 1,000-camera array would provide 100 cameras' worth of resolution. I'm envisioning a few hundred cameras as being the feasible and affordable number, but who knows?
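The two guesses are easy to put side by side (the 3 MP per camera and 10% overlap figures below are my own placeholders, not Ctein's):

```python
def stitched_resolution(n_cameras, px_per_camera, overlap=0.10):
    """Stitching (option 1): output pixels scale linearly with the number
    of cameras, minus a little wastage where adjacent tiles overlap."""
    return n_cameras * px_per_camera * (1.0 - overlap)

def superres_equivalent(n_cameras):
    """Superresolution (option 2): the wild-guess two-thirds-power law --
    N cameras yield roughly N ** (2/3) cameras' worth of resolution."""
    return n_cameras ** (2.0 / 3.0)
```

This reproduces the figures above: 100 cameras give a bit over 20 cameras' worth of resolution, and 1,000 give 100; meanwhile 100 stitched 3 MP tiles with 10% overlap land around 270 MP of output.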

For computational photography, it totally depends on what you're massaging the data to enhance. I wouldn't even hazard to guess.


pax \ Ctein

Dear folks,

Here's a question that I don't know the answer to, and I'm hoping there's someone working in the industry who's reading this who does. What do integrated camera modules cost when you are only buying them in modest quantities (tens or hundreds of thousands versus millions to tens of millions)?

Also, what kind of setup costs get charged on custom designs for such modules?

For the sake of conservatism, I started off assuming off-the-shelf modules in this design, under the assumption that anything else would be prohibitively expensive. So no nonstandard optics, no nonstandard sensor arrays, etc. I don't actually know if that's true.

And, for all I know, even off-the-shelf modules are prohibitively expensive when you're not buying a zillion of them at a time. While I'm sure this design is technically buildable, I have no idea if it's economically buildable even though my hunch is that it is.

Anybody out there able to offer any help? Thanks!


pax \ Ctein

Seems like a bad trade-off for film and a real view camera. Hell, even a Toyo 45A with somewhat limited movements but TXP or TMY-2 loaded is more inspiring.

@ctein Controlling the focal plane with minimum depth of field is a key feature, and some sort of perspective control is essential. On a view camera these are done in real time (though at a fixed spot). I'm not sure a high-resolution incoming signal with huge depth of field can really be blurred and adjusted in real time to achieve this; if not, the viewing platform, i.e. the iPad, is not good enough. The advantage of the iPad VC is that it is multi-spot, and if the "large EVF" doesn't wash out or lag, it might even catch movement. But multi-spot cannot help if the real-time feed is not there.

@Karl, the iPad has an orientation lock. Hence, if the camera program can detect it, it is very easy to fix the orientation issue and present a reversed image.

Dear WeeDram,

Have you ever heard the joke,

"How many art directors does it take to change a lightbulb?"

"Ummm, does it HAVE to be a lightbulb?"

That's how I feel whenever I write about digital cameras and someone tells me how much better it would be to use film.

You wanna use film, be my guest. I don't. I am not the least bit 'inspired' by a film view camera.

pax / Ctein

@ Ctein,

thinking more about this, I think the first stumbling block will be the sensors.

You asked about cost - my company did have some dealings with a large vendor (we put together a tech demonstrator for discreet wide area surveillance using a network of the little sensors hidden in bricks etc and wi-fi'd into a central processor / viewing station. It was a couple of years ago so costs are not right up to date).

Volume sensors from the then-current generation could be had for about £20 each from 5,000 units upwards, and perhaps £30 each for 1,000 sensors. That was true for the 8 MPix generation. In addition, they weren't that small -- the lens/sensor assembly was about the size of a sugar lump on the engineer's bench. Not an issue for our purposes, as the lens was flat and we were burying the depth, but it could be for your iPad.

Now, you can buy 5 MPix sensors for less than $28-33 apiece (min. order 500 sensors). Have a look at the specs at this link: http://uk.alibaba.com/product/399479856-5-Megapixels-CMOS-Sensor-12MP-Max.html That's for a hidden camera with all sorts of stuff you don't want or need, so you could probably get the sensor/lens assembly cheaper.

You could ease down the MPix scale - you mention kilopixels per sensor. I suppose you could set each sensor to a VGA setting, but I think you're then trading resolution off very quickly.

I suspect that for your image processing purposes you will need RAW data, not JPG. Unless you want to write your own RAW code, you will have to find one of the higher end sensors that can deliver RAW. I don't recall RAW being a feature of small sensors before 2 or 3 years ago, but I could be wrong.

Hope this is helpful, even if the data is a couple of years old and resurrected from some notes I took at the time. BTW, I'm not an engineer, and this wasn't my project - I was part of an internal red team doing due diligence on the feasibility before the company spent too much money on the project.

A view camera without tilt/shift would be like chocolate without cacao. Sorry, it doesn't work for me. Now, make a sensor the size of the back of the iPad, so you can slip it into the back of a view camera like a film holder, and keep it under $5,000.00, and I'll line up for one.

If I had to use a digital view camera I'd probably stick with the Silvestri Bi Cam, and a tethered monitor. Won't take too long for the live view image to get more usable, then we'll be cooking with gas.

Oh, and there's still film. Portra 400 is pretty dope!

Ctein, I believe you have a genuinely genius idea on your hands. Perhaps more (Moore) than you realise.

On the issue of cost however, I may have a left field solution. Give it mass market appeal.

I believe the insistence on an iPad is a distraction. Indeed, an iPad should be one control and review device, but why not provide apps on multiple devices which could operate remotely, making the system entirely modular?

The iPad (or iPhone or Android or other device) would merely control the camera and achieve framing and focus by means of a relatively low-res version of the final image. Smaller devices like iPhones which have less computational power would only need a lower res image anyway. Zooming in to achieve focus is still entirely feasible.

Then, when you return home with your "raw" images, you can upload them to a real computer with a serious processor and hey presto: genuinely large, high-quality images with huge development latitude including DOF, DR, etc. -- and all from a device you can slip in a shirt pocket next to your iPhone.

If you want to integrate camera and device, you merely need to provide a dock for various devices (orderable separately) which attaches to the back of the camera.

I could easily imagine the camera module being available in various formats, square, 4X3 etc, and various resolutions (12 - 120 cameras). The smaller ones would still produce amazing quality and be entirely pocketable.

So, there you have it, a totally shirt-pocket friendly camera solution that can produce MF or LF quality images for moderate cost.

We have also, neatly, pushed photography back into the realms of Moore's Law. Dependent only on the computational power available, you could build "compound eye" cameras using thousands or millions of micro modules in many configurations (even spherical).

There you go. Now you have a multi-billion $$ idea, I will happily take 10% of your 1%. :)

Cheers!
Steve

What do integrated camera modules cost when you are only buying them in modest quantities (tens or hundreds of thousands versus millions to tens of millions)?

Quantity 1: $10; 100 @ $8.

http://www.sparkfun.com/products/8668

This story "Scientists develop flexible sensor to allow simple zoom" is another approach

It might be a really interesting exercise to think about how one might replicate movements with Method 2 or 3; if the cameras in the array were each set to focus to a different distance, there might be enough data there to do the job really well. The limitation might be the nigh-infinite depth of field that comes with using sensors smaller than the cross-section of a grain of rice.

"Ummm, does it HAVE to be a lightbulb?" Heh.

While I like the iPad idea, I'm more interested in what a person could do with something smaller. Could you aim for the quality of 4x5 chrome film, or better, in a device with a 4x5 screen -- small enough for a large pocket? Or maybe the quality of a 5DMkII in something the size of a thick iPhone. Built-in GPS and mapping, of course, and it wouldn't hurt if it could browse the web, play music while you shoot, and even make phone calls.

OK, maybe I'm getting carried away, but if you could give me the image quality I used to get from my 4x5 Deardorff in a package to fit a jacket pocket or belt pack I'd be really interested. Doesn't bother me at all if the focus, perspective and even maybe zoom are digital instead of mechanical -- so long as it puts a quality print on the wall.

As an aside, maybe if you had said "large format" rather than "view camera" you would have headed off some of the objections. As to film and the rest, I have no real desire to go back. While I never expect to see a digital camera that is as much fun to operate or feels as good as the Deardorff or my M3, I cannot imagine I would ever use either of them again. Looking back, in those days I was too much a camera operator and not enough a photographer. Today I'm much less concerned about cameras and spending more time thinking about pictures.

I wonder if a variation of this would work for one of my ideas.

A 4x5 digital back built in the form factor of a 4x5 Grafmatic back.

I have a Graflex reflex that I would really like to bring into the digital age.

-Hudson

"Now make a sensor the size of the back of the iPad so you can slip into the back of a view camera like a film holder and keep it under $5000.00"

A sensor that size, given current technology, would likely run into the millions of dollars. So it's going to be many years before this would be possible in a consumer device (as opposed to one-off or limited run government or scientific devices).

Still, the giant sensor Canon built is really cool, if impractical: http://www.dpreview.com/news/1008/10083101canonlargestsensor.asp

Here's a link to an interesting announcement -- http://www.dpreview.com/news/1101/11011915curvilinearcamera.asp -- scientists have developed a small flexible sensor that allows a simple lens to be focused and, when combined with an adjustable fluid-filled lens, to zoom...

A spread of these would give you zoom and focus.

This one from the Samsung E700 is $20 for 640 x 480:
http://www.sparkfun.com/products/637

There are more at the SparkFun site.

Dear James,

Thanks for that info.

The cameras I'm thinking of are much smaller and physically simpler -- they're of the same ilk as are used in cell phones, iPods, and laptop lids. They're only a few mm on a side -- a sugar cube wouldn't fit.

I know they're under $10 in volume; the question is how cheap are they, and in what quantities. At $10 each, this device doesn't fly, cost-wise.

Any help?

pax / Ctein

Dear view camera users,

Invoking a small amount of topic drift (very small), I'd like to ask a question:

Assuming you need swings and tilts for more than merely getting large depth of field, is there something in physical swings and tilts that software-based equivalents wouldn't give you?

(take it as a given that the camera software can do "swings" and "tilts", analogous to Photoshop doing perspective distortion correction)

I understand that for a few of you, you just like the physical experience of handling a view camera. But then, this wouldn't be the camera for you, any more than a DSLR would.

I'm not asking this as a marketing question, but as an engineering one.

pax/ Ctein

Hi Ctein,

Marc Levoy has been researching computational photography for years now at Stanford. His group even modified a digital medium format camera a couple of years ago to do computational photography by adding a super-size microlens array in front of the high-resolution sensor, such that each microlens covers about a few thousand pixels. With a couple of thousand super-size microlenses, this setup simulates a couple of thousand small, slightly shifted low-resolution sensors, just like you proposed. They showed that you can slightly shift your viewpoint and refocus after the exposure with this setup. Here is the link: http://graphics.stanford.edu/papers/lfcamera/

Additionally, they've recently created an iPhone app doing something similar, by taking a video while the user carefully moves their iPhone and using individual frames from this video as images from a virtual camera array: http://sites.google.com/site/marclevoy/

So, there is already an app for that! :)

Cheers,
Engin.

@ Ctein

Looks like Hugh Crawford has hit the mark with his $8 integrated modules.

Still got the RAW issue, though, I think. Also a physical-dimensions issue: the "base" of the sensor is broader than the lens, so overall optical density is compromised (at a guess, and without measuring, by about 75%). Re-engineering that to give a consistent diameter from back to front would result in a higher unit cost, although you could improve the quality of the lens while you're at it.

I like Steve Jacobs' ideas about not making the device OS-specific, but you'd probably need to start with one platform and then roll out to others.

It may be worth thinking about making the camera as a clip on to an iPad: as an example look at http://www.wired.com/gadgetlab/2010/11/aluminum-shell-hides-ipad-keyboard/ The overall engineering would be easier, as well as not having to deal with Apple direct. The data presumably would go from sensor-bank to iPad over the dock connector with a short cable. You could use the additional real estate in a clamshell for battery power.

Betterlight has made scanning digital backs that go into 4x5 cameras for a while now. I don't think they are full frame, and they are line sensors so they scan the frame like a flatbed scanner instead of taking the whole shot at once.

I have to say that I had a sillier idea in my head when I made the original comment ... I was picturing a view camera body with two iPads where the lens board and camera back go. This is of course not a very useful picture. :)

Anyway, neat idea.

Dear Hugh,

Thanks very much! That's the kind of device I'm talking about, although this one is a couple of generations old and a little bigger and not fab-line manufactured. But it will give people a sense of what these devices are like.

Something folks may not realize unless they've looked at the specs is that the pixels in these low-priced cameras are pleasantly large -- in this case, 3.5 µ. The reason is that it costs more to make smaller pixels.

Anyway, from this I can tell that the OEM price in modest quantities would be one to two bucks. That's what I was hoping for. That makes the selling price of the tablet view camera under $3000. Maybe under $2500.

If some of the readers are wondering why the price would be that high, given what these cameras cost and what an iPad costs, it's for two reasons. First, there is about a fivefold magnification in price between component costs and end-product street price. So, if there are $200 worth of cameras in this gadget, it adds $1000 to the selling price.

Second, one has to amortize the product development and start-up costs, which are fixed regardless of how many units are sold. A digital view camera is never going to be a large seller; it is simply not something that most people need or want. 1000 units? Sure. 10,000? Not obvious. 100,000? Forget it, it would never happen at any plausible price. Upfront costs on producing something like this are, if the gods smile upon you, in the low millions. So, figure another thousand dollars to pay off getting this puppy off the ground.

And that's how you end up with a $2500-$3000 price tag.
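The arithmetic behind that estimate fits in a couple of lines (the fivefold markup and the round development figures are from the paragraphs above; the function name and exact inputs are my own illustrative choices):

```python
def street_price(component_cost, dev_cost, expected_units, markup=5.0):
    """Back-of-envelope pricing: component cost sees roughly a fivefold
    magnification on the way to street price, and fixed development /
    start-up costs are amortized across the whole production run."""
    return component_cost * markup + dev_cost / expected_units
```

With $200 of camera modules and $1M of up-front costs spread over a 1,000-unit run, the camera array alone contributes $2,000 to the tag -- before the rest of the tablet hardware is counted, which is how the total lands in $2,500-$3,000 territory.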

This, not so incidentally, is why specialty cameras are inherently expensive. They do not simply spring, full-blown, from the brow of Zeus.


pax \ Ctein

Dear Steve,

Modular, multipurpose camera systems never prove to be as inexpensive as people imagine. In fact, they are almost always more expensive than “single-purpose” ones. There are a bunch of reasons for that, which would make up a whole column in itself. But what it boils down to is that I cannot imagine what you're desiring happening with an attractive price point at any time in the foreseeable future.

~~~~~~

Dear Gato,

With currently available technology, I don't think I can make the physical, electronic, and power requirements fit in a 4 x 5 camera. You'll just have to live with an 8 x 10 view camera that is only half an inch thick and weighs under 2 pounds.

I know, sucks, doesn't it [vbg].

Give it 5-10 years.

~~~~~~

Dear Hudson,

I can't think of any way to do this that would make you happy, because the only ways I can come up with require me to substantially modify your Graflex to make it work with a multi-camera array in ways that would make it useless for regular photography.

~~~~~~

Dear Engin,

Okay, that is so COOL! Marc is one clever guy. I thought I was pretty up on the tricks one can do with computational photography, but this is one that never occurred to me. Assuming the next-generation iPad has a camera built in (yeah, yeah, I know the rumors… and I also know Apple's penchant for misdirection), I am so going to write a column about this.

Startling examples; everybody should look at these: http://sites.google.com/site/marclevoy/examples

~~~~~~

Dear James,

The sensors don't deliver JPEG versus RAW, it's the electronic processing circuitry that gets the data from the sensors that outputs it as JPEG instead of RAW. Although in this case, I'm not sure JPEG wouldn't work fine. More optimization questions.

Anyway, not an issue inherent to integrated camera modules, just relates to the way they're commonly used today.

Fill factor is poor, which is why you throw more cameras at the problem. Off-axis performance is lousy with older generation cameras; not with the current crop, which have much larger lenses of much better quality when you do fab-line/wafer-level module integration.

In any case, all such stuff can be dealt with by throwing computing power at many, many pixels. You don't need to reengineer the cameras.

There is no platform or OS issue involved. All I said was that I like the iPad form and design, so I'm imagining building this into an iPad case. Don't read anything more into that than what I said. In fact, I cannot imagine anyone building this not producing both Mac and Windows versions of the on-computer (as opposed to in-camera) software.

The in-camera software doesn't even have an OS flavor, as far as you're concerned. No more than the software in your Nikon does. It's special purpose, dedicated, running with custom electronics for maximum speed/power performance.

Most assuredly, this would not be an Apple product; this is not something that would interest them. Not that I would have a problem doing this with Apple… If I wanted to do it… And they wanted to do it. I have friends there in VERY high places.


pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

After taking a look around, this looks pretty interesting:
CMOS, 2592x1944 pixels, 48-pin surface mount; quantity one: sixteen bucks and change.
http://app.arrownac.com/aws/pg_webc?s=P

Prototype fully built camera: 80 bucks.

Development kit and board with support chips, source code, etc.: less than $400.
https://www.leopardimaging.com/5M_HD_Camera_Board.html
Uses TI DM365 support chips; lots of Linux stuff for it. I'd look at what Leopard is offering.

@Ctein

I was kidding about the iPad auto rotation... even the barefoot and technologically challenged, such as myself, know that you can pry the darned thing open and install an off switch from Radio Shack. Might have to drill a hole for the toggle, but it's nothing I haven't done before.

Ctein

Some very interesting ideas - they nicely pique the conceptual engineering parts of my brain.

On your question about the use of tilt and shift - I mainly use them in two ways: to control the plane of focus (subtly different to depth of field) and to adjust viewpoint (not quite the same as perspective control: think of it like sitting on the floor but trying to get the view from standing on a ladder - useful in the field in windy conditions). Large DoF and software would be an acceptable solution to the first, but only a rise/fall can solve the second.

On the integration - I'd much prefer a separation of the camera module and the control device. WiFi/Bluetooth them and have remote control of a tripod-mounted camera. That could easily extend to control devices as small as a phone (used mainly for broad composition) up to a tablet (for fine detail). Then I could decide when to trade off portability and control against image quality.

@Ctein

Assume you have infinite depth of field in the image, and the processing power to sub-select any focal plane and do perspective changes within the image; basically, a lot of the view camera's advantages would be fulfilled by an iPad view screen. The only thing I can think of is that sometimes I may be limited by the position I'm in, and have to adjust the arrangement so that I can reach a view.

The most extreme examples are shifting to take a photo of a mirror, or when you cannot climb further up to take a photo of something remote because of a little barrier in front of you.

These are minor problems. Even the EVF issue is relatively minor.

On the idea of its being a film back, I agree with the other suggestion that it may not need to be real-time; one could use a shoot/develop/print concept instead. Old style, but if that gives us another, less costly choice, I do not think large-format guys like me who are still here will object.

We may not even need to use the Fuji 4x3 to check a bit (the 4x5 is gone, unfortunately). Given the gigabytes of data but a fixed structure, one could sample here and there to give you a JPEG to check using an Eye-Fi card.

Just take the photo, "Polaroid"-develop, and print!


Two entry-level digital cameras look at a mirror at a 45-degree angle, viewing the image plane projected through the lens of a view camera. The mirror reflects about 55% of the frame (divided in half vertically or horizontally, as appropriate, with a little overlap for stitching). Then, after taking the exposure, the two cameras and the mirror slide over and take another exposure of the other half frame plus 5%. Stitch the four images together and you've got 40+ megapixels for the cost of two entry-level cameras and some linear slides and mirror hardware (assuming you've already got a donor view camera). You'd have to compose/preview on a ground glass, then slide the camera module into place and take your two image pairs. This would allow higher-DR shots with bracketing than you can get with current large sensors. Kind of blows out the iPad form factor goal, though... What the heck has this got to do with an iPad view camera, you ask? I don't know, but your discussion of multi-camera imaging got me thinking... My idea saves the tilt/shift guys, though. ;)

Ctein: I don't make any judgement about "better" when it comes to film vs digital -- or any other technology used to make "art" or pursue a vocation or avocation. So, that wasn't my point for anyone other than me.

The black box with accordion, shutter and big pieces of glass attached actually DOES inspire me; I guess I'm just very tactile and tuned into the tool in my hands.

So, enjoy the LF iPad -- whenever it's a reality. Until then I'll make some actual photos while someone slaves in fab for a few months/years *smirk*. I'm 62, so want to make use of whatever years I have to make pictures.

Here's what you need for this project - the "eyeball camera" - http://www.technologyreview.com/computing/27105/?nlid=4019

Dear Martin and Dennis,

Shifts/rises and falls? Oh, that's EASY. You're just scrolling (and cropping to format) within a larger field of coverage by the lens(es). That takes so little processing power it could be displayed in real time, in-camera.
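The "just scrolling" point is easy to demonstrate in miniature. A toy NumPy sketch, with made-up frame sizes; the function name and offsets are mine, not any real camera firmware:

```python
import numpy as np

# A software shift/rise-fall: the camera array captures a frame larger than
# the output format, and a "movement" just repositions the crop window.
# Sizes and names here are illustrative.

def shifted_crop(frame, out_h, out_w, rise=0, shift=0):
    """Crop an out_h x out_w window from an oversized frame, offset from
    center by rise (up) and shift (right) pixels."""
    h, w = frame.shape[:2]
    top = (h - out_h) // 2 - rise
    left = (w - out_w) // 2 + shift
    if not (0 <= top <= h - out_h and 0 <= left <= w - out_w):
        raise ValueError("movement exceeds the available field of coverage")
    return frame[top:top + out_h, left:left + out_w]

big = np.arange(100 * 120).reshape(100, 120)     # oversized stitched capture
view = shifted_crop(big, 60, 80, rise=10, shift=-5)
print(view.shape)   # (60, 80)
```

Since this is pure array slicing, it really is cheap enough to run against the live "ground glass" feed.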

I am perhaps not understanding something--view cameras were never especially my thing: what is the purpose of "controlling the plane of focus" other than to produce a sharp-looking photograph over a greater (or, more rarely, lesser) range of subject distances?

If that's all it's for, it's functionally equivalent to depth of field. If not, please educate me.

Thanks!

pax / Ctein

Dear WeeDram,

Hey, I'm with you. I made that decision 30 years ago!

Sure didn't make me rich... but I've had a lot more fun along the way. No regrets.

pax / Ctein

Shifts/rises: sure, panning works, assuming you build the device with far wider lens coverage than capture. But then surely you're throwing away pixels?

Plane of focus: take your own portfolio image of the Transamerica Pyramid. Strictly, the depth of field is very thin, but on an inclined plane; tilt gives the ability to get it all in focus at a relatively large aperture. One can work at the system resolution limit. For the current camera paradigm, that's an important distinction.

As I said, though, that is definitely computable if enough DoF and super-resolution are provided. Same end result by different means. For your new direction, it becomes semantics.

Dear Martin,

You're right; doing this in camera requires a larger field of coverage, just as it does in a traditional view camera. Pixels are cheap in this thing; we've got plenty to spare! If you end up throwing away a few tens of millions in return for features that make the camera work better for a photographer, it's probably a good trade off.

BTW, although my tablet view camera could directly address the shift/rise-fall adjustment, fixing that in Photoshop is an old, standard fix. That's what the perspective-control tool is for: to remove keystoning in photographs where you had to tilt the camera to get the field of view you wanted because you didn't have a shift/tilt lens or body available.
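Under the hood, keystone removal is a projective (homography) warp, and four point correspondences pin it down completely. A minimal direct-linear-transform sketch of that math; this is my own toy solver, not Photoshop's actual tool, and real software would use a robust library warp:

```python
import numpy as np

# Keystone removal as a homography: four point pairs fully determine the
# 3x3 projective matrix. Toy direct-linear-transform (DLT) solver.

def homography(src, dst):
    """Solve for the 3x3 matrix H with dst ~ H @ src from 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)          # null vector = flattened H

def apply_h(H, pt):
    """Map one (x, y) point through the homography."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# A keystoned facade (trapezoid) pulled back to a rectangle:
trapezoid = [(10, 0), (90, 0), (70, 100), (30, 100)]
rectangle = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = homography(trapezoid, rectangle)
corner = apply_h(H, (30, 100))           # lands at the (0, 100) corner
```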

Personally, I like building it into the camera better, because it's more photographer-transparent. The thing then behaves even more like a traditional view camera. But it's not strictly necessary.

Again, I'll leave the cost-benefit analysis and optimization to whoever wants to build the damn thing.

Regarding plane of focus, we are on the same page. My way of thinking of it is that swings and tilts are just a hardware hack that lets you get around the depth of field restrictions set by geometric optics. I'm merely substituting a software hack for the hardware hack. Should get you to the same place.
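The software-for-hardware substitution can be sketched concretely, under one big assumption: that you have (or can compute) frames focused at several distances. Then a "tilt" is just picking, row by row, the slice nearest an inclined focal plane. All names, sizes, and distances below are made up for illustration:

```python
import numpy as np

# Software "tilt": given frames focused at different distances (a focus
# stack), select per-row the slice closest to a focal plane that is
# *inclined*, not parallel to the sensor. Toy numbers throughout.

def tilted_plane_focus(stack, focus_dists, base_dist, tilt_per_row):
    """stack: (n, h, w) focus slices; focus_dists: (n,) distance each slice
    is focused at. The target plane distance varies linearly down the frame."""
    n, h, w = stack.shape
    rows = np.arange(h)
    plane = base_dist + tilt_per_row * rows                       # (h,)
    pick = np.abs(focus_dists[:, None] - plane[None, :]).argmin(axis=0)
    return stack[pick, rows, :]          # best-focused slice, row by row

stack = np.random.default_rng(1).random((5, 4, 6))  # 5 slices, 4x6 pixels
dists = np.array([1.0, 2.0, 3.0, 4.0, 5.0])         # metres, per slice
img = tilted_plane_focus(stack, dists, base_dist=1.0, tilt_per_row=1.0)
print(img.shape)   # (4, 6)
```

This is the Scheimpflug result by selection rather than by swinging glass; a real implementation would blend between slices instead of hard-picking one.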


pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

Ctein, you said...

"Dear Steve,

Modular, multipurpose camera systems never prove to be as inexpensive as people imagine. In fact, they are almost always more expensive than “single-purpose” ones. There are a bunch of reasons for that, which would make up a whole column in itself. But what it boils down to is that I cannot imagine what you're desiring happening with an attractive price point at any time in the foreseeable future."

However, what I was proposing was not really a modular camera in the conventional sense, nor a multipurpose one. I was proposing a fully functional but compact LF device controlled via a standard API by a smartphone or tablet that can support the required app, of which there are now many.

In what way would the development costs be significantly higher than for a similar device built into a modified iPad, especially if you were not relying on the control device to develop the final image? The only major difference would be flexibility and convenience. If bandwidth is an issue, a cabled connection should be fine.

Cheers
Steve

Dear Martin et al.,

A P.S. ...

Just to be clear, I don't know that there's any single Photoshop tool that exactly replicates any view camera movement. It's not something I've had reason to care about. I am sure that a combination of no more than three of the transforms in Photoshop will replicate any view camera movement.

"Perspective"'s a key software tool, of course, because if you can't remove keystoning, yer screwed without a shift/rise-fall. But I am not saying it's necessarily the whole answer.

IOW, Photoshop isn't the final answer-- it's the proof of principle.

pax / Ctein

Hi Ctein,

Re software perspective control (Photoshop etc.): surely the problem is that you probably have to throw away a lot of pixels, leaving lots of free space around your final intended image? With shifts/tilts you can make sure you maximise the image area.

Do you think maximising DOF with small apertures or whatever is the equivalent of changing the plane of sharp focus? Taking a photo of a long building facade at 45 degrees (e.g.) is one classic use of altering the plane of sharp focus where relying on deep DOF won't give the same quality.

I'm wondering what definition you used for "view camera" when floating this idea, if you don't seem to put much emphasis on camera movements? A "view camera" without movements is surely just an SLR without a pentaprism.

Dear Richard,

"Surely the problem is that you probably have to throw away a lot of pixels..."

We have megapixels to burn in this design. Bigger field of view just means adding a few dozen more camera modules.

BTW, you essentially "throw away pixels" when you buy a lens with extended coverage to accommodate movements for a conventional view camera. That extra coverage comes at the price of other aspects of lens performance and image quality.

"Do you think maximising DOF... where relying on deep DOF won't give the same quality."

Wrong, in this case it does. You don't understand how this works.

"I'm wondering what definition you used for "view camera" ..."

I'm using the normal one. For one thing, the definition of a "view camera" has nothing to do with whether or not it has movements. Many view camera designs don't. Their flexibility and utility are more limited--they're still view cameras.

Besides, this camera DOES have a full complement of movements--they're software, not hardware, that's all.

pax / Ctein

How can you throw away pixels that never made it to the image?

>>>>> "Do you think maximising DOF... where relying on deep DOF won't give the same quality."

Wrong, in this case it does. You don't understand how this works.

<<<<<<<

Hmm - this is what I meant

http://en.wikipedia.org/wiki/Scheimpflug_principle

Of course, I'm not going to suggest you don't understand how it works.

Dear Richard,

Really, I do understand all this. But you're not understanding how this device works. All your questions were addressed by previous answers.

pax / Ctein

It is just the same principle used for getting a larger mirror diameter with several small mirrors in a telescope. It worked there, and it will work here if you have enough computational power.

Dear Nestor,

Exactly-- it's all in having enough bit-crunching.

I think some readers didn't get the import of my sentence, "Three approaches use off-the-shelf stuff available today." There's no new science, math or engineering in this gadget-- it's merely a novel combination of known gadgetry and algorithms.

Part of the conceptual breakthrough for me was realizing that multiple small cameras would get me the right form factor without compromising the image quality. The other was realizing that all the processing didn't have to be done in the camera-- just enough to drive the "ground glass" on the back.

A good desktop GPU delivers a teraflop these days--no shortage of processing power, so long as the work is partitionable into GPU-friendly chunks. Stitching and computational photography are.
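The GPU-friendliness is easy to see in miniature: combining N registered frames is a per-pixel reduction, so every output pixel can be computed independently. A toy sketch on synthetic data (not a real pipeline), which also shows the roughly sqrt(N) noise benefit of stacking:

```python
import numpy as np

# Multi-frame combination as a per-pixel reduction: each output pixel
# depends only on the same pixel in every input frame, so the work splits
# into independent, GPU-friendly chunks. Averaging N frames also cuts
# noise by about sqrt(N). Synthetic data throughout.

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                        # "true" scene
frames = scene + rng.normal(0, 0.1, (16, 64, 64))   # 16 noisy captures

stacked = frames.mean(axis=0)    # the entire combining step

gain = np.std(frames[0] - scene) / np.std(stacked - scene)
print(round(gain, 1))            # roughly 4, i.e. sqrt(16)
```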

pax / Ctein

This would make the end product akin to what insects see with their eyes.

And with minor adjustments, you could easily pull off 3D pictures with it at the same time.

Nifty, but another way of doing this would be to have a shared image server with GPS and orientation information.

Using the "put low-res images together" math, every GPS location and orientation will have pictures matching it, making the picture more and more detailed as time goes on.

Of course, it won't work so well outdoors with changing weather and exposure, but if you add algorithms to only combine similarly exposed pictures, you could probably get something...

Dear Harold,

I think Seadragon ( http://theonlinephotographer.typepad.com/the_online_photographer/2007/06/photography_in__1.html ) does some of what you're talking about.

Computational photography changes the fundamental nature of the game-- hardware, software and aesthetics-- in ways that are profound. It's the biggest change to happen to the field since the invention of geometric optics and the laws of optical design.

50 years from now, photography is going to be a very alien landscape.

pax / Ctein

Thom Hogan (Jan. 24):

Ctein proposed recently using an array concept rather than pushing up a single sensor pixel count. That's exactly the right path to increase image quality, IMHO. We've been trying to brute force image quality by making better photosites, but there are alternative strategies that actually take us further, faster.

Maybe a wireless lens would be the ticket:
http://www.artefactgroup.com/wvil/
