Here’s an awesome idea for the camera industry. Like most of my seemingly awesome ideas, someone else has
probably already thought of it (UPDATE: someone has already thought of it, sort of… skip to the bottom of this post). But just in case it’s at all novel, I offer it up for public review:
Yesterday I saw a little kid yawning, and I thought it would have made a really cute photo if only I had a camera in my hand, ready to shoot. Of course, I had a camera in my hand: my iPhone. The moment, however, went by just too fast: I noticed it, but could not photograph it.
Being able to take a photograph spontaneously, with as much ease as pointing your finger or blinking your eyes, would change the world of photography even more than the profound effect digital photography has already had. It would make available the ephemeral and fleeting moments of beauty and inspiration in a way that current photographic technology still cannot deliver.
Today’s cameras, however, still put too many obstacles between you and this goal. The camera has to be in your hand with the lens cap removed, and if it’s electronic, it has to be powered up. Then you have to adjust the exposure and focus on your subject.
Most of these obstacles can be overcome: small, cheap, and fast cameras are already here. A camera mounted in a pair of glasses or on a fingertip is easy to imagine. And you can adjust exposure, to a limited extent, after the fact in Photoshop. It’s the focus part that seems the biggest barrier: Choosing the subject in the frame and then adjusting a mechanical lens array to focus on that object takes both time and human intelligence. Automating this would seem impossible.
But I think I’ve figured it out.
Multifocus: Fix it in post!
Instead of taking one photograph when you click the shutter, my camera would shoot 50 photos as fast as possible. Each photograph would have a slightly different focus setting, focusing on different points in space. Cameras are pretty damn fast these days, and getting faster, so taking 50 good photos in a fraction of a second seems reasonable.
Some of the 50 photos will focus on nothing, and will be useless. But among the rest there would almost certainly be one image that is nicely focused on exactly what you wanted to shoot.
The idea is that we use brute force (that is, speed) to capture a variety of photos, then we pick the one we like best. Basically what photographers have been doing for years with motor drives, but ridiculously faster.
The key to this concept is the post-production software. You could just view 50 photos, but I picture it being more interesting than that. The interface for choosing the photo could feel like taking a photo, where you look upon a scene and move a slider to change your focus on the scene. I imagine an interface like the one Harrison Ford used in Blade Runner to investigate the space in a crime-scene photo, but instead of exploring a 3D space, it permits the viewer to explore the image-space by moving the point of focus.
Many years ago I made a Flash experiment showing how a focus effect might work. You can try it here. If you play with the demo, you can imagine the UI for my multifocus selector tool, choosing the best-focused image from the 50 images originally captured by the camera.
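The “pick the best of 50” step could even be automated with a standard focus measure: the variance of the image Laplacian is high when a frame has crisp, in-focus detail and low when it’s blurry. Here’s a minimal sketch in Python with NumPy of how the selector tool might rank the burst (the function names and frame setup are my own illustration, not any real camera API):

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Variance of a discrete Laplacian: higher means more in-focus detail."""
    lap = (
        -4 * image[1:-1, 1:-1]
        + image[:-2, 1:-1] + image[2:, 1:-1]
        + image[1:-1, :-2] + image[1:-1, 2:]
    )
    return float(lap.var())

def rank_by_focus(frames):
    """Return frame indices sorted from sharpest to blurriest."""
    scores = [sharpness(f) for f in frames]
    return sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)
```

A UI like the Flash demo could then default to the top-ranked frame while still letting you drag the slider through the other 49.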
If the system was fast enough (say, fast enough to take 200 photos in a second) the lenses could also take each photo at several zoom levels or exposure settings, too. So you point, snap, and then do all of the zoom, focus, and exposure work later, almost as if you were freezing and capturing time itself.
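That 200-photo budget is just the Cartesian product of a few setting ranges. A toy sketch (every value here is invented for illustration, not from any real camera spec):

```python
from itertools import product

# Hypothetical setting ranges: 10 focus distances x 4 zoom levels x 5 exposures
focus_steps = [round(0.3 * 1.5 ** i, 2) for i in range(10)]  # meters
zoom_levels = [1.0, 1.5, 2.0, 3.0]
exposures_ev = [-2, -1, 0, 1, 2]

shot_plan = list(product(focus_steps, zoom_levels, exposures_ev))
print(len(shot_plan))  # 200 frames, the one-second budget mentioned above
```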
This idea isn’t so far fetched. It’s influenced by a bunch of other ideas along similar lines:
- Bullet-time camera: Popularized in The Matrix, the “bullet-time” effect is achieved by a brute-force technique of taking dozens of photos at the same time from many different angles. Cool example here.
- Page scanner concept: The idea behind this concept is that instead of slowly photographing the pages of a book one page at a time from a fully-flat perspective, a machine could scan the book’s pages a hundred times faster by simply photographing them at an angle as they are quickly flipping by, adjusting the image later to appear flat.
- iPhone’s “always on” camera: Lonelysandwich’s Adam Lisagor recently speculated and tested that the iPhone is able to take photos really quickly because it doesn’t wait for you to click the shutter to record the image in memory. It just takes photos constantly and then keeps the one it already took at the time you click the shutter.
- Focus Stacking: In microscopic photography, where the depth of field is minuscule and getting an image of an entire tiny object is difficult, a technique called focus stacking lets the photographer take many photos of the same object at different focus distances, then combine them all into a single composite image where everything is in focus. Check out this cool focus stacking animation.
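The core of focus stacking is simple to sketch: for each pixel, keep the value from whichever frame shows the most local detail there. A minimal NumPy version, using the absolute discrete Laplacian as a crude per-pixel focus measure (a real stacker would align frames and blend more carefully):

```python
import numpy as np

def focus_stack(frames):
    """Per-pixel composite: keep each pixel from the frame where it
    shows the most local detail (absolute discrete Laplacian)."""
    stack = np.stack(frames)  # shape (n_frames, height, width)
    detail = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = detail.argmax(axis=0)  # index of the sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```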
Most of the conceptual and technological pieces are there. The remaining issue would seem to be the lens itself: how do you move a physical lens array quickly? But given the size of cameras these days, it seems we’d only need to move the lens a few millimeters to cover all 50 focus settings.
Now, someone please tell me this already exists.
UPDATE: Okay, it already exists: the plenoptic camera, or light-field camera, which uses an array of tiny lenses to take multiple photos at different focus points. Different concept (mine relies on a single moving lens), same result. Either way, I hope someone figures out a way to build this kind of thing into cheap phone cameras.
5 Responses to Idea: Multifocus Photography
I had a related and (I think) slightly more complex idea.
If you think about how we perceive the world, nothing is really ever out of focus (from a perception and cognition standpoint). The idea that a photograph has static areas that are out of focus runs counter to the real-world analog perception we have when looking at a scene.
The human eye is just a jittery fella that can jump around, refocusing in what is, perceptually, an instant.
I want a camera that creates an image where everywhere in the picture is in focus. Yes, it might require a tripod (especially early on), but I think for some shots it would totally be worth it if we can create images where every distance and place in the frame is in focus. No need to pick from one of 50 shots for the “best” focus, but to truly have a multi-focus image.
Just a thought.
Photoshop CS4 actually has this capability…
If you’re shooting on a tripod, you can take several images at different focus points/DoFs and “stitch” them into a single image, giving “hyper DoF/focus.” It’s pretty cool, actually.
Of course, this won’t work with a moving image (no matter how quick, there will always be a difference between frames if the subject or photographer is moving, assuming the images are taken serially).
@David: You are correct, and yeah, that would be better, wouldn’t it. This is what those 1970s “hyperrealist” painters were doing — creating images that looked more real than photographs because of that level of focal detail. The Focus Stacking method described above seeks this result. But with a multifocus photo system, you could choose to reproduce the everything-in-focus human perception *or* the “old-fashioned camera look” (because once this tech really exists, the some-things-are-in-focus-but-some-things-are-not photo will look antiquated).
@CMH: Check out that last link. It takes all the images at once, so it would work with a moving image just fine — PLUS it gets the image from a bunch of slightly different angles, allowing you to adjust the photo in 3D a little bit, kind of like Deckard’s machine in Blade Runner. Amazing.
Well, a while back I had a cheap Lomo cam with four lenses that take slightly different angles of the same subject on the same photo.
So if that’s possible, it should be pretty doable to mount 4 lenses with different focus settings in a phone, at least from my layman’s point of view.
Currently, at least, I have the iPhone to satisfy those lightning-quick photo ops.