Here’s an awesome idea for the camera industry. Like most of my seemingly awesome ideas, someone else has
probably already thought of it (UPDATE: someone has already thought of it, sort of… skip to the bottom of this post). But just in case it’s at all novel, I offer it up for public review:
Yesterday I saw a little kid yawning, and I thought it would have made a really cute photo if only I had a camera in my hand, ready to shoot. Of course, I had a camera in my hand: my iPhone. The moment, however, went by just too fast: I noticed it, but could not photograph it.
Being able to take a photograph spontaneously, with as much ease as pointing your finger or blinking your eyes, would change the world of photography even more profoundly than digital photography already has. It would capture the ephemeral, fleeting moments of beauty and inspiration that current photographic technology still cannot deliver.
Today’s cameras, however, still put too many obstacles between you and that goal. The camera has to be in your hand, the lens cap has to be off, and (if it’s digital) the power has to be on. And then you have to adjust the exposure and focus on your subject.
Most of these obstacles can be overcome: small, cheap, and fast cameras are already here. A camera mounted in a pair of glasses or on a fingertip is easy to imagine. And you can adjust exposure, to a limited extent, after the fact in Photoshop. It’s the focus part that seems the biggest barrier: Choosing the subject in the frame and then adjusting a mechanical lens array to focus on that object takes both time and human intelligence. Automating this would seem impossible.
But I think I’ve figured it out.
Multifocus: Fix it in post!
Instead of taking one photograph when you click the shutter, my camera would shoot 50 photos as fast as possible, each with a slightly different focus setting, focused on a different point in space. Cameras are pretty damn fast these days, and getting faster, so taking 50 good photos in a fraction of a second seems reasonable.
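How should those 50 focus settings be spaced? Spacing them evenly in distance would waste most of the frames on nearly identical far-field shots; spacing them evenly in diopters (the reciprocal of distance), as real focus-bracketing systems do, covers the depth of the scene much more evenly. A minimal sketch of that idea, with the near/far limits and step count as made-up example numbers:

```python
def focus_sweep(near_m, far_m, steps):
    """Return focus distances (in meters) spaced uniformly in diopters.

    Equal diopter steps give roughly equal depth-of-field coverage per
    frame, so the burst doesn't pile up shots at nearly the same focus.
    """
    near_d = 1.0 / near_m   # diopters at the near limit (large)
    far_d = 1.0 / far_m     # diopters at the far limit (small)
    step = (near_d - far_d) / (steps - 1)
    # Sweep from far focus to near focus in equal diopter increments.
    return [1.0 / (far_d + i * step) for i in range(steps)]

# Example: 50 frames covering everything from 10 m down to 0.3 m.
distances = focus_sweep(0.3, 10.0, 50)
```

Note how the resulting distances bunch up close to the camera, where depth of field is thinnest, and spread out toward the far limit.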
Some of the 50 photos will focus on nothing, and will be useless. But among the rest there would almost certainly be one image that is nicely focused on exactly what you wanted to shoot.
The idea is that we use brute force (that is, speed) to capture a variety of photos, then we pick the one we like best. Basically what photographers have been doing for years with motor drives, but ridiculously faster.
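The "pick the one we like best" step could even be automated for a quick default choice: a standard focus measure is the variance of the image's Laplacian, which spikes when edges are crisp. A minimal numpy sketch (the frames here are assumed to be 2-D grayscale arrays; the function names are mine, not any real camera API):

```python
import numpy as np

def sharpness(img):
    """Variance of a discrete Laplacian: high when edges are in focus."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def pick_sharpest(frames):
    """Return the index of the best-focused frame in the burst."""
    return int(np.argmax([sharpness(f) for f in frames]))
```

A defocused frame is smooth, so its Laplacian is nearly zero everywhere; the in-focus frame wins.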
The key to this concept is the post-production software. You could just view 50 photos, but I picture it being more interesting than that. The interface for choosing the photo could feel like taking a photo, where you look upon a scene and move a slider to change your focus on the scene. I imagine an interface like the one Harrison Ford used in Blade Runner to investigate the space in a crime-scene photo, but instead of exploring a 3D space, it permits the viewer to explore the image-space by moving the point of focus.
Many years ago I made a Flash experiment showing how a focus effect might work. You can try it here. If you play with the demo, you can imagine the UI for my multifocus selector tool, choosing the best-focused image from the 50 images originally captured by the camera.
If the system were fast enough (say, fast enough to take 200 photos in a second), the camera could also take each photo at several zoom levels and exposure settings. So you point, snap, and then do all of the zoom, focus, and exposure work later, almost as if you were freezing and capturing time itself.
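To see whether the frame budget holds up, you can enumerate every bracket combination and check the burst duration. A toy sketch, using the made-up numbers above (50 focus steps, two exposures, two zoom levels, 200 frames per second):

```python
from itertools import product

def bracket_schedule(focus_steps, exposures, zooms, fps):
    """Enumerate every (focus, exposure, zoom) combination and report
    how long the whole burst would take at a given frame rate."""
    shots = list(product(focus_steps, exposures, zooms))
    return shots, len(shots) / fps

# 50 focus steps x 2 exposure offsets x 2 zoom levels = 200 frames,
# which at 200 fps is exactly a one-second burst.
shots, seconds = bracket_schedule(range(50), [-1, +1], [1.0, 2.0], 200)
```

So full three-way bracketing already strains the one-second budget; in practice you'd probably bracket focus densely and the other settings sparsely.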
This idea isn’t so far-fetched. It’s influenced by a bunch of other ideas along similar lines:
- Bullet-time camera: Popularized in The Matrix, the “bullet-time” effect is achieved by a brute-force technique of taking dozens of photos at the same time from many different angles. Cool example here.
- Page scanner concept: The idea behind this concept is that instead of slowly photographing the pages of a book one page at a time from a fully-flat perspective, a machine could scan the book’s pages a hundred times faster by simply photographing them at an angle as they are quickly flipping by, adjusting the image later to appear flat.
- iPhone’s “always on” camera: Lonelysandwich’s Adam Lisagor recently speculated and tested that the iPhone is able to take photos really quickly because it doesn’t wait for you to click the shutter to record the image in memory. It just takes photos constantly and then keeps the one it already took at the time you click the shutter.
- Focus Stacking: In microscopic photography, where the depth of field is minuscule and getting an entire tiny object in focus is difficult, a technique called focus stacking lets the photographer take many photos of the same object at different focus distances, then combine them all into a single composite image where everything is in focus. Check out this cool focus stacking animation.
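The last item on that list can be sketched in a few lines of numpy: for every pixel, take the value from whichever frame has the strongest local Laplacian response (i.e. is most in focus) at that spot. This is a bare-bones illustration of the focus-stacking idea, not a production implementation (real stackers also align frames and blend across seams):

```python
import numpy as np

def focus_stack(frames):
    """Composite a stack of differently-focused grayscale frames by
    picking, per pixel, the frame with the strongest edge response."""
    stack = np.stack(frames)                               # (n, h, w)
    pad = np.pad(stack, ((0, 0), (1, 1), (1, 1)), mode="edge")
    # Absolute discrete Laplacian of every frame, computed at once.
    lap = np.abs(-4.0 * pad[:, 1:-1, 1:-1]
                 + pad[:, :-2, 1:-1] + pad[:, 2:, 1:-1]
                 + pad[:, 1:-1, :-2] + pad[:, 1:-1, 2:])
    best = np.argmax(lap, axis=0)                          # frame index per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Feed it two frames that are each sharp in a different half of the scene, and the composite comes back sharp everywhere.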
Most of the conceptual and technological pieces are there. Another obstacle would seem to be the lens itself: how do you move a physical lens array that quickly? But given the size of camera modules these days, it seems we’d only need to move the lens a few millimeters to hit all 50 focus positions.
Now, someone please tell me this already exists.
UPDATE: Okay, it already exists. It’s called the plenoptic camera, or light-field camera, and it uses an array of tiny lenses to take multiple photos at different focus points. Different concept (mine relies on a single moving lens), same result. Either way, I hope someone figures out a way to build this kind of thing into cheap phone cameras.