The light field is a 4D representation of radiance as a function of position and direction in space. In computer graphics, light fields have been used to fly through scenes without the use of geometric models. In this talk, I explore three ways to capture and use light fields that fall outside this paradigm. Specifically, I describe:
- A new photographic technique called dual photography, which exploits Helmholtz reciprocity to interchange the lights and cameras in a scene. In its simplest form, the technique allows us to take photographs using a projector and a photocell. Replacing the photocell with a camera or an array of cameras produces a 4D or 6D dataset, with applications to relighting and the measurement of appearance.
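The interchange of lights and cameras can be phrased in linear algebra: if a light transport matrix maps projector pixels to camera pixels, Helmholtz reciprocity says the dual photograph is given by the transpose of that matrix. The sketch below illustrates this with a tiny random transport matrix; the dimensions and variable names are hypothetical, not from the talk.

```python
import numpy as np

# Hypothetical toy dimensions: a 4-pixel projector and a 3-pixel camera.
n_proj, n_cam = 4, 3

# Light transport matrix T maps a projector pattern to a camera image:
#   camera_image = T @ projector_pattern
rng = np.random.default_rng(0)
T = rng.random((n_cam, n_proj))

# A "primal" photograph: illuminate with some projector pattern.
projector_pattern = np.array([1.0, 0.0, 0.5, 0.0])
camera_image = T @ projector_pattern

# Helmholtz reciprocity: the dual photograph -- the scene as seen from
# the projector's viewpoint, lit from the camera's position -- is
# obtained by applying the transpose of T to the "illumination" at the
# camera.
camera_illumination = np.array([0.2, 1.0, 0.0])
dual_photograph = T.T @ camera_illumination
```

In the simplest setup described above, the camera degenerates to a single photocell, so T has one row; scanning the projector pixels one at a time measures that row, and its transpose is the dual photograph taken "by" the projector.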
- A compact handheld camera capable of capturing a light field in a single exposure. The main idea is to insert a microlens array between the sensor and main lens. By capturing directional as well as spatial information about the light entering the camera, we can refocus a photograph *after* it is taken, and we can move the viewpoint slightly.
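Refocusing after the fact works by shifting each sub-aperture view in proportion to its position in the aperture and averaging, a shift-and-add over the 4D data. Below is a minimal sketch of that idea; the `lf[u, v, y, x]` data layout, integer-pixel shifts, and the `alpha` parameter are simplifying assumptions for illustration.

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing sketch (hypothetical layout).

    lf[u, v, y, x] holds the sub-aperture image seen from angular
    position (u, v). Shifting each view by an amount proportional to
    its offset from the aperture center, then averaging, synthesizes
    a photograph focused at a different depth. alpha = 0 reproduces
    the conventional photograph (a plain average of the views).
    """
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * alpha))
            dx = int(round((v - V // 2) * alpha))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Sweeping `alpha` moves the synthetic focal plane through the scene; a small change of viewpoint corresponds instead to selecting or re-weighting the (u, v) views.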
- New applications for the Stanford multi-camera array. In past work, we have configured our array to generate video at 3000 frames per second and to simulate a camera with an 8-foot aperture, allowing us to see through foliage and crowds. Recently, we have configured the array to simulate a 30-megapixel tiled video camera with independent exposure metering in each tile. This lets us record dynamic environments with unprecedented resolution and dynamic range.
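With independent metering, each tile captures the same scene radiance scaled by its own exposure time, so tiles can be normalized by exposure before being assembled into one high-dynamic-range mosaic. The fragment below is a minimal sketch of that assembly step under idealized assumptions (linear sensor response, no saturation, perfectly abutting tiles); the function and variable names are hypothetical.

```python
import numpy as np

def assemble_hdr_mosaic(tile_grid, exposure_grid):
    """Assemble independently metered tiles into one radiance map.

    tile_grid: 2D nested list of equal-sized image tiles (row-major).
    exposure_grid: matching nested list of exposure times in seconds.
    Dividing each tile by its exposure time converts pixel values to
    relative radiance (assuming a linear, unsaturated sensor), so
    adjacent tiles metered differently still line up photometrically.
    """
    rows = []
    for tile_row, exp_row in zip(tile_grid, exposure_grid):
        rows.append(np.hstack([t / e for t, e in zip(tile_row, exp_row)]))
    return np.vstack(rows)
```

A brightly lit tile metered with a short exposure and a shadowed tile metered with a long one thus land on a common radiometric scale, which is what extends the dynamic range of the tiled mosaic beyond that of any single camera.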
About the speaker: