Abstract

What is the actual information contained in the light rays filling the 3-D world? Leonardo da Vinci saw the world as an infinite number of radiant pyramids caused by the objects located in it. Nowadays, a radiant pyramid is usually described as a set of light rays with various directions passing through a given point. By recording the light rays at every point in space, all the information in a scene can be fully acquired. This work focuses on the analysis of the sampling models of a light field camera, a device dedicated to recording the amount of light traveling through any point along any direction in the 3-D world. In contrast to conventional photography, which only records a 2-D projection of the scene, such a camera captures both the geometry and the material properties of a scene by recording 2-D angular data for each point in a 2-D spatial domain. This 4-D data is referred to as the light field. The main goal of this thesis is to exploit the 4-D data from one or multiple light field cameras, based on the proposed sampling models, to recover the scene.

We first propose a novel algorithm to recover depth information from the light field. Based on the analysis of the sampling model, we map the high-dimensional light field data to a low-dimensional texture signal in the continuous domain, modulated by the geometric structure of the scene. We formulate depth estimation as a signal recovery problem with samples at unknown locations, and propose a practical framework that alternately recovers the texture signal and the depth map. We thus obtain not only a highly accurate depth map but also a compact representation of the light field in the continuous domain. The proposed algorithm performs especially well for scenes with fine geometric structure while also achieving state-of-the-art performance on public datasets.

Secondly, we consider multiple light fields to increase the amount of information captured from the 3-D world. We derive a motion model of the light field camera from the proposed sampling model. Given this motion model, we can extend the field of view to create light field panoramas and perform light field super-resolution, which helps overcome the limited sensor resolution of current light field cameras.

Finally, we propose a novel image-based rendering framework to represent light rays in 3-D space: the circular light field. A circular light field is acquired by taking photos with a circular camera array facing outwards from the center of the rig. We propose a practical framework to capture, register, and stitch multiple circular light fields. The information contained in multiple circular light fields allows the creation of a virtual camera view at any chosen location with a 360-degree field of view. This new representation of light rays can be used to generate high-quality content for virtual and augmented reality.
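To make the 4-D light field parameterization concrete, the following minimal Python sketch (not taken from the thesis; the synthetic array, the `refocus` helper, and the shift-and-sum refocusing step are illustrative assumptions) indexes a light field by two angular and two spatial coordinates, shows how a conventional 2-D photograph corresponds to a single angular slice, and averages sheared angular views to refocus the scene at a chosen depth.

```python
# Hypothetical sketch of a 4-D light field L(u, v, x, y):
# (u, v) index the 2-D angular dimension, (x, y) the 2-D spatial dimension.
# The array below is synthetic; it stands in for data from a light field camera.
import numpy as np

U, V, X, Y = 5, 5, 64, 64          # angular and spatial sampling rates (illustrative)
lf = np.random.rand(U, V, X, Y)    # L[u, v, x, y]: one sample per recorded ray

# A single sub-aperture image: fix the angular coordinates (u0, v0) and keep the
# full spatial slice. This is the 2-D projection a conventional camera records.
u0, v0 = U // 2, V // 2
sub_aperture = lf[u0, v0]          # shape (X, Y)

def refocus(lf, alpha):
    """Shift-and-sum refocusing: shear each angular view in proportion to its
    angular offset and average, which focuses at the depth tied to slope `alpha`."""
    U, V, X, Y = lf.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            dx = int(round(alpha * (u - uc)))
            dy = int(round(alpha * (v - vc)))
            out += np.roll(lf[u, v], shift=(dx, dy), axis=(0, 1))
    return out / (U * V)

refocused = refocus(lf, alpha=1.0)
```

Sweeping `alpha` over a range of slopes is one simple way to see how the angular samples encode scene depth; the thesis's own depth estimation instead works with a continuous-domain texture signal and samples at unknown locations, which this toy example does not attempt to reproduce.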
