Abstract

Stereoscopic video content is usually created using two or more cameras recording the same scene. Traditionally, these cameras share the exact same intrinsic camera parameters. In this project, the exposure times of the cameras differ, allowing different parts of the scene's dynamic range to be recorded. Image processing techniques are then used to enhance the dynamic range of the captured data. A pipeline for recording, processing, and displaying high dynamic range (HDR) stereoscopic content, acquired using inexpensive low dynamic range (LDR) cameras, is proposed. Two different approaches to obtaining stereoscopic HDR content are presented and compared. In the temporal approach, different parts of the luminance range of the scene are recorded by temporally alternating the exposure time of both cameras; information from adjacent frames captured by the same camera is then used to increase the dynamic range. In the spatial approach, each camera is assigned a distinct, fixed exposure time, and the dynamic range is increased by combining data from the two cameras. It is found that the intrinsic problems of the spatial approach are much more difficult to deal with than those of the temporal approach. In particular, stereo matching, the critical component for combining data in the spatial approach, is harder than in the traditional setting because the two cameras have identical viewpoint-dependent content but different exposure times. The results are evaluated for both static scenes and scenes with object movement, using an objective quality metric of the visible differences applied to each view of the stereoscopic pair independently, and visual evaluation on a stereoscopic display to assess stereoscopic quality.
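The exposure-merging idea underlying both approaches can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: the gamma-2.2 camera response and the triangle weighting function are assumptions, and real systems estimate the camera response curve rather than assuming it. Each LDR frame is linearized, divided by its exposure time to estimate radiance, and the estimates are averaged with weights that discount under- and over-exposed pixels.

```python
import numpy as np

def ldr_to_radiance(img, exposure_time, gamma=2.2):
    # Invert an assumed gamma camera response, then divide by the
    # exposure time to get a per-pixel radiance estimate.
    linear = np.clip(img, 0.0, 1.0) ** gamma
    return linear / exposure_time

def triangle_weight(img):
    # Pixels near 0 (underexposed) or 1 (saturated) carry little
    # information, so weight them down; mid-range pixels get weight 1.
    return 1.0 - np.abs(2.0 * img - 1.0)

def merge_hdr(images, exposure_times):
    # Weighted average of the radiance estimates from all exposures.
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = triangle_weight(img)
        num += w * ldr_to_radiance(img, t)
        den += w
    return num / np.maximum(den, 1e-6)
```

In the temporal approach the input images would be consecutive frames from one camera with alternating exposure times; in the spatial approach they would be the two differently exposed views, which first have to be registered via stereo matching before they can be merged per pixel.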
