Analysis of Multiview Omnidirectional Images in a Spherical Framework

With the increasing demand for immersive applications such as Google Street View or 3D movies, the efficient analysis of visual data from cameras has gained importance. This visual data allows the extraction of crucial scene information such as similarity for recognition, 3D structure, and the textures and patterns of objects. Multi-camera systems provide detailed visual information about a scene through images from different positions and viewing angles. The conventional perspective cameras commonly used in these systems, however, have a limited field of view. They therefore require either the deployment of many cameras or the capture of many images from different viewpoints to extract sufficient detail about a scene. This increases both the amount of data to be processed and the maintenance costs of such systems. Omnidirectional vision systems overcome this problem thanks to their 360-degree field of view and have found wide application in areas such as robotics and surveillance. These systems often use special optical elements such as fisheye lenses or mirror-lens combinations. The resulting images, however, inherit the specific geometry of these units, so analyzing them with methods designed for perspective cameras degrades performance. In this thesis, we focus on the analysis of multi-view omnidirectional images for efficient scene information extraction. We propose a novel spherical framework for omnidirectional image processing that exploits the property that most omnidirectional images can be uniquely mapped onto the sphere. We propose solutions to three common multiview image processing problems, namely feature detection, dense depth estimation, and super-resolution for omnidirectional images. We first address the feature extraction problem in omnidirectional images.
We develop a scale-invariant feature detection method that carefully handles the geometry of the images by performing the scale-space analysis directly on their native manifolds, such as the parabola or the sphere. We then propose a new descriptor and a matching criterion that take the geometry into account and also eliminate the need for orientation computation. We further demonstrate that the proposed method can match features across images captured by different types of sensors, such as perspective, omnidirectional, or spherical cameras. Next, we propose a dense depth estimation method to extract 3D scene information from multiple omnidirectional images. We formulate an energy function for the dense depth estimation problem and minimize it with a graph-cut method adapted to the geometry of these images. We also propose a parallel graph-cut method that yields a significant speed improvement with only a small loss in accuracy. We show that the proposed method can be applied to multi-camera depth estimation and depth-based arbitrary view synthesis. Finally, we consider multi-view omnidirectional images related by a pure rotation. We address the view synthesis problem for these images in the framework of super-resolution. Taking into account inaccuracies in the rotation parameters, we solve an optimization problem that jointly estimates the rotation errors and reconstructs a high-resolution omnidirectional image. We then extend the minimization problem with a regularization term for improved reconstruction quality with a reduced number of images. Results with both synthetic and real omnidirectional images suggest that the proposed method is a viable solution for super-resolution with omnidirectional images. Overall, this dissertation addresses three important issues of multiview omnidirectional image analysis and processing in a novel spherical framework.
Our feature detection method can be used for the calibration of omnidirectional cameras as well as for feature matching in mobile and hybrid camera networks. Furthermore, our dense depth estimation method can improve the quality of 3D scene reconstruction and provide efficient solutions for view synthesis and multiview omnidirectional image coding. Finally, our super-resolution algorithm for omnidirectional images can promote the development of efficient acquisition systems for high-resolution omnidirectional images.
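The spherical framework rests on the property stated above: most omnidirectional images can be uniquely mapped onto the sphere. As a minimal illustration only (the function name and the equirectangular convention are assumptions for this sketch, not the thesis's actual implementation), a pixel of an equirectangular panorama can be mapped to a point on the unit sphere as follows:

```python
import math

def equirect_to_sphere(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a point on the unit sphere.

    Illustrative sketch only: names and conventions are assumptions.
    Rows span the polar angle theta in [0, pi]; columns span the
    azimuth phi in [0, 2*pi). Pixel centers are sampled at +0.5.
    """
    theta = math.pi * (v + 0.5) / height        # polar angle
    phi = 2.0 * math.pi * (u + 0.5) / width     # azimuth
    # Standard spherical-to-Cartesian conversion; the result has unit norm.
    x = math.sin(theta) * math.cos(phi)
    y = math.sin(theta) * math.sin(phi)
    z = math.cos(theta)
    return x, y, z
```

Processing then operates on these spherical coordinates rather than on the distorted planar pixel grid, which is the motivation behind the framework.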

Frossard, Pascal
Lausanne, EPFL
Other identifiers:
urn: urn:nbn:ch:bel-epfl-thesis4836-8

Note: Access to this file is restricted to EPFL only.

 Record created 2010-08-19, last modified 2018-03-17

