000150477 001__ 150477
000150477 005__ 20190509132329.0
000150477 0247_ $$2doi$$a10.5075/epfl-thesis-4836
000150477 02470 $$2urn$$aurn:nbn:ch:bel-epfl-thesis4836-8
000150477 02471 $$2nebis$$a6131207
000150477 037__ $$aTHESIS
000150477 041__ $$aeng
000150477 088__ $$a4836
000150477 245__ $$aAnalysis of Multiview Omnidirectional Images in a Spherical Framework
000150477 269__ $$a2010
000150477 260__ $$bEPFL$$c2010$$aLausanne
000150477 300__ $$a142
000150477 336__ $$aTheses
000150477 520__ $$aWith the increasing demand for more immersive applications such as Google Street View or 3D movies, the efficient analysis of visual data from cameras has gained importance. This visual information permits the extraction of crucial scene information such as similarity for recognition, 3D scene structure, and the textures and patterns of objects. Multi-camera systems provide detailed visual information about a scene with images from different positions and viewing angles. The conventional perspective cameras that are commonly used in these systems, however, have a limited field of view. Therefore, they require either the deployment of many cameras or the capture of many images from different points to extract sufficient details about a scene. This increases the amount of data to be processed and the maintenance costs for such systems. Omnidirectional vision systems overcome this problem due to their 360-degree field of view and have found wide application in robotics and surveillance. These systems often use special refractive elements such as fisheye lenses or mirror-lens systems. The resulting images, however, inherit the specific geometry of these units. Therefore, the analysis of these images with methods designed for perspective cameras results in degraded performance in omnidirectional vision systems. In this thesis, we focus on the analysis of multi-view omnidirectional images for efficient scene information extraction. We propose a novel spherical framework for omnidirectional image processing by exploiting the property that most omnidirectional images can be uniquely mapped onto the sphere. We propose solutions for three common multiview image processing problems, namely feature detection, dense depth estimation and super-resolution for omnidirectional images. We first address the feature extraction problem in omnidirectional images. 
We develop a scale-invariant feature detection method which carefully handles the geometry of the images by performing the scale-space analysis directly on their native manifolds, such as the parabola or the sphere. We then propose a new descriptor and a matching criterion that take the geometry into account and eliminate the need for orientation computation. We also demonstrate that the proposed method can be used to match features in images captured by different types of sensors, such as perspective, omnidirectional or spherical cameras. We then propose a dense depth estimation method to extract the 3D scene information from multiple omnidirectional images. We propose a graph-cut method adapted to the geometry of these images to minimize an energy function formulated for the dense depth estimation problem. We also propose a parallel graph-cut method that gives a significant speed improvement without a significant penalty in accuracy. We show that the proposed method can be applied to multi-camera depth estimation and depth-based arbitrary view synthesis. Finally, we consider multi-view omnidirectional images related by pure rotation. We address the view synthesis problem for these images in the framework of super-resolution. Taking into account inaccuracies in the rotation parameters, we solve an optimization problem that jointly estimates the rotation errors and reconstructs a high-resolution omnidirectional image. We then extend the minimization problem with a regularization term for improved reconstruction quality with a reduced number of images. Results with both synthetic and real omnidirectional images suggest that the proposed method is a viable solution for super-resolution with omnidirectional images. Overall, this dissertation addresses three important issues of multiview omnidirectional image analysis and processing in a novel spherical framework. 
Our feature detection method can be used for the calibration of omnidirectional images as well as for feature matching in mobile and hybrid camera networks. Furthermore, our dense depth estimation method can impact the quality of 3D scene reconstruction and provide efficient solutions for view synthesis and multiview omnidirectional image coding. Finally, our super-resolution algorithm for omnidirectional images can promote the development of efficient acquisition systems for high-resolution omnidirectional images.
000150477 6531_ $$aomnidirectional imaging
000150477 6531_ $$aspherical framework
000150477 6531_ $$amultiview image processing
000150477 6531_ $$ascale-invariant features
000150477 6531_ $$adense depth estimation
000150477 6531_ $$asuper-resolution
000150477 6531_ $$aimagerie omnidirectionnelle
000150477 6531_ $$acadre sphérique
000150477 6531_ $$atraitement d'images multi-vue
000150477 6531_ $$acaractéristiques invariantes par changement d'échelle
000150477 6531_ $$aestimation dense de la profondeur
000150477 6531_ $$asuper-résolution
000150477 700__ $$aArican, Zafer
000150477 720_2 $$aFrossard, Pascal$$edir.$$g101475$$0241061
000150477 8564_ $$uhttps://infoscience.epfl.ch/record/150477/files/EPFL_TH4836.pdf$$zTexte intégral / Full text$$s8791660$$yTexte intégral / Full text
000150477 909C0 $$xU10851$$0252393$$pLTS4
000150477 909CO $$pthesis$$pthesis-bn2018$$pDOI$$ooai:infoscience.tind.io:150477$$qDOI2$$qGLOBAL_SET$$pSTI
000150477 918__ $$dEDIC2005-2015$$cIEL$$aIC
000150477 919__ $$aLTS4
000150477 920__ $$b2010
000150477 970__ $$a4836/THESES
000150477 973__ $$sPUBLISHED$$aEPFL
000150477 980__ $$aTHESIS