Abstract

This paper presents a new method for learning overcomplete dictionaries adapted to efficient joint representation of stereo images. We first formulate a sparse stereo image model where the multi-view correlation is described by local geometric transforms of dictionary atoms in two stereo views. A maximum-likelihood method for learning stereo dictionaries is then proposed, which includes a multi-view geometry constraint in the probabilistic modeling in order to obtain dictionaries optimized for the joint representation of stereo images. The dictionaries are learned by optimizing the maximum-likelihood objective function using the expectation-maximization algorithm. We illustrate the learning algorithm in the case of omnidirectional images, where we learn scales of atoms in a parametric dictionary. The resulting dictionaries provide both better performance in the joint representation of stereo omnidirectional images and improved multi-view feature matching. We finally discuss and demonstrate the benefits of dictionary learning for distributed scene representation and camera pose estimation.
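To make the idea concrete, the following is a minimal toy sketch, not the paper's actual algorithm: atoms are 1-D Gaussians parameterized by a learnable scale, the two "views" are related by a known shift (a simplified stand-in for the local geometric transforms and the omnidirectional parametric dictionary described above), and learning alternates a sparse-coding step with a scale update that reduces the joint reconstruction error (equivalently, increases the likelihood under a Gaussian noise model). All names (make_atom, joint_residual, learn_scale) and the toy data are assumptions introduced for illustration only.

```python
import numpy as np

def make_atom(scale, center, length):
    """Unit-norm Gaussian atom of a given scale centered at `center`."""
    t = np.arange(length)
    g = np.exp(-0.5 * ((t - center) / scale) ** 2)
    return g / np.linalg.norm(g)

def joint_residual(pair, scale, center, shift):
    """Code each view with one atom; the right-view atom is a shifted copy of
    the left-view atom, playing the role of the multi-view geometry constraint."""
    left, right = pair
    a_l = make_atom(scale, center, len(left))
    a_r = make_atom(scale, center + shift, len(right))
    c_l, c_r = a_l @ left, a_r @ right            # best coefficients (unit-norm atoms)
    r_l, r_r = left - c_l * a_l, right - c_r * a_r
    return np.sum(r_l ** 2) + np.sum(r_r ** 2)    # joint reconstruction error

def learn_scale(pairs, center, shift, scales=np.linspace(1.0, 10.0, 50)):
    """Crude stand-in for the maximum-likelihood update: pick the scale that
    minimizes the total joint residual over all training stereo pairs."""
    errors = [sum(joint_residual(p, s, center, shift) for p in pairs) for s in scales]
    return scales[int(np.argmin(errors))]

# Toy stereo pairs: the same Gaussian feature appears shifted in the right view.
rng = np.random.default_rng(0)
true_scale, center, shift, n = 4.0, 32, 5, 64
pairs = []
for _ in range(20):
    amp = rng.uniform(0.5, 2.0)
    left = amp * make_atom(true_scale, center, n) + 0.01 * rng.standard_normal(n)
    right = amp * make_atom(true_scale, center + shift, n) + 0.01 * rng.standard_normal(n)
    pairs.append((left, right))

print("learned scale:", learn_scale(pairs, center, shift))  # close to true_scale
```

In this simplified analogue, the sparse-coding step inside joint_residual corresponds to the expectation step and the grid search over scales to the maximization step; the paper instead optimizes the full maximum-likelihood objective with the expectation-maximization algorithm over a parametric dictionary for omnidirectional images.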
