
Abstract

We address the question of how to characterize the outliers that may appear when matching two views of the same scene. The match is performed by comparing the two views at the pixel level, with the aim of better registering the images. When using digital photographs as input, we observe that an outlier is often a region that has been occluded, an object that suddenly appears in one of the images, or a region that undergoes an unexpected motion. By assuming that the error in pixel intensity generated by an outlier is similar to the error obtained by comparing two randomly picked regions of the scene, we build a model of the outliers based on the content of the two views. We illustrate the model on a pose estimation problem: the goal is to compute the camera motion between the two views. The matching is expressed as a mixture of inliers and outliers, which defines a function to minimize in order to improve the pose estimate. Our model has two benefits: first, it delivers, for each pixel, a probability of belonging to the outliers; second, our tests show that the method is substantially more robust than the traditional robust estimators (M-estimators) used in image stitching applications, at only a slightly higher computational cost.
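A minimal Python sketch of the mixture idea described in the abstract, assuming two roughly registered grayscale views given as NumPy arrays. It is not the authors' implementation: the outlier distribution is approximated here by differencing randomly paired pixels (rather than regions), and the function names, the inlier noise level sigma, and the inlier prior are illustrative assumptions.

```python
import numpy as np

def outlier_histogram(img_a, img_b, n_samples=100_000, n_bins=64, rng=None):
    """Estimate p(difference | outlier) by differencing randomly paired pixels
    of the two views, as a proxy for comparing two randomly picked regions."""
    rng = np.random.default_rng() if rng is None else rng
    flat_a, flat_b = img_a.ravel(), img_b.ravel()
    ia = rng.integers(0, flat_a.size, n_samples)
    ib = rng.integers(0, flat_b.size, n_samples)
    diffs = flat_a[ia].astype(float) - flat_b[ib].astype(float)
    hist, edges = np.histogram(diffs, bins=n_bins, range=(-255, 255), density=True)
    return hist, edges

def mixture_cost(diff, hist, edges, sigma=5.0, prior_inlier=0.9):
    """Negative log-likelihood of per-pixel differences under an
    inlier (Gaussian) / outlier (empirical histogram) mixture, together with
    the posterior probability that each pixel belongs to the outliers."""
    diff = diff.astype(float)
    p_in = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    idx = np.clip(np.digitize(diff, edges) - 1, 0, hist.size - 1)
    p_out = hist[idx] + 1e-12          # avoid log(0) in empty bins
    mix = prior_inlier * p_in + (1.0 - prior_inlier) * p_out
    post_outlier = (1.0 - prior_inlier) * p_out / mix
    return -np.log(mix).sum(), post_outlier
```

In a pose estimation loop, one would warp one view onto the other with the current camera motion estimate, pass the per-pixel differences to mixture_cost, minimize the returned cost over the pose parameters, and read off post_outlier as the per-pixel outlier probability mentioned in the abstract.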
