
Abstract

Achieving perfect scale invariance is usually not possible with classical color-image features, mostly because a traditional image is a two-dimensional projection of the real world. In contrast, light field imaging captures rays from multiple viewpoints and thus encodes the depth and occlusion information that is crucial for true scale invariance. By studying and exploiting the information content of the light field signal and its highly regular structure, we derive a provably efficient method for extracting scale-invariant feature vector representations, enabling more efficient light field matching and retrieval across views. Our approach is based on a novel integral transform that maps pixel intensities to a new space in which the effect of scaling can be canceled out by a simple integration. Experiments conducted on several real and synthetic light field images show that the proposed approach is promising in terms of both accuracy and time complexity. A natural future improvement is to incorporate invariance to other transformations, such as rotation and translation, which would make the algorithm considerably more widely applicable.
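
To illustrate the general idea of canceling a scale change through an integral transform, the sketch below uses a standard Fourier–Mellin-style construction on a one-dimensional intensity profile; this is an illustrative analogy only, not the specific transform proposed here. Resampling the signal on a logarithmic axis turns spatial scaling into a shift, and the Fourier magnitude then discards that shift, yielding an approximately scale-invariant descriptor.

    import numpy as np

    def scale_invariant_descriptor(signal, num_samples=256):
        """Approximately scale-invariant descriptor of a 1-D intensity profile
        (Fourier-Mellin-style sketch, not the paper's transform)."""
        signal = np.asarray(signal, dtype=float)
        x = np.arange(1, len(signal) + 1, dtype=float)
        # Resample on a logarithmic axis: scaling x -> a*x becomes a shift in log(x).
        log_axis = np.linspace(np.log(x[0]), np.log(x[-1]), num_samples)
        resampled = np.interp(np.exp(log_axis), x, signal)
        # The FFT magnitude is invariant to (circular) shifts, hence roughly
        # invariant to the original scaling, up to boundary effects.
        return np.abs(np.fft.fft(resampled))

In this sketch, comparing the descriptors of a profile and of a rescaled copy of it (e.g., with a correlation or Euclidean distance) gives nearly the same result, which is the kind of scale cancellation the integral transform above is meant to achieve in the light field setting.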
