Abstract

This paper presents a video-based camera tracker that combines marker-based and feature-point-based cues in a particle filter framework. The framework exploits their complementary performance: marker-based trackers robustly recover camera position and orientation while a reference (marker) is visible, but fail once it becomes unavailable, whereas feature-point tracking can still provide estimates from a limited number of feature points, although these tend to drift and usually fail to recover when the reference reappears. We therefore propose a combination in which the filter estimate is updated from the individual measurements of each cue: the marker-based cue is selected when the marker is available, and the feature-point-based cue is selected otherwise. Feature points are dynamically found in the scene and used for further tracking. Evaluations on real cases show that the fusion of the two approaches outperforms either tracker alone. A critical aspect of the feature-point-based cue is to robustly recognise feature points despite rotations of the camera; a novelty of the proposed framework is the use of a rotation-discriminative method to match feature points.
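To make the cue-selection idea concrete, the following is a minimal sketch of one particle filter update step that switches between the two measurement cues. It is not the authors' implementation: the motion model, the resampling scheme, and the callables detect_marker, marker_likelihood, and feature_likelihood are all hypothetical placeholders standing in for the marker detector and the two cue likelihoods described in the abstract.

```python
import numpy as np

def particle_filter_update(particles, weights, frame,
                           detect_marker, marker_likelihood,
                           feature_likelihood):
    """One update step of a cue-switching particle filter (sketch).

    Hypothetical placeholder arguments:
    - detect_marker(frame) -> marker pose observation, or None if no marker
    - marker_likelihood(particle, obs) -> likelihood of obs given a pose particle
    - feature_likelihood(particle, frame) -> likelihood from matched feature points
    """
    # Propagate particles with a simple random-walk motion model (assumption;
    # the paper does not specify the dynamics used here).
    particles = particles + np.random.normal(scale=0.01, size=particles.shape)

    marker_obs = detect_marker(frame)
    if marker_obs is not None:
        # Marker visible: weight particles by the marker-based cue.
        weights = np.array([marker_likelihood(p, marker_obs) for p in particles])
    else:
        # Marker lost: fall back to the feature-point-based cue.
        weights = np.array([feature_likelihood(p, frame) for p in particles])

    weights = weights / weights.sum()  # normalise to a distribution

    # Systematic resampling to concentrate particles on likely camera poses.
    n = len(weights)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```

The switch on marker_obs mirrors the fusion rule stated above: the filter always runs, and only the measurement model changes, which is what allows tracking to continue when the marker disappears and to re-anchor when it reappears.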
