This paper presents a video-based camera tracker that combines marker-based and feature-point cues in a particle filter framework, exploiting their complementary strengths. Marker-based trackers robustly recover camera position and orientation while a reference (marker) is visible, but fail as soon as the reference is lost. Feature-point tracking, on the other hand, can still provide estimates from a limited number of feature points, but these tend to drift and usually fail to recover when the reference reappears. We therefore propose a combination in which the filter estimate is updated from the measurements of each individual cue: the marker-based cue is selected whenever the marker is visible, and the feature-point cue is selected otherwise. The feature points tracked are the corners of the marker. Evaluations on real sequences show that the fusion of the two cues outperforms either tracker alone. Since filtering techniques often suffer from the difficulty of modeling motion precisely, a second, related contribution is an adaptation method for the particle filter that achieves tolerance to fast motion manoeuvres.
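The cue-switching idea above can be sketched as a minimal particle filter. This is an illustrative toy, not the paper's implementation: it uses a 1-D state instead of a full 6-DoF camera pose, a simple Gaussian diffusion motion model, and Gaussian measurement likelihoods; the function names (`propagate`, `weight`, `resample`, `track_step`) and the noise values are all assumptions. The marker cue is modeled as a precise measurement used when available, and the feature-point cue as a noisier fallback.

```python
import random
import math

def propagate(particles, noise=0.05):
    # Diffuse each particle under a constant-position motion model
    # (a simplification; the paper's actual motion model is not given here).
    return [p + random.gauss(0.0, noise) for p in particles]

def weight(particles, measurement, sigma):
    # Gaussian likelihood of the measurement for each particle.
    return [math.exp(-0.5 * ((p - measurement) / sigma) ** 2)
            for p in particles]

def resample(particles, weights):
    # Multinomial resampling proportional to the weights.
    total = sum(weights)
    if total == 0.0:
        return particles  # degenerate case: keep the current set
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))

def track_step(particles, marker_meas, feature_meas):
    # Cue selection: update from the marker-based measurement when the
    # marker is visible, fall back to the feature-point cue otherwise.
    particles = propagate(particles)
    if marker_meas is not None:
        w = weight(particles, marker_meas, sigma=0.05)   # marker: precise
    else:
        w = weight(particles, feature_meas, sigma=0.2)   # features: noisier
    particles = resample(particles, w)
    estimate = sum(particles) / len(particles)
    return particles, estimate
```

In use, the tracker would call `track_step` once per frame, passing `marker_meas=None` on frames where marker detection fails, so the filter state carries over seamlessly between the two cues.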