Abstract

Automatic prediction of salient regions in images is a well-developed topic in the field of computer vision. Yet, omnidirectional visual content for virtual reality brings new challenges to this topic, due to a different representation of visual information and the additional degrees of freedom available to viewers. Having a model of visual attention is important for continuing research in this direction. In this paper we develop such a model for head-direction trajectories. The method consists of three basic steps: First, the computed head angular speed is used to exclude the parts of a trajectory where motion is too fast to fixate the viewer's attention. Second, fixation locations of different subjects are fused together, optionally preceded by a re-sampling step to conform to an equal distribution of points on a sphere. Finally, Gaussian-based filtering is performed to produce continuous fixation maps. The developed model can be used to obtain ground-truth experimental data when eye tracking is not available.
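
As a rough illustration of the three steps, the sketch below (Python with numpy and scipy, which are assumptions rather than the paper's actual toolchain) thresholds head angular speed, fuses the retained samples from all subjects onto an equirectangular grid, and blurs the result with a planar Gaussian as a stand-in for filtering on the sphere; the optional re-sampling step is omitted. All function names, parameter values, and the grid resolution are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def angular_speed(directions, timestamps):
    """Angular speed (rad/s) between consecutive unit head-direction vectors."""
    dots = np.clip(np.sum(directions[:-1] * directions[1:], axis=1), -1.0, 1.0)
    angles = np.arccos(dots)          # angle between consecutive samples
    return angles / np.diff(timestamps)


def fixation_map(subject_trajectories, speed_thresh=0.35,
                 height=180, width=360, sigma_deg=3.0):
    """Fuse slow-motion head samples from all subjects into a continuous map.

    subject_trajectories: list of (directions, timestamps) pairs, where
    directions is an (N, 3) array of unit vectors and timestamps is (N,).
    """
    counts = np.zeros((height, width))
    for directions, timestamps in subject_trajectories:
        # Step 1: drop samples whose angular speed exceeds the threshold,
        # i.e. motion too fast to correspond to a fixation.
        speed = angular_speed(directions, timestamps)
        keep = np.concatenate([[False], speed < speed_thresh])
        fixations = directions[keep]

        # Step 2: fuse fixations across subjects by accumulating them
        # on an equirectangular (longitude x latitude) grid.
        lon = np.arctan2(fixations[:, 1], fixations[:, 0])       # [-pi, pi]
        lat = np.arcsin(np.clip(fixations[:, 2], -1.0, 1.0))     # [-pi/2, pi/2]
        col = np.clip(((lon + np.pi) / (2 * np.pi) * width).astype(int), 0, width - 1)
        row = np.clip(((np.pi / 2 - lat) / np.pi * height).astype(int), 0, height - 1)
        np.add.at(counts, (row, col), 1.0)

    # Step 3: Gaussian filtering to obtain a continuous fixation map.
    # "wrap" is exact only for the periodic longitude axis; a true
    # spherical kernel would be more faithful near the poles.
    sigma_px = sigma_deg * width / 360.0
    blurred = gaussian_filter(counts, sigma=sigma_px, mode="wrap")
    return blurred / blurred.max() if blurred.max() > 0 else blurred
```

A call would look like `fixation_map([(dirs_subject1, t_subject1), (dirs_subject2, t_subject2)])`, returning a normalized equirectangular map; the speed threshold and kernel width shown here are placeholders, not values reported in the paper.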
