A simple method to obtain visual attention data in head mounted virtual reality

Automatic prediction of salient regions in images is a well-developed topic in the field of computer vision. Yet, omnidirectional visual content in virtual reality brings new challenges to this topic, due to a different representation of visual information and the additional degrees of freedom available to viewers. A model of visual attention is important for continuing research in this direction. In this paper we develop such a model for head-direction trajectories. The method consists of three basic steps: First, the computed head angular speed is used to exclude the parts of a trajectory where motion is too fast to fixate the viewer's attention. Second, fixation locations of different subjects are fused together, optionally preceded by a re-sampling step to obtain a uniform distribution of points on the sphere. Finally, Gaussian-based filtering is performed to produce continuous fixation maps. The developed model can be used to obtain ground-truth experimental data when eye tracking is not available.
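
The three steps above can be illustrated with a short sketch. The following is a minimal, assumption-laden example rather than the authors' implementation: it assumes head directions are given as unit vectors sampled at a fixed rate, and the threshold, sampling rate, and map-size constants are placeholders chosen for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    SPEED_THRESHOLD_DEG = 20.0   # assumed angular-speed cutoff in deg/s (illustrative)
    SAMPLE_RATE_HZ = 30.0        # assumed head-tracking sample rate (illustrative)
    MAP_W, MAP_H = 256, 128      # assumed equirectangular fixation-map size (illustrative)

    def fixation_samples(directions):
        # Step 1: keep samples whose instantaneous angular speed stays below the
        # threshold, i.e. parts of the trajectory slow enough to count as fixations.
        d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
        cos_step = np.clip(np.sum(d[:-1] * d[1:], axis=1), -1.0, 1.0)
        speed = np.degrees(np.arccos(cos_step)) * SAMPLE_RATE_HZ   # deg/s
        keep = np.concatenate(([True], speed < SPEED_THRESHOLD_DEG))
        return d[keep]

    def fuse_subjects(per_subject_fixations):
        # Step 2: fuse fixation points of all subjects into a single point set.
        # (The optional re-sampling to a uniform spherical grid would go here.)
        return np.concatenate(per_subject_fixations, axis=0)

    def fixation_map(points, sigma_px=8.0):
        # Step 3: accumulate points on an equirectangular grid and apply a
        # Gaussian blur to obtain a continuous fixation map. The "wrap" mode is
        # only an approximation of the true spherical topology.
        lon = np.arctan2(points[:, 1], points[:, 0])           # longitude in [-pi, pi]
        lat = np.arcsin(np.clip(points[:, 2], -1.0, 1.0))      # latitude in [-pi/2, pi/2]
        x = ((lon + np.pi) / (2 * np.pi) * MAP_W).astype(int) % MAP_W
        y = np.clip(((lat + np.pi / 2) / np.pi * MAP_H).astype(int), 0, MAP_H - 1)
        hist = np.zeros((MAP_H, MAP_W))
        np.add.at(hist, (y, x), 1.0)
        blurred = gaussian_filter(hist, sigma=sigma_px, mode="wrap")
        return blurred / blurred.max() if blurred.max() > 0 else blurred

    # Example usage with synthetic data for two subjects:
    # rng = np.random.default_rng(0)
    # subjects = [rng.normal(size=(600, 3)) for _ in range(2)]
    # fused = fuse_subjects([fixation_samples(s) for s in subjects])
    # saliency = fixation_map(fused)

The optional re-sampling step that equalizes point density on the sphere is omitted in this sketch; in practice, fixation points could be re-sampled onto a near-uniform spherical grid before fusion so that polar regions are not over-weighted.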


Published in:
2017 IEEE International Conference on Multimedia & Expo (ICME)
Presented at:
2017 IEEE International Conference on Multimedia & Expo (ICME), Hong Kong, July 10-14, 2017
Year:
2017
Publisher:
IEEE
ISBN:
978-1-5386-0560-8





