000227457 001__ 227457
000227457 005__ 20190317000700.0
000227457 020__ $$a978-1-5386-0560-8
000227457 0247_ $$2doi$$a10.1109/ICMEW.2017.8026231
000227457 037__ $$aCONF
000227457 245__ $$aA simple method to obtain visual attention data in head mounted virtual reality
000227457 269__ $$a2017
000227457 260__ $$bIEEE$$c2017
000227457 336__ $$aConference Papers
000227457 520__ $$aAutomatic prediction of salient regions in images is a well-developed topic in the field of computer vision. Yet, omnidirectional visual content in virtual reality brings new challenges to this topic, due to a different representation of visual information and the additional degrees of freedom available to viewers. Having a model of visual attention is important to continue research in this direction. In this paper we develop such a model for head-direction trajectories. The method consists of three basic steps: First, the computed head angular speed is used to exclude the parts of a trajectory where motion is too fast for viewers to fixate their attention. Second, fixation locations of different subjects are fused together, optionally preceded by a re-sampling step to conform to an equal distribution of points on a sphere. Finally, Gaussian-based filtering is applied to produce continuous fixation maps. The developed model can be used to obtain ground-truth experimental data when eye tracking is not available.
000227457 6531_ $$avisual attention
000227457 6531_ $$afixation maps
000227457 6531_ $$aomnidirectional visual content
000227457 6531_ $$avirtual reality
000227457 6531_ $$a360-degree images and video
000227457 700__ $$0248857$$g241533$$aUpenik, Evgeniy
000227457 700__ $$0240223$$g105043$$aEbrahimi, Touradj
000227457 7112_ $$dJuly 10-14, 2017$$cHong Kong$$a2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
000227457 773__ $$t2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
000227457 8564_ $$uhttps://infoscience.epfl.ch/record/227457/files/saliency360model_preprint.pdf$$zPreprint$$s2743657$$yPreprint
000227457 909C0 $$0252077$$pMMSPL
000227457 909CO $$pSTI$$ooai:infoscience.tind.io:227457$$qGLOBAL_SET$$pconf
000227457 917Z8 $$x241533
000227457 937__ $$aEPFL-CONF-227457
000227457 973__ $$rREVIEWED$$sPUBLISHED$$aEPFL
000227457 980__ $$aCONF
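
The abstract (field 520) outlines a three-step pipeline: speed-based exclusion of non-fixation samples, fusion of fixations across subjects, and Gaussian filtering into continuous fixation maps. Below is a minimal Python sketch of that pipeline. The speed threshold, map resolution, kernel parameter, and all function names are illustrative assumptions, not the authors' implementation; the von Mises-Fisher kernel stands in for the paper's "Gaussian-based filtering" as a standard spherical analogue of a Gaussian, and the optional re-sampling step is omitted.

# Illustrative sketch of the pipeline described in the abstract (field 520).
# All thresholds, parameters, and names are assumptions, not the authors'
# implementation; the optional spherical re-sampling step is omitted.
import numpy as np

def angular_speed(dirs, t):
    """Head angular speed (rad/s) between consecutive samples.
    dirs: (N, 3) unit head-direction vectors; t: (N,) timestamps in seconds."""
    cos = np.clip(np.einsum('ij,ij->i', dirs[:-1], dirs[1:]), -1.0, 1.0)
    return np.arccos(cos) / np.diff(t)

def fixations(dirs, t, max_speed=np.deg2rad(20.0)):
    """Step 1: drop samples where head motion is too fast to fixate attention.
    The 20 deg/s threshold is a placeholder, not the paper's value."""
    keep = angular_speed(dirs, t) < max_speed
    return dirs[1:][keep]

def fixation_map(subject_trajs, width=256, height=128, kappa=80.0):
    """Steps 2-3: fuse fixations of all subjects and filter them into a
    continuous equirectangular fixation map normalized to [0, 1]."""
    fused = np.vstack([fixations(d, t) for d, t in subject_trajs])
    # Unit vectors at the centre of every equirectangular pixel.
    lon = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi
    L, B = np.meshgrid(lon, lat)
    grid = np.stack([np.cos(B) * np.cos(L),
                     np.cos(B) * np.sin(L),
                     np.sin(B)], axis=-1)              # (height, width, 3)
    # Spherical Gaussian-like (von Mises-Fisher) kernel around each fixation:
    # exp(kappa * (cos(theta) - 1)) peaks where a pixel aligns with a fixation.
    m = np.exp(kappa * (grid @ fused.T - 1.0)).sum(axis=-1)
    return m / m.max()

# Usage: fixation_map([(dirs_a, t_a), (dirs_b, t_b)]) -> (128, 256) map in [0, 1].

Larger kappa narrows the kernel on the sphere, playing the role of a smaller Gaussian sigma; the paper's exact kernel, threshold, and parameters may differ.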