Towards Modelling of Visual Saliency in Point Clouds for Immersive Applications

Modelling human visual attention is of great importance in the field of computer vision and has been widely explored for 3D imaging. Yet, in the absence of ground truth data, it is unclear whether such predictions align with actual human viewing behavior in virtual reality environments. In this study, we work towards solving this problem by conducting an eye-tracking experiment in an immersive 3D scene that offers six degrees of freedom. A wide range of static point cloud models is inspected by human subjects, while their gaze is captured in real time. The visual attention information is used to extract fixation density maps, which can be further exploited for saliency modelling. To obtain high-quality fixation points, we devise a scheme that utilizes every recorded gaze measurement from the two eye cameras of our setup. The obtained fixation density maps, together with the recorded gaze and head trajectories, are made publicly available to enrich visual saliency datasets for 3D models.
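A fixation density map of the kind described above can be computed by spreading each fixation point over nearby points of the cloud with a Gaussian kernel and accumulating the contributions per point. The sketch below illustrates this general technique; the kernel bandwidth `sigma` and the max-normalisation are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def fixation_density_map(points, fixations, sigma=0.05):
    """Per-point saliency for a point cloud.

    points:    (N, 3) array of point cloud coordinates.
    fixations: (M, 3) array of fixation locations on/near the model.
    sigma:     Gaussian kernel bandwidth (assumed value, scene units).
    Returns an (N,) array normalised to [0, 1].
    """
    # Squared distance from every cloud point to every fixation.
    d2 = ((points[:, None, :] - fixations[None, :, :]) ** 2).sum(axis=-1)
    # Sum of Gaussian kernels centred at the fixations.
    density = np.exp(-d2 / (2.0 * sigma**2)).sum(axis=1)
    return density / density.max()
```

Points close to many fixations receive high values, while rarely viewed regions decay towards zero, which is the property saliency models are evaluated against.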

Published in:
2019 IEEE International Conference on Image Processing (ICIP)
Presented at:
IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, September 22-25, 2019
