Dynamic Attentive System for Omnidirectional Video

In this paper, we propose a dynamic attentive system for detecting the most salient regions of interest in omnidirectional video. Spot selection is based on computer modeling of dynamic visual attention. In order to operate on video sequences, the process encompasses multiscale contrast detection of static and motion information, as well as fusion of that information into a scalar map called a saliency map. The processing is performed in spherical geometry. While the static contribution, collected in the static saliency map, relies on our previous work, we propose a novel motion model based on a block matching algorithm computed on the sphere. A spherical motion field pyramid is first estimated from two consecutive omnidirectional images by varying the block size; this pyramid constitutes the input of the motion model. Then, the motion saliency map is obtained by applying a multiscale motion contrast detection method in order to highlight the most salient motion regions. Finally, both static and motion saliency maps are integrated into a spherical dynamic saliency map. To illustrate the concept, the proposed attentive system is applied to real omnidirectional video sequences.
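The pipeline described above (block-matching motion estimation, motion contrast detection, and static/motion fusion) can be sketched in simplified form. The snippet below is a minimal planar illustration, not the paper's spherical implementation: it estimates a per-block motion field by exhaustive block matching between two frames, derives a motion saliency map as motion magnitude contrasted against its surround, and linearly fuses it with a static saliency map. The function names, block/search sizes, and the linear fusion weight are illustrative assumptions.

```python
import numpy as np

def block_matching(prev, curr, block=8, search=4):
    """Estimate a per-block motion field between two frames by exhaustive
    block matching (planar simplification of the spherical case).
    Returns an array of (dy, dx) displacements, one per block."""
    H, W = prev.shape
    by, bx = H // block, W // block
    field = np.zeros((by, bx, 2), dtype=int)
    for i in range(by):
        for j in range(bx):
            y0, x0 = i * block, j * block
            ref = curr[y0:y0 + block, x0:x0 + block].astype(float)
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if ys < 0 or xs < 0 or ys + block > H or xs + block > W:
                        continue  # candidate block falls outside the frame
                    cand = prev[ys:ys + block, xs:xs + block]
                    sad = np.abs(ref - cand).sum()  # sum of absolute differences
                    if sad < best:
                        best, best_d = sad, (dy, dx)
            field[i, j] = best_d
    return field

def motion_saliency(field):
    """Simple motion contrast: motion magnitude minus the surround
    (global mean) magnitude, clipped and normalized to [0, 1]."""
    mag = np.linalg.norm(field.astype(float), axis=2)
    sal = np.clip(mag - mag.mean(), 0, None)
    return sal / sal.max() if sal.max() > 0 else sal

def fuse(static_sal, motion_sal, w=0.5):
    """Integrate static and motion saliency maps into one dynamic map
    by linear combination (illustrative fusion weight)."""
    return w * static_sal + (1 - w) * motion_sal
```

A small usage example: for two frames in which a bright patch shifts by (2, 2) pixels, `block_matching` recovers the corresponding backward displacement for the block containing the patch, and `fuse` blends the resulting motion saliency with a static map.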

Published in:
PCS 2009: Picture Coding Symposium, 529-532
Presented at:
Picture Coding Symposium, Chicago, May 6-8, 2009
IEEE Service Center, 445 Hoes Lane, PO Box 1331, Piscataway, NJ 08855-1331, USA
Invited paper

