Dynamic Attentive System for Omnidirectional Video
In this paper, we propose a dynamic attentive system for detecting the most salient regions of interest in omnidirectional video. Spot selection is based on a computational model of dynamic visual attention. To operate on video sequences, the process encompasses multiscale contrast detection of both static and motion information, as well as the fusion of this information into a scalar map called the saliency map. All processing is performed in spherical geometry. While the static contribution, collected in a static saliency map, relies on our previous work, we propose a novel motion model based on a block matching algorithm computed on the sphere. A spherical motion field pyramid is first estimated from two consecutive omnidirectional images by varying the block size; this pyramid constitutes the input of the motion model. The motion saliency map is then obtained by applying a multiscale motion contrast detection method that highlights the most salient motion regions. Finally, the static and motion saliency maps are integrated into a spherical dynamic saliency map. To illustrate the concept, the proposed attentive system is applied to real omnidirectional video sequences.
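The pipeline above (block matching motion estimation, multiscale motion contrast, fusion with a static map) can be illustrated with a minimal planar sketch. This is an assumption-laden simplification: the paper computes block matching and contrast on the sphere, whereas the code below works on a flat image grid, uses exhaustive SAD search, approximates multiscale contrast with box-filtered center-surround differences, and fuses the two maps with a fixed linear weight (the paper does not specify the fusion weights). All function names are illustrative, not from the paper.

```python
import numpy as np

def block_matching(prev, curr, block=8, search=4):
    """Dense motion field by exhaustive block matching (SAD criterion).
    Planar stand-in for the paper's spherical block matching."""
    h, w = prev.shape
    vy = np.zeros((h // block, w // block))
    vx = np.zeros_like(vy)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = curr[y0:y0 + block, x0:x0 + block]
            best, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue  # candidate window falls outside the frame
                    sad = np.abs(ref - prev[yy:yy + block, xx:xx + block]).sum()
                    if sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            vy[by, bx], vx[by, bx] = best_dy, best_dx
    return vy, vx

def motion_saliency(vy, vx, scales=(1, 2)):
    """Multiscale motion contrast: deviation of the motion magnitude
    from its local mean, accumulated over several surround sizes."""
    mag = np.hypot(vy, vx)
    sal = np.zeros_like(mag)
    for s in scales:
        k = 2 * s + 1
        pad = np.pad(mag, s, mode='edge')
        local = np.zeros_like(mag)
        for i in range(mag.shape[0]):
            for j in range(mag.shape[1]):
                local[i, j] = pad[i:i + k, j:j + k].mean()
        sal += np.abs(mag - local)  # center-surround contrast at scale s
    m = sal.max()
    return sal / m if m > 0 else sal

def fuse(static_map, motion_map, w_motion=0.5):
    """Linear combination of static and motion saliency maps
    (the 0.5 weight is an assumption, not from the paper)."""
    return (1 - w_motion) * static_map + w_motion * motion_map
```

For example, feeding two frames in which a bright square shifts by two pixels yields a motion field that is nonzero only around the square, and the fused map highlights that region over a uniform static map.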