Motion Integration in Visual Attention Models for Predicting Simple Dynamic Scenes

Visual attention models mimic the ability of a visual system to detect potentially relevant parts of a scene. This attentional selection is a prerequisite for higher-level tasks such as object recognition. Given the high relevance of temporal aspects in human visual attention, computer models of visual attention must consider dynamic as well as static information. While several models have been proposed that extend the classical static model to motion, a comparison of the performance of models integrating motion in different ways is still not available. In this article, we present a comparative study of visual attention models combining both static and dynamic features. The models are compared by measuring their performance against the eye movement patterns of human subjects. Simple synthetic video sequences containing static and moving objects are used to assess the suitability of each model. Qualitative and quantitative results provide a ranking of the different models.
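
As a rough illustration of the kind of static/dynamic fusion such models perform, the sketch below computes a crude static saliency map (center-surround intensity contrast), a crude dynamic map (frame differencing), and combines them with a weighted sum. This is a minimal sketch under assumed choices (Gaussian scales, equal fusion weights, frame differencing as the motion feature); it is not the specific models or fusion schemes evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def static_saliency(frame, sigma_center=2, sigma_surround=8):
    """Center-surround intensity contrast: a crude static feature map (assumed scales)."""
    center = gaussian_filter(frame, sigma_center)
    surround = gaussian_filter(frame, sigma_surround)
    return np.abs(center - surround)

def dynamic_saliency(frame, prev_frame, sigma=2):
    """Smoothed absolute frame difference: a crude motion feature map."""
    return gaussian_filter(np.abs(frame - prev_frame), sigma)

def normalize(m):
    """Rescale a map to [0, 1]; return zeros for a flat map."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fused_saliency(frame, prev_frame, w_static=0.5, w_dynamic=0.5):
    """Weighted-sum fusion of the normalized maps (one of several possible schemes)."""
    s = normalize(static_saliency(frame))
    d = normalize(dynamic_saliency(frame, prev_frame))
    return w_static * s + w_dynamic * d

# Toy two-frame sequence: a bright static square and a small moving dot.
prev = np.zeros((64, 64)); prev[10:20, 10:20] = 1.0; prev[40, 30] = 1.0
curr = np.zeros((64, 64)); curr[10:20, 10:20] = 1.0; curr[40, 34] = 1.0
sal = fused_saliency(curr, prev)
print("most salient location:", np.unravel_index(sal.argmax(), sal.shape))
```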


Published in:
Proceedings of the IS&T/SPIE 19th Annual Symposium on Electronic Imaging, SPIE 6492-47
Presented at:
IS&T/SPIE 19th Annual Symposium on Electronic Imaging, San Jose, California, USA, January 28 - February 1, 2007
Year:
2007