Conference paper

Motion Integration in Visual Attention Models for Predicting Simple Dynamic Scenes

Visual attention models mimic the ability of a visual system to detect potentially relevant parts of a scene. This attentional selection is a prerequisite for higher-level tasks such as object recognition. Given the high relevance of temporal aspects in human visual attention, computer models of visual attention must consider dynamic information as well as static information. While several models have been proposed that extend the classical static model to motion, a comparison of the performance of models integrating motion in different ways is still not available. In this article, we present a comparative study of visual attention models combining both static and dynamic features. The models are compared by measuring their respective performance against the eye movement patterns of human subjects. Simple synthetic video sequences containing static and moving objects are used to assess model suitability. Qualitative and quantitative results provide a ranking of the different models.
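The combination of static and dynamic features that the abstract describes can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual method: it uses global intensity contrast as a stand-in for a static conspicuity map, absolute frame differencing as a stand-in for a motion map (a real model would use optical flow or spatiotemporal filters), and two common fusion schemes (max and mean) to produce a single master saliency map over a simple synthetic scene with one static and one moving object.

```python
import numpy as np

def static_saliency(frame):
    # Static conspicuity: deviation from the mean intensity, a crude
    # stand-in for the center-surround contrast of classical static models.
    s = np.abs(frame - frame.mean())
    return s / s.max() if s.max() > 0 else s

def dynamic_saliency(prev_frame, frame):
    # Dynamic conspicuity: absolute frame difference as a proxy for
    # motion energy (hypothetical simplification of a motion channel).
    d = np.abs(frame - prev_frame)
    return d / d.max() if d.max() > 0 else d

def fused_saliency(prev_frame, frame, fusion="max"):
    # Two simple schemes for integrating the static and dynamic maps
    # into one master saliency map.
    s = static_saliency(frame)
    d = dynamic_saliency(prev_frame, frame)
    if fusion == "max":
        return np.maximum(s, d)
    return 0.5 * (s + d)  # "mean" fusion

# Toy synthetic sequence: one static bright square, one moving square.
f0 = np.zeros((32, 32))
f1 = np.zeros((32, 32))
f0[4:8, 4:8] = f1[4:8, 4:8] = 1.0   # static object (both frames)
f0[20:24, 10:14] = 1.0              # moving object at time t
f1[20:24, 12:16] = 1.0              # moving object at time t+1

d = dynamic_saliency(f0, f1)
print(d[22, 14], d[6, 6])  # motion channel responds only to the moving object
```

Under max fusion, the static object remains salient through the static channel while the moving object is additionally boosted by the motion channel; this is the kind of integration choice whose effect on predicting human fixations the study compares.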


    • EPFL-CONF-167727

    Record created on 2011-07-28, modified on 2016-08-09

