Abstract

Understanding the role of gaze in conversations and social interactions, or exploiting it for HRI applications, is an ongoing research subject. In these contexts, vision-based eye trackers are preferred, as they are non-invasive and allow people to behave more naturally. In particular, appearance-based methods (ABM) are very promising: they can perform online gaze estimation, have the potential to be head-pose and person invariant, and can accommodate a wider range of situations, including user mobility and the resulting low-resolution images. However, they may also lack robustness when several of these challenges are present simultaneously. In this work, we address gaze coding in human-human interactions and present a simple method, based on a few manually annotated frames, that substantially reduces the error of a head-pose-invariant ABM, as shown on a dataset of 6 interactions.
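
The abstract does not detail how the annotated frames are used, so the following is only a minimal sketch of one plausible instance of the idea: fitting a per-person affine correction to the (yaw, pitch) output of a pretrained appearance-based gaze estimator from a handful of manually annotated frames. All function names and the choice of a least-squares affine model are assumptions for illustration, not the paper's method.

```python
import numpy as np


def fit_correction(predicted, annotated):
    """Fit an affine map from estimator output (yaw, pitch) to annotated
    gaze angles via least squares. Both inputs are (N, 2) arrays; a few
    frames (N >= 3) suffice for the 6 parameters. Hypothetical helper."""
    X = np.hstack([predicted, np.ones((len(predicted), 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X, annotated, rcond=None)         # (3, 2) weights
    return W


def apply_correction(predicted, W):
    """Apply the fitted affine correction to new estimator outputs."""
    X = np.hstack([predicted, np.ones((len(predicted), 1))])
    return X @ W


# Toy usage: simulate an estimator with a systematic scale and bias error,
# then correct it using 5 "manually annotated" frames.
rng = np.random.default_rng(0)
gt = rng.uniform(-30.0, 30.0, size=(5, 2))       # annotated gaze angles (deg)
pred = 0.9 * gt + np.array([4.0, -2.0])          # biased estimator output
W = fit_correction(pred, gt)
print(np.abs(apply_correction(pred, W) - gt).mean())  # residual near zero
```

Under the assumption that the dominant errors of the pretrained estimator are smooth, person-specific biases, such a low-parameter correction can be estimated reliably from very few labels, which matches the abstract's emphasis on only a few manually annotated frames per interaction.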
