Real-world tasks with full control over the visual scene: combining mobile gaze tracking and 4pi light-field measurements

Measuring gaze allocation during scene perception typically faces a dilemma: full control over the stimulus requires comparably constrained scenarios, while realistic tasks leave the visual input hard to control. We propose to capture the full (4pi) light-field of an office space while participants perform typical office tasks. Using a wearable eye-tracking device ("EyeSeeCam"), gaze, head and body orientation are measured along with subjective well-being and performance. In the present study, 52 participants performed four office tasks ("input", "reflection", "output", "interaction"), each with three different tools (phone, computer, paper) under varying lighting conditions and outside views. We found that eye and head were affected by view in fundamentally different ways, and that this dependence was modulated by task and tool, unless participants' task was related to reading. Importantly, for some tasks head movements rather than eye movements dominated gaze allocation. Since head and body movements frequently remain unaddressed in eye-tracking studies, our data highlight the importance of unconstrained settings. Beyond assessing the interaction between top-down (task-related) and bottom-up (stimulus-related) factors in deploying gaze and attention under real-world conditions, such data are indispensable for realistic models of optimal workplace lighting and thus for the well-being of workplace occupants.

Published in:
Book of abstracts: 17th EUROPEAN CONFERENCE ON EYE MOVEMENTS 11-16 August 2013, Lund, Sweden, 264
Presented at:
17th EUROPEAN CONFERENCE ON EYE MOVEMENTS (ECEM), Lund, Sweden, August 11-16, 2013

