Abstract

Measuring gaze allocation during scene perception typically faces a dilemma: full control over the stimulus requires comparably constrained scenarios, while realistic tasks leave the visual input hard to control. We propose to capture the full (4π) light field of an office space while participants perform typical office tasks. Using a wearable eye-tracking device ("EyeSeeCam"), gaze, head and body orientation are measured along with subjective well-being and performance. In the present study, 52 participants performed four office tasks ("input", "reflection", "output", "interaction"), each with three different tools (phone, computer, paper) under varying lighting conditions and outside views. We found that eye and head movements were affected by the view in fundamentally different ways, and that this dependence was modulated by task and tool, unless the participants' task was related to reading. Importantly, for some tasks head movements rather than eye movements dominated gaze allocation. Since head and body movements frequently remain unaddressed in eye-tracking studies, our data highlight the importance of unconstrained settings. Beyond assessing the interaction between top-down (task-related) and bottom-up (stimulus-related) factors in deploying gaze and attention under real-world conditions, such data are indispensable for realistic models of optimal workplace lighting and thus for occupants' well-being at the workplace.
