Using Visual Attention to Evaluate Collaborative Control Architectures for Human-Robot Interaction
Collaborative control architectures assist human users in performing tasks without undermining their capabilities or curtailing the natural development of their skills. In this study, we evaluate our collaborative control architecture by investigating the visual attention patterns of robotic wheelchair users. Our initial hypothesis was that users would require less visual attention for driving while being assisted by the collaborative system, allowing them to concentrate on higher-level cognitive tasks such as planning. However, our analysis of eye-gaze patterns, recorded by a head-mounted eye-tracking system, supports the opposite conclusion: saccadic activity increases and becomes more chaotic under the assisted mode. Our findings highlight the need for techniques that help the user form an appropriate mental model of the collaborative control architecture.