Collaboration and abstract representations: towards predictive models based on raw speech and eye-tracking data
This study explores the use of machine learning techniques to build predictive models of performance in collaborative induction tasks. More specifically, we examined how signal-level data, such as eye-gaze data and raw speech, may be used to build such models. The results show that these low-level features do have some potential to predict performance in such tasks. Implications for the design of future applications are briefly discussed.