Temporal analysis of multimodal data to predict collaborative learning outcomes
The analysis of multiple data streams is a long-standing practice within educational research. Both multimodal data analysis and temporal analysis have been applied successfully, but in the area of collaborative learning, very few studies have investigated the specific advantages of multiple modalities over a single modality, especially in combination with temporal analysis. In this paper, we investigate whether both the use of multimodal data and the move from averages and counts to temporal features in a collaborative setting provide a better prediction of learning gains. To address these questions, we analyze multimodal data collected from 25 dyads of 9- to 11-year-old students working with a fractions intelligent tutoring system. Assessing the relation of dual gaze, tutor log, audio and dialog data to students' learning gains, we find that a combination of modalities, especially those at a smaller time scale, such as gaze and audio, provides a more accurate prediction of learning gains than models based on a single modality. Our work contributes to the understanding of how analyzing multimodal data in a temporal manner provides additional insight into the collaborative learning process.
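To make the contrast between aggregate and temporal multimodal features concrete, the following sketch shows one way such a comparison could be set up. It is not the authors' pipeline: the data are synthetic, the feature names (gaze synchrony, talk ratio) and the leave-one-dyad-out ridge regression are assumptions chosen only to illustrate the idea of predicting learning gains from session-level averages versus window-level dynamics.

```python
# Illustrative sketch (not the paper's actual pipeline): compare aggregate vs.
# temporal multimodal features for predicting per-dyad learning gains.
# All data below are synthetic; feature choices are assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_dyads, n_windows = 25, 60  # e.g., 60 time windows per dyad session

# Synthetic per-window signals for two modalities per dyad:
# gaze_sync = joint visual attention per window, speech = talk ratio per window.
gaze_sync = rng.uniform(0, 1, size=(n_dyads, n_windows))
speech = rng.uniform(0, 1, size=(n_dyads, n_windows))

# Synthetic learning gain that depends partly on cross-modal temporal coupling.
cross_corr = np.array([np.corrcoef(g, s)[0, 1] for g, s in zip(gaze_sync, speech)])
learning_gain = 0.5 * gaze_sync.mean(axis=1) + 0.5 * cross_corr \
    + rng.normal(0, 0.05, n_dyads)

# Aggregate features: session-level means only (the "averages and counts" view).
X_aggregate = np.column_stack([gaze_sync.mean(axis=1), speech.mean(axis=1)])

# Temporal features: add window-to-window dynamics, e.g., lag-1 autocorrelation
# of gaze synchrony and the cross-modal correlation over time.
def lag1_autocorr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

X_temporal = np.column_stack([
    X_aggregate,
    [lag1_autocorr(g) for g in gaze_sync],
    cross_corr,
])

loo = LeaveOneOut()  # leave-one-dyad-out, mirroring the small-N setting
for name, X in [("aggregate", X_aggregate), ("temporal", X_temporal)]:
    scores = cross_val_score(Ridge(alpha=1.0), X, learning_gain,
                             cv=loo, scoring="neg_mean_absolute_error")
    print(f"{name:9s} MAE = {-scores.mean():.3f}")
```

Under this setup, the temporal feature set can only help if window-level dynamics carry signal beyond the session averages, which is the kind of comparison the paper's research question targets.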
WOS:000550388500001 | 2020-07-20 | Vol. 51, No. 5, pp. 1527-1547 | REVIEWED