Computational Models of Mutual Understanding for Human-Robot Collaborative Learning
There is a growing trend towards designing learning activities featuring robots as collaborative exercises in which children work together to achieve the activity objectives, generating interactions that can trigger learning processes. Observing such activities reveals an interesting asymmetry: humans, unlike robots, are highly skilled at detecting and addressing misunderstandings, building a mutual understanding of the task, and converging on a shared solution. A social robot equipped with these abilities could monitor the interaction and contribute to it, promoting and supporting the building of a mutual understanding, which may in turn trigger learning. To verify this hypothesis, in this thesis we first develop abilities for social robots to assess how humans build a mutual understanding. To this end, we (i) propose automatic measures that reveal structures in how children "align" in their dialogue and actions when engaging in a collaborative activity aimed at fostering their computational thinking skills, and (ii) study how these lead to their task performance and learning outcomes, using data we collected in a large user study involving 78 children at schools with a dialogue-eliciting collaborative activity we designed. Then, we equip the robot with mutual modeling abilities to build a mutual understanding with a learner: we (iii) present a framework for the robot to build and maintain a mental model comprising its own beliefs about the activity and the human, as well as the human's beliefs about the robot; and (iv) evaluate the effects of robot behaviors guided by different mental models in an experiment with 61 children at schools, in which a child and the robot collaborate on a variant of the problem-solving activity used throughout the thesis.