Abstract

In order to collaborate with humans, robots are often provided with a Theory of Mind (ToM) architecture. Such architectures can be evaluated through humans' perception of the robot's adaptations; however, humans are not sensitive to these adaptations in the way one might expect. In this paper, we introduce an interaction in which a robot and a human design, element by element, the content of a short story. A second-order ToM reasoning estimates the user's perception of the robot's intentions. We describe and compare three behaviors that govern the robot's decisions about the content of the story: the robot makes random decisions, the robot makes predictable decisions, or the robot makes adversarial decisions. The random condition involves no ToM, while the other two involve second-order ToM. We evaluate the ToM model by its ability to predict human decisions, and compare humans' ability to predict the robot under the different implemented behaviors. We then estimate the human's appreciation of the robot, the human's visual attention, and their perceived mutual understanding with the robot. We found that our implementation of the adversarial behavior degraded the estimated quality of the interaction, an observation we link to the lower perceived mutual understanding caused by this behavior. We also found that, in this story co-creation activity, subjects preferred the random behavior.