Abstract

A wider incorporation of robots into classrooms is hampered by current technological limitations on full autonomy in social robots. Automated speech recognition, for example, a key enabler of vocal communication, still cannot perform with sufficient accuracy. Past studies have shown that humans adjust their speech patterns to accommodate less skilled interlocutors. If such a response holds in human-robot interactions as well, we may be able to exploit it to lessen the burden on social robots and enable rich, autonomous vocal communication. In this paper we explore whether a robot's speaking ability affects children's speech patterns, learning, and engagement by designing an interaction in which a child and a robot collaborate on a Tower of Hanoi puzzle. Sixteen children aged 7-14 completed this collaborative task partnered with a social robot that communicated in one of three conditions: high verbal (full sentences), low verbal (short phrases or single words), or nonverbal (sound-based utterances). While we found no significant impact of the robot's method of vocalization on children's speech patterns or learning, children in the nonverbal condition rated the robot's intelligence significantly lower, provided feedback at higher rates, and undid the robot's moves more often. This suggests that a link may exist between a robot's perceived speaking ability and children's confidence in that robot's overall intelligence and capability in a collaborative task, as well as their empathy towards a peer they perceive as less skilled in the task.

Details