Skills Learning in Robots by Interaction with Users and Environment

The fast technological evolution and dissemination of multimodal sensors and compliant actuators bring a new human-centric perspective to robotics. The variety of human-robot interactions that stems from these new capabilities unveils compelling challenges for machine learning. An attractive approach to the problem of transferring skills to robots is to take inspiration from the way humans learn by imitation, adaptation and self-refinement. Such learning strategies require various types of interaction with the end-users and with the robot's environment. The overall skill acquisition process can hardly be segmented or sequenced in a specific way in advance. This highlights the importance of finding a representation of skills that can be shared by different learning strategies and that can accommodate multimodal continuous data streams for both analysis and synthesis. The aim is to provide robots with a representation of rich motor skills that can handle recognition, prediction, synthesis and refinement in a continuous and synergistic way. The representation must also be robust to the various sources of perturbation that persistently arise from the environment, from the user and from the robot. I present an approach that exploits the variability across multiple demonstrations and the covariability of sensorimotor signals to extract the essential characteristics of a task or skill. This information is used within an optimal control strategy to provide the robot with a minimal intervention controller that regulates the stiffness and damping of the robot's actions according to the estimated precision and coordination requirements.
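The core idea of the final sentences — mapping the variability of demonstrations to time-varying stiffness, so the controller intervenes strongly only where demonstrations are consistent — can be illustrated with a minimal one-dimensional sketch. This is not the report's implementation: the function names, the capped inverse-variance gain mapping, and the critically damped spring-damper law are all illustrative assumptions.

```python
import numpy as np

def estimate_precision_profile(demos):
    """Per-timestep mean and precision (inverse variance) from demonstrations.

    demos: array of shape (n_demos, T), 1-D trajectories for simplicity.
    Returns (mean, precision), each of shape (T,).
    """
    mean = demos.mean(axis=0)
    var = demos.var(axis=0) + 1e-6  # regularize to avoid division by zero
    return mean, 1.0 / var

def minimal_intervention_gains(precision, k_scale=1.0, k_max=100.0):
    """Map precision to bounded stiffness K; damping D = 2*sqrt(K)
    gives a critically damped response (illustrative choice)."""
    K = np.minimum(k_scale * precision, k_max)
    D = 2.0 * np.sqrt(K)
    return K, D

def impedance_command(x, dx, mu, K, D):
    """Spring-damper command tracking the demonstrated mean mu."""
    return K * (mu - x) - D * dx
```

Where the demonstrations agree (low variance), the precision and hence the stiffness are high and the robot tracks the mean closely; where they diverge, the gains drop and the robot remains compliant, which is the minimal-intervention behavior described above.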


    • EPFL-REPORT-201762

    Record created on 2014-09-18, modified on 2016-08-09

