Learning bimanual end-effector poses from demonstrations using task-parameterized dynamical systems
When addressing human-robot skill transfer in task space, learning algorithms very often encode only the Cartesian position of the end-effector rather than its full pose. However, orientation is just as important as position, if not more so, for successfully performing a manipulation task. In this paper, we present a framework that allows robots to learn the full poses of their end-effectors in a task-parameterized manner. Our approach permits the encoding of complex skills, such as those found in bimanual manipulation scenarios, where the generalized coordination patterns between end-effectors (i.e., position and orientation patterns) need to be considered. The proposed framework combines a dynamical systems formulation of the demonstrated trajectories, both in R^3 and SO(3), with task-parameterized probabilistic models that build local task representations in both spaces, from which the relevant features of the demonstrated skill can be extracted. We validate our approach with an experiment in which two 7-DoF WAM robots learn to perform a bimanual sweeping task.
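The sketch below is a minimal illustration, not the authors' implementation, of two ingredients named in the abstract for the position part of the skill: mapping local Gaussian components through task parameters (frames), fusing them by a product of Gaussians, and tracking the resulting attractor with a spring-damper dynamical system in R^3. All function names, gains, and frame values are assumptions made for illustration only; the orientation (SO(3)) part is omitted.

```python
import numpy as np

def transform_component(mu_local, sigma_local, A, b):
    """Map a Gaussian expressed in a local task frame (A, b) to the global frame."""
    return A @ mu_local + b, A @ sigma_local @ A.T

def gaussian_product(mus, sigmas):
    """Precision-weighted product of Gaussians: fuses per-frame predictions."""
    precisions = [np.linalg.inv(s) for s in sigmas]
    sigma = np.linalg.inv(sum(precisions))
    mu = sigma @ sum(p @ m for p, m in zip(precisions, mus))
    return mu, sigma

def spring_damper_step(x, dx, x_attractor, kp=100.0, kv=20.0, dt=0.01):
    """One Euler step of a spring-damper dynamical system toward the attractor."""
    ddx = kp * (x_attractor - x) - kv * dx
    dx = dx + ddx * dt
    x = x + dx * dt
    return x, dx

# Illustrative usage with two hypothetical task frames (e.g., one per end-effector).
A1, b1 = np.eye(3), np.array([0.0, 0.0, 0.0])
A2, b2 = np.eye(3), np.array([0.5, 0.1, 0.0])
mu_loc, sig_loc = np.zeros(3), 0.01 * np.eye(3)   # one local Gaussian component

mu1, sig1 = transform_component(mu_loc, sig_loc, A1, b1)
mu2, sig2 = transform_component(mu_loc, sig_loc, A2, b2)
x_att, _ = gaussian_product([mu1, mu2], [sig1, sig2])

x, dx = np.zeros(3), np.zeros(3)
for _ in range(200):
    x, dx = spring_damper_step(x, dx, x_att)
```

In the full method each frame carries its own learned component per mixture state, and the same fusion idea is applied to orientation using a suitable parameterization of SO(3); the snippet only conveys the structure of the computation.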
Year: 2015
Pages: 464-470
Event place: Hamburg, Germany