Abstract

In this work we propose an approach for learning task specifications automatically by observing human demonstrations. These learned specifications allow a robot to combine representations of individual actions to achieve a high-level goal. We hypothesize that task specifications consist of variables that exhibit a pattern of change that is invariant across demonstrations, and we identify these specifications at different stages of task completion. Changes in the task constraints allow us to identify transitions in the task description and to segment the task into sub-tasks. We extract the following task-space constraints: (1) the reference frame in which to express the task variables; (2) the variable of interest at each time step (position or force at the end effector); and (3) a factor that modulates the contribution of force and position in a hybrid impedance controller. The approach was validated on a 7-DOF Kuka arm performing two different tasks: grating vegetables and extracting a battery from a charging stand.
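To illustrate the core idea, the sketch below is a minimal example, not the paper's implementation: it assumes time-aligned demonstrations, uses a simple variance threshold as the invariance criterion, and all names, thresholds, and toy data are placeholders. It flags task variables whose evolution is nearly identical across demonstrations and splits the task wherever the set of such constrained variables changes, mirroring the segmentation-by-constraint-change idea described above.

```python
import numpy as np

def find_invariant_variables(demos, threshold=0.05):
    """Flag task variables that are (nearly) invariant across demonstrations.

    demos: array of shape (n_demos, T, D) with time-aligned demonstrations of
           D task variables (e.g. end-effector position and force components).
    Returns a boolean (T, D) array: True where a variable shows low spread
    across demonstrations at that time step, i.e. a candidate constraint.
    """
    spread = demos.std(axis=0)      # (T, D) spread across demonstrations
    return spread < threshold

def segment_by_constraint_changes(constrained):
    """Split the task where the set of constrained variables changes.

    constrained: boolean (T, D) array from find_invariant_variables.
    Returns a list of (start, end) index pairs, one per sub-task segment.
    (Real data would typically need temporal smoothing to avoid flicker.)
    """
    boundaries = [0]
    for t in range(1, constrained.shape[0]):
        if not np.array_equal(constrained[t], constrained[t - 1]):
            boundaries.append(t)
    boundaries.append(constrained.shape[0])
    return list(zip(boundaries[:-1], boundaries[1:]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, D = 100, 3
    # Toy data: variable 0 is tightly reproduced in the first half of the
    # task, variable 1 in the second half, variable 2 varies freely.
    base = rng.normal(size=(T, D))
    demos = base + rng.normal(scale=0.5, size=(5, T, D))
    demos[:, :50, 0] = base[:50, 0] + rng.normal(scale=1e-3, size=(5, 50))
    demos[:, 50:, 1] = base[50:, 1] + rng.normal(scale=1e-3, size=(5, 50))

    constrained = find_invariant_variables(demos)
    print(segment_by_constraint_changes(constrained))  # -> [(0, 50), (50, 100)]
```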
