Learning from demonstrations with partially observable task parameters

Robot learning from demonstrations requires the robot to adapt its movements to new situations, often characterized by the positions and orientations of objects or landmarks in the robot's environment. In the task-parameterized Gaussian mixture model framework, movements are considered to be modulated with respect to a set of candidate frames of reference (coordinate systems) attached to objects in the robot's workspace. Following a similar approach, this paper addresses the problem of missing candidate frames during demonstration and reproduction, which can arise in various situations such as visual occlusion, sensor unavailability, or tasks with a variable number of descriptive features. We study this problem with a dust-sweeping task in which the robot must consider a variable number of dust areas to clean in each reproduction trial.

Presented at:
Proc. IEEE Intl Conf. on Robotics and Automation (ICRA)

Record created 2014-05-19, last modified 2019-04-16
