Associate Latent Encodings in Learning from Demonstrations

We contribute a learning-from-demonstration approach that enables robots to acquire skills from multi-modal, high-dimensional data. Latent representations of the different modalities, together with the associations between them, are learned jointly through an adapted variational auto-encoder. The implementation and results are demonstrated in a robotic handwriting scenario, where visual sensory input is coupled with the arm-joint writing motion. We show that the learned latent representations construct a task manifold for the observed sensor modalities. Moreover, the learned associations can be exploited to synthesize arm-joint handwriting motion directly from an image input, in an end-to-end manner. The advantages of learning associative latent encodings are further highlighted by inference on incomplete input images, where a comparison with alternative methods demonstrates the superiority of the present approach on these challenging tasks.
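
The coupling described in the abstract can be illustrated with a small sketch. Below is a minimal PyTorch rendering of one plausible reading of the approach, assuming two per-modality variational auto-encoders that share a latent space and an L2 association penalty that pulls the two latent encodings of the same demonstration together; all names, dimensions, and weights are illustrative assumptions, not the authors' exact model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityVAE(nn.Module):
    """Encoder/decoder pair for one modality; all modalities share latent_dim."""
    def __init__(self, input_dim, hidden_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, input_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def associative_loss(vae_a, vae_b, x_a, x_b, assoc_weight=1.0):
    """Per-modality ELBO terms plus an (assumed) L2 penalty that associates
    the latent encodings of the two modalities of the same demonstration."""
    mu_a, lv_a = vae_a.encode(x_a)
    mu_b, lv_b = vae_b.encode(x_b)
    z_a = vae_a.reparameterize(mu_a, lv_a)
    z_b = vae_b.reparameterize(mu_b, lv_b)
    recon = F.mse_loss(vae_a.dec(z_a), x_a) + F.mse_loss(vae_b.dec(z_b), x_b)
    kl = (-0.5 * torch.mean(1 + lv_a - mu_a.pow(2) - lv_a.exp())
          - 0.5 * torch.mean(1 + lv_b - mu_b.pow(2) - lv_b.exp()))
    assoc = F.mse_loss(mu_a, mu_b)  # associate the two latent encodings
    return recon + kl + assoc_weight * assoc

# Hypothetical usage: 28x28 images paired with 7-DoF, 50-step joint
# trajectories, both flattened to vectors.
vae_img = ModalityVAE(input_dim=784, hidden_dim=256, latent_dim=8)
vae_joint = ModalityVAE(input_dim=350, hidden_dim=256, latent_dim=8)
images, joints = torch.rand(4, 784), torch.rand(4, 350)  # placeholder batch
associative_loss(vae_img, vae_joint, images, joints).backward()

# End-to-end synthesis after training: encode the image, decode the joints.
mu_img, _ = vae_img.encode(images)
joint_motion = vae_joint.dec(mu_img)

Because both encoders target the same latent space, either modality can index the shared task manifold; this is what would permit the end-to-end image-to-motion synthesis and the inference from incomplete images described above.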


Published in:
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Presented at:
The Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California, USA, February 4-9, 2017
Year:
2017
Fulltext:
hyin_aaai17_641 (PDF)
video_641_new (WMV)