Structured Prediction of 3D Human Pose with Deep Neural Networks

Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to regress directly from an image to the 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which incurs a high computational cost at inference time. In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and account for joint dependencies. We demonstrate that our approach outperforms state-of-the-art methods both in terms of structure preservation and prediction accuracy.
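The sketch below illustrates, in PyTorch, the pipeline the abstract describes: an overcomplete autoencoder learns a latent pose code of higher dimension than the pose itself, a CNN regresses from the image to that latent code, and the decoder maps the predicted code back to 3D joint positions. All layer sizes, the joint count, the backbone, and the class names are illustrative assumptions drawn only from the abstract, not the authors' exact architecture.

```python
# Minimal sketch, assuming 17 joints (51-D pose) and a 2048-D overcomplete latent space.
import torch
import torch.nn as nn

POSE_DIM = 51      # assumed: 17 joints * 3 coordinates
LATENT_DIM = 2048  # assumed: overcomplete, i.e. larger than POSE_DIM


class OvercompleteAutoencoder(nn.Module):
    """Learns a high-dimensional latent representation of 3D poses."""
    def __init__(self, pose_dim=POSE_DIM, latent_dim=LATENT_DIM):
        super().__init__()
        # Encoder lifts the pose vector into a higher-dimensional latent space.
        self.encoder = nn.Sequential(nn.Linear(pose_dim, latent_dim), nn.ReLU())
        # Decoder projects the latent code back to joint coordinates.
        self.decoder = nn.Linear(latent_dim, pose_dim)

    def forward(self, pose):
        z = self.encoder(pose)
        return self.decoder(z), z


class LatentPoseRegressor(nn.Module):
    """CNN that regresses an input image to the latent pose code."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        # Small illustrative backbone, not the paper's network.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, latent_dim)

    def forward(self, image):
        return self.head(self.features(image).flatten(1))


# Inference: regress the latent code from the image, then decode it to a 3D pose.
autoencoder = OvercompleteAutoencoder()
regressor = LatentPoseRegressor()
image = torch.randn(1, 3, 128, 128)              # dummy input image
pose_3d = autoencoder.decoder(regressor(image))  # shape (1, POSE_DIM)
```

In this reading, the autoencoder would be trained first on ground-truth poses so the latent space encodes joint dependencies, after which the CNN is trained to regress to that fixed latent space; decoding then yields structurally consistent poses at low inference cost.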


Presented at: British Machine Vision Conference (BMVC), York, UK, September 19-22, 2016
Year: 2016

Fulltext:
tekin_bmvc16_abstract (PDF)
tekin_bmvc16 (PDF)