ActiveMoCap: Optimized Viewpoint Selection for Active Human Motion Capture

The accuracy of monocular 3D human pose estimation depends on the viewpoint from which the image is captured. While freely moving cameras, such as those on drones, provide control over this viewpoint, automatically positioning them at the location that will yield the highest accuracy remains an open problem. This is the problem we address in this paper. Specifically, given a short video sequence, we introduce an algorithm that predicts which viewpoints should be chosen to capture future frames so as to maximize 3D human pose estimation accuracy. The key idea underlying our approach is a method to estimate the uncertainty of the 3D body pose estimates. We integrate several sources of uncertainty, originating from deep-learning-based regressors and temporal smoothness. Our motion planner yields improved 3D body pose estimates and outperforms or matches existing planners based on person following and orbiting.
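The abstract describes choosing the next viewpoint by minimizing a combined uncertainty over the 3D pose estimate. A minimal sketch of that selection step is shown below; the function names, the candidate parameterization (yaw angles around the subject), and both uncertainty callables are hypothetical stand-ins, not the paper's actual formulation.

```python
import numpy as np

def select_next_viewpoint(candidates, regressor_uncertainty, smoothness_penalty):
    """Return the candidate viewpoint with the lowest combined uncertainty.

    candidates            -- iterable of viewpoint parameters (e.g. yaw angles)
    regressor_uncertainty -- maps a viewpoint to the expected variance of the
                             pose regressor from that view (stand-in callable)
    smoothness_penalty    -- maps a viewpoint to a temporal-smoothness cost
                             (stand-in callable)
    """
    # Score every candidate and keep the one with the smallest total.
    scores = {v: regressor_uncertainty(v) + smoothness_penalty(v)
              for v in candidates}
    return min(scores, key=scores.get)

# Toy usage: eight candidate yaw angles around the subject. The toy
# uncertainty model (high near frontal views) is illustrative only.
views = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
best = select_next_viewpoint(
    views,
    regressor_uncertainty=lambda v: np.cos(v) ** 2,      # illustrative model
    smoothness_penalty=lambda v: 0.1 * abs(v - views[1]),  # prefer small moves
)
```

In the paper this scoring integrates several uncertainty sources rather than a single hand-written model, but the argmin-over-candidate-viewpoints structure is the same.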


Published in:
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Presented at:
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, Washington, USA, June 14-19, 2020
Year:
2020-06
Publisher:
IEEE
Note:
This CVPR 2020 paper is the open access version, provided by the Computer Vision Foundation.



