PifPaf: Composite Fields for Human Pose Estimation

We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility applications such as self-driving cars and delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF) to localize body parts and a Part Association Field (PAF) to associate body parts with each other to form full human poses. Our method outperforms previous methods at low resolution and in crowded, cluttered, and occluded scenes thanks to (i) our new composite field PAF, which encodes fine-grained information, and (ii) the choice of a Laplace loss for regressions, which incorporates a notion of uncertainty. Our architecture is based on a fully convolutional, single-shot, box-free design. We perform on par with the existing state-of-the-art bottom-up method on the standard COCO keypoint task and produce state-of-the-art results on a modified COCO keypoint task for the transportation domain.
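The Laplace loss mentioned in the abstract can be understood as the negative log-likelihood of a Laplace distribution, where the network predicts a scale alongside each regressed coordinate. The sketch below is illustrative only: the function name, the log-scale parameterization, and the per-coordinate treatment are assumptions, not the paper's exact formulation.

```python
import numpy as np

def laplace_loss(pred, target, log_b):
    """Negative log-likelihood of a Laplace distribution (a sketch).

    pred, target: regressed and ground-truth coordinates.
    log_b: a predicted log-scale (hypothetical name). A larger scale b
    expresses lower confidence, which down-weights the L1 error term
    while the log(2b) term penalizes claiming high uncertainty.
    """
    b = np.exp(log_b)
    return np.abs(pred - target) / b + np.log(2.0 * b)
```

With b = 1 (log_b = 0) this reduces to a plain L1 error plus the constant log 2, which is one way to see why such a loss "incorporates a notion of uncertainty" on top of standard L1 regression.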


Published in:
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 11969-11978
Presented at:
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, Jun 16-20, 2019
Year:
2019
Publisher:
IEEE
ISSN:
1063-6919
ISBN:
978-1-7281-3293-8



 Record created 2019-03-10, last modified 2020-07-22
