GestureGAN for Hand Gesture-to-Gesture Translation in the Wild

Hand gesture-to-gesture translation in the wild is a challenging task since hand gestures can have arbitrary poses, sizes, locations, and self-occlusions. Therefore, this task requires a high-level understanding of the mapping between the input source gesture and the output target gesture. To tackle this problem, we propose a novel hand Gesture Generative Adversarial Network (GestureGAN). GestureGAN consists of a single generator G and a discriminator D, which take as input a conditional hand image and a target hand skeleton image. GestureGAN utilizes the hand skeleton information explicitly, and learns the gesture-to-gesture mapping through two novel losses, the color loss and the cycle-consistency loss. The proposed color loss handles the issue of "channel pollution" when back-propagating the gradients. In addition, we present the Fréchet ResNet Distance (FRD) to evaluate the quality of generated images. Extensive experiments on two widely used benchmark datasets demonstrate that the proposed GestureGAN achieves state-of-the-art performance on the unconstrained hand gesture-to-gesture translation task. Meanwhile, the generated images are high-quality and photo-realistic, allowing them to be used for data augmentation to improve the performance of a hand gesture classifier. Our model and code are available at https://github.com/Ha0Tang/GestureGAN.
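The two technical ingredients named in the abstract can be illustrated compactly. The sketch below is not the authors' reference implementation; it is a minimal PyTorch/NumPy illustration, assuming the color loss is a per-channel L1 reconstruction term (so each channel's gradient stays separate, avoiding "channel pollution") and that FRD follows the usual Fréchet-distance-between-Gaussians form, with features taken from a pretrained ResNet instead of the Inception network used by FID. The function names and shapes are illustrative only.

```python
import torch
import torch.nn.functional as F
import numpy as np
from scipy import linalg

def color_loss(fake, real):
    """Per-channel L1 loss (illustrative sketch of the color loss).

    Computing the reconstruction loss separately for the R, G, and B
    channels keeps the gradients of each channel independent during
    back-propagation, which is one plausible reading of how "channel
    pollution" is avoided.
    fake, real: tensors of shape (N, 3, H, W).
    """
    loss_r = F.l1_loss(fake[:, 0], real[:, 0])
    loss_g = F.l1_loss(fake[:, 1], real[:, 1])
    loss_b = F.l1_loss(fake[:, 2], real[:, 2])
    return loss_r + loss_g + loss_b

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians fitted to feature sets.

    For FRD, (mu, sigma) would be the mean and covariance of ResNet
    features extracted from real and generated images, respectively.
    """
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical error
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

The exact weighting and formulation in the paper may differ; the point of the sketch is that both quantities are cheap to compute on top of a standard image-to-image GAN training loop.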


Published in:
Proceedings of the 2018 ACM Multimedia Conference (MM '18), 774-782
Presented at:
26th ACM Multimedia Conference (MM), Seoul, South Korea, October 22-26, 2018
Year:
2018
Publisher:
Association for Computing Machinery, New York
ISBN:
978-1-4503-5665-7



