Using Photorealistic Face Synthesis and Domain Adaptation to Improve Facial Expression Analysis

Synthesizing realistic faces across domains to learn deep models has attracted increasing attention in facial expression analysis, as it helps improve expression recognition accuracy when only a small number of real training images is available. However, learning from synthetic face images can be problematic: the distribution discrepancy between low-quality synthetic images and real face images means the learned model may not achieve the desired performance when applied to real-world scenarios. To this end, we propose a new attribute-guided face image synthesis method that performs translation between multiple image domains using a single model. In addition, we adopt the proposed model to learn from synthetic faces by matching the feature distributions between different domains while preserving each domain's characteristics. We evaluate the effectiveness of the proposed approach on several face datasets for generating realistic face images, and we demonstrate that expression recognition performance can be enhanced by our face synthesis model. Moreover, we conduct experiments on a near-infrared dataset containing facial expression videos of drivers to assess performance on in-the-wild data for driver emotion recognition.
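The record provides only the abstract, so the sketch below is purely illustrative rather than the authors' actual architecture. Under assumed shapes and layer choices, it shows the two ingredients the abstract names: a single attribute-guided generator that translates an input face toward a target expression domain (the attribute vector is tiled and concatenated with the image), and a simple feature-distribution-matching term (here a linear-kernel MMD between batch feature means) of the kind used to reduce the gap between synthetic and real features. All names (AttributeGuidedGenerator, mmd_loss) and dimensions are hypothetical.

import torch
import torch.nn as nn

class AttributeGuidedGenerator(nn.Module):
    """Toy single-model, multi-domain translator conditioned on a
    target-attribute vector (illustrative; not the paper's network)."""
    def __init__(self, img_channels=3, attr_dim=5, hidden=64):
        super().__init__()
        # The attribute vector is tiled spatially and concatenated
        # with the input image along the channel dimension.
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + attr_dim, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, img_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x, attr):
        # x: (B, C, H, W) input faces; attr: (B, attr_dim) target labels.
        b, _, h, w = x.shape
        attr_map = attr.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x, attr_map], dim=1))

def mmd_loss(feat_syn, feat_real):
    """Linear-kernel MMD: squared distance between the mean features of the
    synthetic and real batches, one common way to align feature distributions."""
    return (feat_syn.mean(dim=0) - feat_real.mean(dim=0)).pow(2).sum()

# Usage sketch: translate a batch of faces toward a hypothetical "happy" attribute.
g = AttributeGuidedGenerator()
imgs = torch.randn(4, 3, 64, 64)
target_attrs = torch.zeros(4, 5)
target_attrs[:, 2] = 1.0
fake = g(imgs, target_attrs)  # (4, 3, 64, 64)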


Presented at:
The 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019), Lille, France, May 14-18, 2019
Year:
2019
Publisher:
IEEE
