Cross-pose Facial Expression Recognition

In real-world facial expression recognition (FER) applications, it is not practical for a user to enroll his/her facial expressions under different pose angles. A desirable property of a FER system would therefore be to allow the user to enroll facial expressions under a single pose, for example frontal, and still be able to recognize them under different pose angles. In this paper, we address this problem and present a method to recognize the six prototypic facial expressions of an individual across different pose angles. We use Partial Least Squares (PLS) to map the expressions from different poses into a common subspace in which the covariance between them is maximized. We show that PLS can be used effectively for facial expression recognition across poses by training on coupled expressions of the same identity from two different poses. This way of training lets the learned bases model the differences between expressions of different poses while excluding the effect of identity. We have evaluated the proposed approach on the BU3DFE database [1]. We experiment with raw intensity values and Gabor filter responses for local face representation, and demonstrate that the two representations perform similarly when the frontal view is the input pose, but that Gabor outperforms intensity for the other pose pairs. We also perform a detailed analysis of the parameters used in the experiments. We show that it is possible to successfully recognize the expressions of an individual from arbitrary viewpoints given only his/her expressions from a single pose, with the frontal pose being the most practical case. In particular, if the difference in view angle is relatively small, i.e., less than 30 degrees, the accuracy is over 90%, and the correct recognition rate is often around 99% when there is only a 15-degree difference between the view angles of the matched faces. Overall, we achieved an average recognition rate of 87.6% when using frontal images as the gallery and 86.6% when considering all pose pairs.

Presented at:
2nd International Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE), In conjunction with the IEEE FG 2013, Shanghai, China, April 22-26, 2013

 Record created 2013-01-15, last modified 2018-03-17
