Statistical Transformation Techniques for Face Verification Using Faces Rotated in Depth

In the framework of a Bayesian classifier based on mixtures of Gaussians, we address the problem of non-frontal face verification (where only a single frontal training image is available) by extending each frontal face model with artificially synthesized models for non-frontal views. The synthesis methods are based on several implementations of Maximum Likelihood Linear Regression (MLLR), as well as standard multivariate linear regression (LinReg). All synthesis techniques rely on prior information and learn how face models for the frontal view are related to face models for non-frontal views. The synthesis-and-extension approach is evaluated on two face verification systems: PCA based (holistic features) and DCTmod2 based (local features). Experiments on the FERET database suggest that for the PCA based system the LinReg technique is better suited than the MLLR techniques, while for the DCTmod2 based system synthesis via a new MLLR implementation obtains better performance than synthesis based on traditional MLLR. The results further suggest that extending frontal models considerably reduces errors. It is also shown that the DCTmod2 based system is less affected by out-of-plane rotations than the PCA based system; this can be attributed to its local feature representation of the face and, owing to the classifier based on mixtures of Gaussians, the lack of constraints on spatial relations between face parts, which allows facial areas to move.
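To illustrate the LinReg-based synthesis idea, the sketch below fits an affine transform from frontal-view model parameters to non-frontal-view model parameters on prior subjects, then applies it to a new client for whom only a frontal model exists. This is a minimal NumPy illustration, not the paper's implementation: each face model is reduced to a single mean vector (real GMM parameters would be higher dimensional), and the data, dimensions, and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior data: for each of 30 prior subjects we have a frontal
# face model and the matching non-frontal model, each summarised here as a
# single D-dimensional mean vector (a stand-in for full GMM parameters).
D, n_subjects = 8, 30
frontal = rng.normal(size=(n_subjects, D))
true_A = rng.normal(size=(D, D))   # unknown frontal -> non-frontal map
true_b = rng.normal(size=D)
nonfrontal = frontal @ true_A + true_b + 0.01 * rng.normal(size=(n_subjects, D))

# LinReg-style synthesis: estimate an affine transform (A, b) by least
# squares on the prior pairs, i.e. nonfrontal ~= frontal @ A + b.
X = np.hstack([frontal, np.ones((n_subjects, 1))])   # append bias column
coef, *_ = np.linalg.lstsq(X, nonfrontal, rcond=None)
A_hat, b_hat = coef[:-1], coef[-1]

# Extend a new client's frontal-only model: synthesize a non-frontal
# model by applying the learned transform to the frontal parameters.
new_frontal = rng.normal(size=D)
synth_nonfrontal = new_frontal @ A_hat + b_hat
```

In the paper's setting the same principle applies to full face models, and the MLLR variants replace the plain least-squares fit with transforms estimated by maximising the likelihood of the adaptation data.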
