Dynamic facial expression recognition with a discrete choice model
A new generation of models has been proposed to handle complex human behaviors. These models account for ambiguity in the data and therefore extend the application field of discrete choice modeling. Facial expression recognition (FER) is highly relevant in this context. We develop a dynamic facial expression recognition (DFER) framework based on discrete choice models (DCM). DFER consists of modeling the choice of a person who must label a video sequence showing a facial expression. The originality lies in the analysis of videos with discrete choice models, as well as in the explicit modeling of causal effects between the facial features and the recognition of the expression. Five models are proposed. The first assumes that only the last frame of the video triggers the choice of the expression. The second model has two components: the first captures the perception of the facial expression within each frame of the sequence, while the second determines which frame triggers the choice. The third model extends the second and assumes that the choice of the expression results from the average of perceptions within a group of frames. The fourth and fifth models integrate the panel effect inherent to the estimation data and extend the first and second models, respectively. The models are estimated using videos from the Facial Expressions and Emotions Database (FEED). Labeling data for the videos was obtained through an internet survey available at http://transp-or2.epfl.ch/videosurvey/. The predictive capability of the models is assessed by cross-validation on the estimation data in order to check their validity.
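The choice structure described above can be sketched with a standard multinomial logit over expression labels. This is a minimal illustration, not the paper's exact specification: the function names, the linear-in-parameters utilities, and the way frame features enter the model are all assumptions made for the example. It contrasts two of the ideas in the abstract, a model where only the last frame triggers the choice and one where perceptions are averaged over a group of frames:

```python
import math

def logit_probabilities(features, betas):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j),
    with illustrative linear utilities V_i = sum_k beta_ik * x_k."""
    utilities = [sum(b_k * x_k for b_k, x_k in zip(b, features)) for b in betas]
    m = max(utilities)  # subtract the max for numerical stability
    expu = [math.exp(v - m) for v in utilities]
    total = sum(expu)
    return [e / total for e in expu]

def dfer_choice_probs(frame_features, betas, mode="last"):
    """Hypothetical sketch of two modeling ideas from the abstract:
    'last'    -- only the final frame triggers the choice (first model);
    'average' -- perceptions are averaged over the frames (third model)."""
    if mode == "last":
        x = frame_features[-1]
    else:
        # average the feature vectors across all frames of the sequence
        n = len(frame_features)
        x = [sum(f[k] for f in frame_features) / n
             for k in range(len(frame_features[0]))]
    return logit_probabilities(x, betas)
```

For example, with two candidate expressions and two facial features per frame, `dfer_choice_probs(frames, betas, mode="last")` scores only the final frame, while `mode="average"` pools the whole sequence before applying the logit. The actual models in the paper additionally handle the frame-selection component and the panel effect, which this sketch omits.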
Record created on 2011-07-06, modified on 2016-12-16