Abstract

Facial expression recognition by human observers is affected by subjective components; indeed, there is no ground truth. We have developed Discrete Choice Models (DCMs) to capture the human perception of facial expressions. In a first step, the static case is treated, that is, modelling the perception of facial images. Image information is extracted using a computer vision tool called the Active Appearance Model (AAM). The DCM attributes are based on the Facial Action Coding System (FACS), Expression Descriptive Units (EDU), and the outputs of the AAM. Behavioral data have been collected through an internet survey in which respondents are asked to label facial images from the Cohn-Kanade database with expressions. Different models were estimated by maximum likelihood using the collected data. In a second step, the proposed static discrete choice framework is extended to the dynamic case, which considers facial videos instead of images. The model theory is described, and another internet survey is currently being conducted to obtain expression labels for videos. In this second survey, the videos come from the Cohn-Kanade database and the Facial Expressions and Emotions Database (FEED).
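
To make the estimation step concrete, the following is a minimal sketch assuming a multinomial logit specification: each image is described by a generic feature vector standing in for the AAM/FACS/EDU attributes mentioned above, each respondent's label is the chosen alternative, and the parameters are obtained by maximizing the log-likelihood. The feature dimensions, labels, and data below are illustrative placeholders, not the attributes or specification actually used in the study.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative multinomial logit: image n has a feature vector x_n
# (placeholder for AAM/FACS/EDU-derived attributes) and the respondent
# chooses one of K expression labels. beta is a (K, D) parameter matrix;
# the first row is fixed to zero for identification.

def log_likelihood(beta_flat, X, y, K):
    D = X.shape[1]
    beta = np.vstack([np.zeros(D), beta_flat.reshape(K - 1, D)])
    V = X @ beta.T                      # systematic utilities V_nk
    V -= V.max(axis=1, keepdims=True)   # numerical stability
    logP = V - np.log(np.exp(V).sum(axis=1, keepdims=True))
    return logP[np.arange(len(y)), y].sum()

def estimate(X, y, K):
    D = X.shape[1]
    res = minimize(lambda b: -log_likelihood(b, X, y, K),
                   x0=np.zeros((K - 1) * D), method="BFGS")
    return res.x.reshape(K - 1, D)

# Toy usage with synthetic data (placeholders for real features and labels)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # 5 hypothetical attributes per image
y = rng.integers(0, 3, size=200)     # 3 hypothetical expression labels
beta_hat = estimate(X, y, K=3)
print(beta_hat)
```

The same log-likelihood structure carries over to the dynamic case in principle, with attributes computed over video frames rather than a single image; how those dynamic attributes are defined is part of the extension described above.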
