Modelling human perception of static expressions by discrete choice models
When people interact to communicate, the face plays a central role: it gives us information about our interlocutors, such as who they are, what they feel, and what their intentions are. Studies have shown that facial expression can be even more significant than verbal communication, and we experience this daily (when we want to tell someone something important, we often prefer to have that person in front of us). This is why researchers have been interested in this topic for centuries, up to the present day. In particular, in recent years a new aspect of facial expression analysis has been tackled: how expression recognition can be performed automatically by a computer. Many different algorithms have been proposed, most of them using traditional classification techniques to identify the expression on a face. In the present work, we demonstrate the validity of a new approach, based on discrete choice analysis, for associating a face image with the expression it appears to represent. Moreover, we show that the process of recognizing an expression depends not only on the characteristics of the analysed face, but also on the characteristics of the people performing the analysis.
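As a minimal illustration of how a discrete choice model can assign probabilities to candidate expressions, the sketch below computes multinomial logit choice probabilities from systematic utilities. The expression labels and utility values are hypothetical, not the paper's actual specification; in a real model the utilities would be linear-in-parameters functions of facial measurements and observer characteristics.

```python
import math

def logit_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    # Subtract the max utility before exponentiating for numerical stability.
    m = max(utilities.values())
    exp_v = {expr: math.exp(v - m) for expr, v in utilities.items()}
    total = sum(exp_v.values())
    return {expr: e / total for expr, e in exp_v.items()}

# Hypothetical systematic utilities for one face image; in practice these
# would be estimated from features such as mouth curvature or eyebrow
# position, possibly interacted with observer attributes.
utilities = {
    "happiness": 1.8,
    "sadness": -0.5,
    "surprise": 0.6,
    "neutral": 0.2,
}
probs = logit_probabilities(utilities)
```

Under this setup, the alternative with the highest systematic utility receives the highest choice probability, and the probabilities sum to one by construction.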