Fusing Audio-Visual Nonverbal Cues to Detect Dominant People in Conversations

This paper addresses the multimodal nature of social dominance and presents multimodal fusion techniques that combine audio and visual nonverbal cues for dominance estimation in small-group conversations. We combine the two modalities both at the feature extraction level and at the classifier level, the latter via score-level and rank-level fusion. Classification is performed by a simple rule-based estimator. We carry out experiments on a new 10-hour dataset derived from the popular AMI meeting corpus, and objectively evaluate the performance of each modality and each cue, both alone and in combination. Our results show that combining audio and visual cues is necessary to achieve the best performance.
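To make the two classifier-level schemes named in the abstract concrete, the following is a minimal sketch of score-level and rank-level fusion over per-participant dominance scores. The function names, the equal modality weighting, and the toy scores are illustrative assumptions, not the paper's actual features, weights, or results.

```python
# Hypothetical sketch of classifier-level fusion for dominance estimation.
# All names, weights, and scores below are illustrative assumptions.

def score_fusion(audio_scores, visual_scores, w_audio=0.5):
    """Score-level fusion: weighted sum of per-participant scores."""
    w_visual = 1.0 - w_audio
    return [w_audio * a + w_visual * v
            for a, v in zip(audio_scores, visual_scores)]

def rank_fusion(audio_scores, visual_scores):
    """Rank-level fusion: sum of per-modality ranks (higher score, higher rank)."""
    def ranks(scores):
        order = sorted(range(len(scores)), key=lambda i: scores[i])
        r = [0] * len(scores)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    return [ra + rv
            for ra, rv in zip(ranks(audio_scores), ranks(visual_scores))]

def most_dominant(fused_scores):
    """Simple rule-based estimate: participant with the highest fused score."""
    return max(range(len(fused_scores)), key=lambda i: fused_scores[i])

# Toy example: four meeting participants with normalized per-modality cues.
audio = [0.9, 0.2, 0.5, 0.4]   # e.g. a speaking-activity cue
visual = [0.6, 0.3, 0.8, 0.1]  # e.g. a visual-activity cue

print(most_dominant(score_fusion(audio, visual)))  # -> 0
print(most_dominant(rank_fusion(audio, visual)))   # -> 0
```

Score-level fusion preserves the magnitude of each modality's confidence, while rank-level fusion discards magnitudes and is therefore more robust to differently scaled or miscalibrated scores.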

Presented at:
20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 2010

