Automatic Speaker Role Labeling in AMI Meetings: Recognition of Formal and Social Roles

This work investigates the automatic recognition of speaker roles in meeting conversations from the AMI corpus. Two types of roles are considered: formal roles, which are fixed for the duration of a meeting and recognized at the recording level, and social roles, which relate to the way participants interact with one another and are recognized at the speaker-turn level. Various structural, lexical, and prosodic features, as well as Dialog Act tags, are exhaustively investigated and combined for this purpose. Results show an accuracy of 74% in recognizing the speakers' formal roles and an accuracy of 66% (percentage of time) in correctly labeling the social roles. Feature analysis reveals that lexical features yield the best performance in formal/functional role recognition, while prosodic features yield the best performance in social role recognition. Furthermore, the results show that recognition of social roles that are rare in the corpus can be improved by combining lexical and Dialog Act information over short time windows.

Presented at:
Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, Kyoto, Japan, 2012

 Record created 2013-12-19, last modified 2018-03-17
