Automatic Analysis of Multimodal Group Actions in Meetings

This paper investigates the recognition of group actions in meetings. A statistical framework is proposed in which group actions result from the interactions of the individual participants. The group actions are modelled using different HMM-based approaches, where the observations are provided by a set of audio-visual features monitoring the actions of individuals. Experiments demonstrate the importance of taking interactions into account in modelling the group actions. It is also shown that the visual modality contains useful information, even for predominantly audio-based events, motivating a multimodal approach to meeting analysis.
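As an illustration of the kind of HMM-based recognition the abstract describes, the sketch below trains one Gaussian HMM per group-action class on sequences of concatenated audio-visual feature vectors and assigns an unseen meeting segment to the class whose model scores it highest. The class names, feature dimensionality, number of states, and the use of the hmmlearn library are illustrative assumptions, not the paper's actual feature set or model variants.

```python
# Minimal sketch (not the paper's implementation): one Gaussian HMM per
# group-action class over audio-visual feature sequences, with recognition
# by maximum log-likelihood. Class names and dimensions are placeholders.
import numpy as np
from hmmlearn.hmm import GaussianHMM

ACTION_CLASSES = ["discussion", "monologue", "presentation", "note-taking"]

def train_action_models(train_data, n_states=3):
    """train_data: dict mapping class name -> list of (T_i, D) feature arrays."""
    models = {}
    for action, sequences in train_data.items():
        X = np.concatenate(sequences)            # stack frames from all sequences
        lengths = [len(seq) for seq in sequences]  # per-sequence frame counts
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        hmm.fit(X, lengths)
        models[action] = hmm
    return models

def recognise(models, segment):
    """segment: (T, D) array of audio-visual features for one meeting segment."""
    scores = {action: hmm.score(segment) for action, hmm in models.items()}
    return max(scores, key=scores.get)
```

In this setup, modelling interactions amounts to feeding each HMM joint features from all participants rather than classifying each participant independently, which is the kind of design choice the experiments in the paper compare.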


Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence (to appear)
Year:
2004
Note:
To appear.
