Title: Multimodal Integration for Meeting Group Action Segmentation and Recognition
Authors: Al-Hames, Marc; Dielmann, Alfred; Gatica-Perez, Daniel; Reiter, Stephan; Renals, Steve; Zhang, Dong
Publication year: 2005
Record date: 2006-03-10
DOI: 10.1007/11677482_5
URL: https://infoscience.epfl.ch/handle/20.500.14299/228732
Document type: conference paper (text::conference output::conference proceedings::conference paper)
Keywords: vision; zhang

Abstract: We address the problem of segmentation and recognition of sequences of multimodal human interactions in meetings. These interactions can be seen as a rough structure of a meeting, and can be used either as input for a meeting browser or as a first step towards a higher semantic analysis of the meeting. A common lexicon of multimodal group meeting actions, a shared meeting data set, and a common evaluation procedure enable us to compare the different approaches. We compare three different multimodal feature sets and four modelling infrastructures: a higher semantic feature approach, multi-layer HMMs, a multi-stream DBN, as well as a multi-stream mixed-state DBN for disturbed data.