Just-in-Time Multimodal Association and Fusion from Home Entertainment

In this paper, we describe a real-time multimodal analysis system with just-in-time multimodal association and fusion for a living room environment, where multiple people may enter, interact, and leave the observable scene without constraints. The system comprises detection and tracking of up to four faces, detection and localisation of verbal and paralinguistic audio events, and their association and fusion. It is designed for open, unconstrained environments such as next-generation video conferencing systems that automatically “orchestrate” the transmitted video streams to improve the overall experience of interaction between spatially separated families and friends. Performance levels achieved to date on a hand-labelled dataset show sufficient reliability while fulfilling real-time processing requirements.
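The association step mentioned above can be illustrated with a minimal sketch: matching each localised audio event to the tracked face nearest in azimuth, and leaving the event unassociated if no face is close enough. All names and the angular threshold below are illustrative assumptions, not the method actually used in the paper.

```python
# Hypothetical sketch of audio-visual association: assign a localised audio
# event to the tracked face nearest in azimuth. The 15-degree threshold and
# all identifiers are illustrative assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class FaceTrack:
    track_id: int
    azimuth_deg: float  # horizontal angle of the face centre from the sensor

def associate_audio_event(event_azimuth_deg, face_tracks, max_sep_deg=15.0):
    """Return the id of the face track closest in azimuth to the audio event,
    or None if no face lies within max_sep_deg (unassociated event)."""
    best_id = None
    best_sep = max_sep_deg
    for track in face_tracks:
        sep = abs(track.azimuth_deg - event_azimuth_deg)
        if sep <= best_sep:
            best_id, best_sep = track.track_id, sep
    return best_id

faces = [FaceTrack(0, -30.0), FaceTrack(1, 5.0), FaceTrack(2, 40.0)]
print(associate_audio_event(7.5, faces))   # face 1 is nearest and in range
print(associate_audio_event(90.0, faces))  # no face within the threshold
```

A later fusion stage could then combine such associations with per-track confidences; the abstract does not detail that step, so it is omitted here.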

Presented at:
Proceedings IEEE International Conference on Multimedia & Expo, Barcelona, Spain

 Record created 2011-05-19, last modified 2018-01-28
