Towards Audio-Visual On-line Diarization Of Participants In Group Meetings

We propose a fully automated, unsupervised, and non-intrusive method of identifying the current speaker audio-visually in a group conversation. This is achieved without specialized hardware, user interaction, or prior assignment of microphones to participants. Speakers are identified acoustically using a novel on-line speaker diarization approach. The output is then used to find the corresponding person in a four-camera video stream by approximating individual activity with computationally efficient features. We present results showing the robustness of the association on over 4.5 hours of non-scripted audio-visual meeting data.
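The audio-visual association described above can be illustrated with a minimal sketch. This is not the paper's actual method; it merely shows the general idea of matching a diarization output (a binary speech-activity signal per time frame) against a cheap per-participant visual activity feature (here, a hypothetical motion-energy score per frame) by picking the participant whose motion correlates best with the speech:

```python
import numpy as np

def associate_speaker(speech_activity, motion_energy):
    """Pick the participant whose visual activity best matches speech.

    speech_activity: shape (T,) binary array, 1 where the diarized
        speaker is active in that frame (illustrative input).
    motion_energy: shape (P, T) array of per-participant motion
        features per frame (a stand-in for the paper's features).
    Returns the index of the participant with the highest
    normalized correlation against the speech-activity signal.
    """
    speech = speech_activity - speech_activity.mean()
    scores = []
    for m in motion_energy:
        m = m - m.mean()
        denom = np.linalg.norm(speech) * np.linalg.norm(m)
        # Constant motion carries no information; score it zero.
        scores.append(float(speech @ m / denom) if denom else 0.0)
    return int(np.argmax(scores))

# Toy example: participant 1 moves while speech is active.
speech = np.array([1, 1, 0, 0, 1, 1, 0, 0], dtype=float)
motion = np.array([
    [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],   # participant 0: still
    [0.9, 0.8, 0.1, 0.0, 0.7, 0.9, 0.1, 0.2],   # participant 1: active
    [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],   # participant 2: constant
])
print(associate_speaker(speech, motion))  # → 1
```

In a real system the motion features would be computed per camera view and the association accumulated over time rather than decided frame-block by frame-block.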

Presented at:
European Conference on Computer Vision Workshop on Multi-camera and Multi-modal Sensor Fusion

Record created 2010-02-11, last modified 2018-01-28
