Blind Audiovisual Source Separation Using Sparse Redundant Representations

In this work, we present a method that jointly separates active audio and visual structures in a given mixture. This new concept, Blind Audiovisual Source Separation (BAVSS), is achieved by exploiting the coherence between the signal recorded by a video camera and that of a single microphone. An efficient representation of the audio and video sequences makes it possible to build robust audiovisual relationships between temporally correlated structures of the two modalities or, equivalently, between two parts of the same audiovisual event. First, video sources are localized and separated in the image sequence by exploiting the temporal co-occurrence of audiovisual events and applying a spatial clustering algorithm, without requiring any prior assumption about the number of sources in the mixture. Second, the same audiovisual relationships, together with a time-frequency probabilistic analysis, allow the separation of the audio sources in the soundtrack and, consequently, the complete audiovisual separation.
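The following is a minimal illustrative sketch, not the authors' sparse-redundant-representation method: it only demonstrates the general idea of correlating per-region video activity with the activity of a single audio track and then clustering coherent regions spatially, without fixing the number of sources in advance. All signals are synthetic, and the correlation threshold and DBSCAN parameters are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# --- Synthetic stand-ins for a real recording (illustration only) ---
T = 200                      # number of video frames
H, W = 36, 64                # video frame size, in blocks
rng = np.random.default_rng(0)

# Hypothetical per-frame audio activity (e.g. short-time energy of the mono track)
audio_activity = np.abs(rng.normal(size=T))
audio_activity[50:60] += 5.0                 # an "audiovisual event" burst

# Hypothetical per-block video activity (e.g. magnitude of frame differences)
video_activity = np.abs(rng.normal(size=(T, H, W)))
video_activity[50:60, 10:14, 20:24] += 5.0   # motion co-occurring with the burst

# --- Audiovisual coherence: correlate each block's activity with the audio ---
a = (audio_activity - audio_activity.mean()) / audio_activity.std()
v = video_activity.reshape(T, -1)
v = (v - v.mean(axis=0)) / (v.std(axis=0) + 1e-8)
coherence = (v * a[:, None]).mean(axis=0).reshape(H, W)   # correlation per block

# --- Localize video sources: cluster coherent blocks (no fixed source count) ---
ys, xs = np.nonzero(coherence > 0.3)          # threshold chosen arbitrarily here
points = np.column_stack([ys, xs])
labels = DBSCAN(eps=2.0, min_samples=4).fit_predict(points) if len(points) else np.array([])

n_sources = len(set(labels)) - (1 if -1 in labels else 0)
print(f"Detected {n_sources} candidate video source region(s)")
for k in range(n_sources):
    cy, cx = points[labels == k].mean(axis=0)
    print(f"  source {k}: centroid at block ({cy:.1f}, {cx:.1f})")
```

In the paper's second stage, relationships of this kind would then be used to assign time-frequency regions of the soundtrack to the localized sources; that probabilistic analysis is not reproduced in this sketch.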


Year: 2007
Note: ITS



