Blind Audiovisual Separation based on Redundant Representations

In this work we present a method to perform complete audiovisual source separation without any prior information. The method is based on the assumption that sounds are caused by moving structures. Thus, an efficient representation of audio and video sequences makes it possible to establish relationships between synchronous structures in both modalities. A robust clustering algorithm groups video structures exhibiting strong correlations with the audio, so that sources are counted and located in the image. Using this information and exploiting audio-video correlation, the activity of the audio sources is determined. Next, \emph{spectral} GMMs are learnt in time slots where only one source is active, so that the sources can be separated in the case of an audio mixture. Audio source separation performance is rigorously evaluated, clearly showing that the proposed algorithm performs efficiently and robustly.
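The core idea described above, scoring video structures by their temporal correlation with the audio, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes an audio energy envelope and per-structure video activation signals, both sampled at the video frame rate, and ranks the video structures by their Pearson correlation with the audio.

```python
import numpy as np

def av_correlation(audio_env, video_feats):
    """Correlate the audio energy envelope with each video structure's
    temporal activation signal (both sampled at the same rate).

    audio_env:   1-D array, audio energy per video frame.
    video_feats: list of 1-D arrays, one activation signal per
                 candidate video structure (hypothetical features).
    Returns an array of Pearson correlation scores, one per structure.
    """
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-12)
    scores = []
    for v in video_feats:
        v = (v - v.mean()) / (v.std() + 1e-12)
        scores.append(float(np.dot(a, v) / len(a)))
    return np.array(scores)

# Synthetic example: one structure moves in sync with the audio,
# the other oscillates independently.
t = np.linspace(0.0, 1.0, 200)
audio = np.abs(np.sin(8 * np.pi * t))          # audio energy envelope
synced = audio + 0.05                           # synchronous structure
unrelated = np.cos(30 * np.pi * t)              # unrelated motion
scores = av_correlation(audio, [synced, unrelated])
```

Structures whose scores exceed a threshold would then be clustered and assigned to sound sources; the actual paper uses a dedicated robust clustering stage for this step.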


Published in:
Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing
Presented at:
IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, Nevada, U.S.A., March 30 - April 4, 2008
Year:
2008
Note:
ITS
 Record created 2007-10-08, last modified 2018-03-17
