Unsupervised Extraction of Audio-Visual Objects

We propose a novel method to automatically detect and extract the video modality of the sound sources that are present in a scene. For this purpose, we first assess the synchrony between the moving objects captured with a video camera and the sounds recorded by a microphone. Next, video regions presenting a high coherence with the soundtrack are automatically labelled as being part of the source. This represents the starting point for an innovative video segmentation approach, whose objective is to extract the complete audio-visual object. The proposed graph-cut segmentation procedure includes an audio-visual term that links together pixels in regions with high audio-video coherence. Our approach is demonstrated on challenging sequences presenting non-stationary sound sources and distracting moving objects.
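The core idea — scoring how well each moving region's activity tracks the audio before any segmentation — can be illustrated with a toy sketch. The snippet below uses a simple Pearson correlation between per-region motion energy and the audio envelope as the coherence measure; this is an illustrative stand-in, not the paper's actual audio-visual coherence criterion, and the array shapes and function name are assumptions made for the example.

```python
import numpy as np

def synchrony_scores(motion, audio, eps=1e-8):
    """Correlate each region's motion-energy track with the audio envelope.

    Regions scoring high are candidate sound sources (illustrative proxy
    for audio-video coherence; not the method used in the paper).

    motion : (R, T) array -- motion energy per region per video frame
    audio  : (T,)  array -- audio envelope resampled to the frame rate
    Returns an (R,) array of Pearson correlations in [-1, 1].
    """
    m = motion - motion.mean(axis=1, keepdims=True)
    a = audio - audio.mean()
    num = m @ a
    den = np.sqrt((m ** 2).sum(axis=1) * (a ** 2).sum()) + eps
    return num / den

# Toy scene: region 0 pulses in step with the audio (the source),
# region 1 is a distracting mover with unrelated motion.
rng = np.random.default_rng(0)
t = np.arange(200)
audio = np.abs(np.sin(0.3 * t))                  # synthetic audio envelope
motion = np.vstack([
    audio + 0.05 * rng.standard_normal(200),     # synchronized mover
    rng.random(200),                             # distracting mover
])
scores = synchrony_scores(motion, audio)
print(scores[0] > scores[1])  # → True: the synchronized region wins
```

In the paper these coherence scores are then used as a label seed and as an extra term in a graph-cut energy, pulling together pixels whose regions exhibit high audio-video coherence.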


Published in:
Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing
Presented at:
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, May 22-27, 2011
Year:
2011
Publisher:
IEEE Service Center, 445 Hoes Lane, PO Box 1331, Piscataway, NJ 08855-1331, USA
 Record created 2010-10-21, last modified 2018-03-17
