Unsupervised Extraction of Audio-Visual Objects

We propose a novel method to automatically detect and extract the video modality of the sound sources that are present in a scene. For this purpose, we first assess the synchrony between the moving objects captured with a video camera and the sounds recorded by a microphone. Next, video regions presenting a high coherence with the soundtrack are automatically labelled as being part of the source. This represents the starting point for an innovative video segmentation approach, whose objective is to extract the complete audio-visual object. The proposed graph-cut segmentation procedure includes an audio-visual term that links together pixels in regions with high audio-video coherence. Our approach is demonstrated on challenging sequences presenting non-stationary sound sources and distracting moving objects.
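The first step described above, scoring how coherent each video region is with the soundtrack, can be illustrated with a minimal sketch. This is not the paper's actual estimator; it simply correlates each pixel's motion-magnitude trajectory with the audio energy envelope over time, under the assumption that both signals are sampled per frame. The function name `av_coherence` and the toy data are hypothetical.

```python
import numpy as np

def av_coherence(motion, audio_energy):
    """Correlate each pixel's motion trajectory with the audio envelope.

    motion: (T, H, W) per-pixel motion magnitudes over T frames.
    audio_energy: (T,) audio energy per frame.
    Returns an (H, W) map in [-1, 1]; values near 1 suggest that the
    pixel's motion is synchronized with the soundtrack.
    """
    m = motion - motion.mean(axis=0)         # center each pixel's time series
    a = audio_energy - audio_energy.mean()   # center the audio envelope
    num = (m * a[:, None, None]).sum(axis=0)
    den = np.sqrt((m ** 2).sum(axis=0) * (a ** 2).sum()) + 1e-12
    return num / den

# Toy example: a small block whose motion follows the audio, noise elsewhere.
rng = np.random.default_rng(0)
T, H, W = 64, 8, 8
audio = np.abs(np.sin(np.linspace(0, 6 * np.pi, T)))
motion = rng.random((T, H, W)) * 0.1
motion[:, 2:5, 2:5] += audio[:, None, None]  # synchronized "source" region
coh = av_coherence(motion, audio)
print(coh[3, 3] > 0.9, coh[0, 0] < 0.5)      # source pixel vs. background pixel
```

In the paper's pipeline, a map like `coh` would then seed the graph-cut stage: high-coherence pixels act as source labels, and the audio-visual term encourages pixels in coherent regions to be cut together.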


Published in:
Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing
Presented at:
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, May 22-27, 2011
Year:
2011
Publisher:
IEEE Service Center, 445 Hoes Lane, PO Box 1331, Piscataway, NJ 08855-1331, USA
Record created 2010-10-21, last modified 2018-09-13
