Learning Multimodal Temporal Representation for Dubbing Detection in Broadcast Media

Person discovery in the absence of prior identity knowledge requires accurate association of visual and auditory cues. In broadcast data, multimodal analysis faces additional challenges due to narrated voices over muted scenes or dubbing in different languages. To address these challenges, we define and analyze the problem of dubbing detection in broadcast data, which has not been explored before. We propose a method to represent the temporal relationship between the auditory and visual streams. The method combines canonical correlation analysis (CCA), which learns a joint multimodal space, with long short-term memory (LSTM) networks, which model cross-modal temporal dependencies. Our contributions also include a newly acquired dataset of face-speech segments from TV data, which we have made publicly available. The proposed method achieves promising performance on this real-world dataset compared to several baselines.
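The two-stage pipeline described above (a CCA projection into a joint audio-visual space, followed by an LSTM over the projected sequences) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the feature dimensions, the frame-level concatenation of the two projected streams, and the binary dubbed/original output head are choices made only for the example.

```python
# Minimal sketch of a CCA + LSTM dubbing-detection pipeline (illustrative only).
# Visual and audio features per frame are projected into a shared space with CCA,
# then an LSTM scores the temporal correspondence of each face-speech segment.

import numpy as np
import torch
import torch.nn as nn
from sklearn.cross_decomposition import CCA

# --- Step 1: learn a joint multimodal space with CCA ------------------------
# X_vis: (n_frames, d_vis) visual features, X_aud: (n_frames, d_aud) audio features.
# Random data stands in for real per-frame descriptors (e.g. facial features, MFCCs).
rng = np.random.default_rng(0)
X_vis = rng.standard_normal((500, 64)).astype(np.float32)
X_aud = rng.standard_normal((500, 40)).astype(np.float32)

cca = CCA(n_components=16)
cca.fit(X_vis, X_aud)
Z_vis, Z_aud = cca.transform(X_vis, X_aud)  # projections in the shared space

# --- Step 2: model cross-modal temporal dependencies with an LSTM -----------
class DubbingClassifier(nn.Module):
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # 1 = dubbed, 0 = original (assumed labelling)

    def forward(self, seq):                       # seq: (batch, time, in_dim)
        _, (h_n, _) = self.lstm(seq)
        return self.head(h_n[-1]).squeeze(-1)     # one logit per segment

# Concatenate the projected streams frame by frame and split into fixed-length segments.
joint = np.concatenate([Z_vis, Z_aud], axis=1).astype(np.float32)   # (500, 32)
segments = torch.from_numpy(joint.reshape(10, 50, 32))              # 10 segments of 50 frames

model = DubbingClassifier()
logits = model(segments)
print(torch.sigmoid(logits))  # probability that each segment is dubbed
```

In practice the LSTM would be trained with a binary cross-entropy loss on labelled face-speech segments; the sketch only shows the forward pass through the two stages named in the abstract.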


Published in:
MM '16: Proceedings of the 2016 ACM Multimedia Conference, 202-206
Presented at:
ACM Multimedia, Amsterdam
Year:
2016
Publisher:
New York, ACM
ISBN:
978-1-4503-3603-1




