Abstract

In this paper, we describe the automatic audio-based temporal alignment of audio-visual data recorded by different cameras, camcorders, or mobile phones during social events such as high school concerts. All recordings are temporally aligned to a common master track captured by a reference camera. The core of the algorithm is based on perceptual time-frequency analysis with a precision of 10 ms. On a real-life dataset, the method achieves correct alignment in 99% of cases.
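The paper's alignment relies on perceptual time-frequency analysis; as a simplified illustration of the general idea of audio-based alignment, the sketch below cross-correlates per-frame feature envelopes (e.g. frame energies at a 10 ms hop, matching the stated precision) to find the offset of a clip within the master track. The function name, frame rate, and envelope features are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def align_offset(ref_env, clip_env, frames_per_sec=100):
    """Estimate the time offset (seconds) of a clip inside a reference
    track by cross-correlating their per-frame feature envelopes.
    A 100 frames/s rate corresponds to a 10 ms alignment precision.
    (Hypothetical helper; the paper uses perceptual time-frequency analysis.)"""
    # Normalize both envelopes to zero mean / unit variance so the
    # correlation peak reflects envelope shape, not recording loudness,
    # which differs across cameras and phones.
    ref = (ref_env - ref_env.mean()) / (ref_env.std() + 1e-9)
    clip = (clip_env - clip_env.mean()) / (clip_env.std() + 1e-9)
    corr = np.correlate(ref, clip, mode="valid")  # one score per lag
    lag = int(np.argmax(corr))                    # best-matching frame lag
    return lag / frames_per_sec                   # frames -> seconds

# Toy check: a clip cut 2.5 s into a synthetic reference envelope.
rng = np.random.default_rng(0)
ref_env = rng.standard_normal(1000)   # 10 s of frames at 100 frames/s
clip_env = ref_env[250:500]           # starts at frame 250 -> 2.5 s
print(align_offset(ref_env, clip_env))  # -> 2.5
```

In practice the envelopes would come from a perceptually motivated time-frequency representation rather than raw energies, and a confidence threshold on the correlation peak would guard against spurious matches.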
