Low-Dimensional Motion Features for Audio-Visual Speech Recognition

Audio-visual speech recognition promises to improve the performance of speech recognizers, especially when the audio is corrupted, by adding information from the visual modality, specifically from video of the speaker. However, the number of visual features added is typically larger than the number of audio features, for only a small gain in accuracy. We present a method that achieves gains comparable to the commonly used DCT features while employing far fewer visual features, based on the motion of the speaker's mouth. Motion vector differences are used to compensate for errors in the mouth tracking, yielding good performance with as few as 3 features. The advantage of low-dimensional features is that good accuracy can be obtained with relatively little training data, while also speeding up both training and testing.
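The abstract does not specify the feature pipeline, but the idea behind differencing motion vectors can be illustrated in a minimal NumPy sketch: a tracking error that offsets the per-frame motion estimate by a roughly constant amount cancels out when consecutive motion vectors are subtracted. The function name and the input layout (one motion vector per frame, e.g. mean optical flow over the tracked mouth region) are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def motion_feature_differences(mouth_motion):
    """Frame-to-frame differences of mouth motion vectors.

    mouth_motion: (T, D) array with one D-dimensional motion vector per
    frame (hypothetical input, e.g. mean optical flow over the mouth ROI).
    A constant bias from imperfect tracking shifts every frame's estimate
    equally, so it vanishes in the temporal differences.
    """
    mouth_motion = np.asarray(mouth_motion, dtype=float)
    return np.diff(mouth_motion, axis=0)  # shape (T-1, D)

# Toy check: a constant tracking offset has no effect on the differences.
rng = np.random.default_rng(0)
clean = np.cumsum(rng.normal(size=(5, 3)), axis=0)
biased = clean + 0.7  # simulated constant tracking error
assert np.allclose(motion_feature_differences(clean),
                   motion_feature_differences(biased))
```

Because differencing also removes any static component of the motion estimate, the resulting features emphasize the dynamics of the mouth, which is consistent with the abstract's focus on motion rather than appearance.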

Published in:
15th European Signal Processing Conference (EUSIPCO)
Presented at:
15th European Signal Processing Conference (EUSIPCO), Poznan, Poland, September 3-7, 2007

