Audio-Visual Speech Recognition with a Hybrid SVM-HMM System
Traditional speech recognition systems use Gaussian mixture models to obtain the likelihoods of individual phonemes, which are then used as state emission probabilities in hidden Markov models representing the words. In hybrid systems, the Gaussian mixtures are replaced by more discriminative classifiers, leading to improved performance. The classifiers used in such systems are typically neural networks. Support vector machines have also been used in single-modality audio or visual speech recognition, but never in a multimodal audio-visual system. We propose such a hybrid SVM-HMM speech recognizer, and we show that the multimodal approach outperforms either modality used alone.
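As a rough illustration of the hybrid idea described above, the sketch below (toy values, not from the paper) shows how per-frame posteriors from a discriminative classifier, such as an SVM with probabilistic outputs, can be divided by the state priors to obtain scaled likelihoods that replace the GMM emission probabilities in HMM Viterbi decoding. The function name, the toy posteriors, and the transition probabilities are all hypothetical.

```python
import numpy as np

def viterbi(log_emis, log_trans, log_init):
    """Viterbi decoding over scaled log-likelihoods (T frames x S states)."""
    T, S = log_emis.shape
    delta = log_init + log_emis[0]          # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # S x S: previous state -> next state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy per-frame posteriors P(state | frame) over 2 states, as a
# probabilistic SVM might output for 4 audio-visual feature frames.
post = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.1, 0.9]])
priors = np.array([0.5, 0.5])
log_emis = np.log(post / priors)              # posteriors -> scaled likelihoods
log_trans = np.log([[0.7, 0.3], [0.3, 0.7]])  # toy 2-state HMM transitions
log_init = np.log([0.9, 0.1])
print(viterbi(log_emis, log_trans, log_init))  # -> [0, 0, 1, 1]
```

Dividing posteriors by priors (Bayes' rule up to a constant) is the standard way hybrid systems plug a discriminative classifier into an HMM decoder in place of generative emission densities.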