Dynamic modality weighting for multi-stream HMMs in Audio-Visual Speech Recognition

Merging decisions from different modalities is a crucial problem in Audio-Visual Speech Recognition. State-synchronous multi-stream HMMs address this problem, with the important advantage of incorporating stream reliability into their fusion scheme. This paper focuses on stream weight adaptation based on modality confidence estimators. We assume different and time-varying environmental noise, as encountered in realistic applications, for which adaptive methods are best suited. Stream reliability is assessed directly from classifier outputs, since these are specific to neither the noise type nor its level. The influence of constraining the weights to sum to one is also discussed.
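A minimal sketch of the fusion scheme described above: in a state-synchronous multi-stream HMM, per-stream log-likelihoods are combined as a weighted sum, with the weights constrained to sum to one. The confidence-based weight normalization shown here is an illustrative assumption for demonstration, not the paper's actual reliability estimator.

```python
def fused_log_likelihood(log_p_audio, log_p_video, lam_audio):
    """Weighted combination of per-stream HMM state log-likelihoods.

    With the sum-to-one constraint, a single weight lam_audio in [0, 1]
    determines both stream weights: lam_video = 1 - lam_audio.
    """
    lam_video = 1.0 - lam_audio
    return lam_audio * log_p_audio + lam_video * log_p_video


def confidence_weight(audio_conf, video_conf):
    """Hypothetical reliability-based weighting: normalize two
    classifier confidence scores so the resulting weights sum to one."""
    return audio_conf / (audio_conf + video_conf)


# Example: when audio is noisy, its confidence drops and the fusion
# leans on the visual stream.
lam = confidence_weight(0.2, 0.6)          # audio weight = 0.25
ll = fused_log_likelihood(-10.0, -4.0, lam)
```

Dropping the sum-to-one constraint would mean estimating both weights independently, which is the design choice whose influence the paper discusses.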

Published in:
Proceedings of the 10th International Conference on Multimodal Interfaces, 237-240
Presented at:
10th International Conference on Multimodal Interfaces, Chania, Greece, October 20-22, 2008
New York, NY, USA, ACM

 Record created 2008-06-09, last modified 2018-01-28
