Development of Bilingual ASR System for MediaParl Corpus

The development of an Automatic Speech Recognition (ASR) system for the bilingual MediaParl corpus is challenging for several reasons: (1) reverberant recordings, (2) accented speech, and (3) no prior information about the language. In that context, we employ frequency-domain linear prediction (FDLP) features to reduce the effect of reverberation, exploit bilingual deep neural networks applied in Tandem and hybrid acoustic modeling approaches to significantly improve ASR for accented speech, and develop a fully bilingual ASR system using entropy-based decoding-graph selection. Our experiments indicate that the proposed bilingual ASR system performs similarly to a language-specific ASR system if approximately five seconds of speech are available.
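
The abstract does not spell out how the entropy-based decoding-graph selection operates. The sketch below illustrates one plausible reading, assuming per-frame phone posteriors from the bilingual DNN: the entropy of the posterior mass on each language's phones is averaged over the available speech (roughly five seconds), and the decoding graph of the language with the lowest average entropy is selected. The function name, the language labels, and the posterior layout are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def select_decoding_graph(posteriors, lang_phone_indices, n_frames=None):
    """Choose a language-specific decoding graph from bilingual DNN posteriors.

    posteriors         : (T, P) array of per-frame phone posteriors from the
                         bilingual DNN (assumed to sum to 1 in each row).
    lang_phone_indices : dict mapping a language label to the indices of its
                         phones in the bilingual output layer (hypothetical layout).
    n_frames           : optionally restrict the decision to the first n_frames
                         frames, e.g. roughly five seconds of speech.
    """
    if n_frames is not None:
        posteriors = posteriors[:n_frames]

    avg_entropy = {}
    for lang, idx in lang_phone_indices.items():
        # Posterior mass on this language's phones, renormalised per frame.
        mass = posteriors[:, idx]
        mass = mass / np.maximum(mass.sum(axis=1, keepdims=True), 1e-10)
        # Lower average entropy means the network is less uncertain that the
        # utterance matches this language's phone set.
        avg_entropy[lang] = float(
            np.mean(-np.sum(mass * np.log(mass + 1e-10), axis=1))
        )

    # Decode with the graph of the language whose posteriors are least uncertain.
    return min(avg_entropy, key=avg_entropy.get), avg_entropy

# Hypothetical usage: at 100 frames per second, ~5 s of speech is ~500 frames.
# lang, scores = select_decoding_graph(post, {"fr": fr_idx, "de": de_idx}, n_frames=500)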


Presented at: Proceedings of the 15th Annual Conference of the International Speech Communication Association (Interspeech 2014), Singapore
Year: 2014
Publisher: ISCA



