Exploiting un-transcribed foreign data for speech recognition in well-resourced languages

Manual transcription of audio databases for automatic speech recognition (ASR) training is a costly and time-consuming process. State-of-the-art hybrid ASR systems based on deep neural networks (DNNs) can exploit un-transcribed foreign data during unsupervised DNN pre-training or semi-supervised DNN training. We investigate the relevance of foreign data characteristics, in particular domain and language. Using three different datasets from the MediaParl and Ester databases, our experiments suggest that domain and language are equally important: foreign data recorded under matched conditions (same language and domain) yields the largest improvement. The resulting ASR system achieves about 5% relative improvement over the baseline system trained only on transcribed data. Our studies also reveal that the amount of foreign data used for semi-supervised training can be significantly reduced without degrading ASR performance if confidence-measure-based data selection is employed.
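The confidence-measure-based data selection mentioned above can be illustrated with a minimal sketch (not the paper's implementation): un-transcribed utterances are decoded by a seed ASR system, and only hypotheses whose confidence clears a threshold are retained as additional semi-supervised training data. The function name, data layout, and the 0.8 threshold are illustrative assumptions.

```python
def select_confident_utterances(decoded, threshold=0.8):
    """Keep (utterance_id, transcript) pairs whose average per-word
    confidence meets the threshold; low-confidence hypotheses are
    discarded to avoid training on noisy automatic labels.

    `decoded` is a list of (utterance_id, [(word, confidence), ...]).
    """
    selected = []
    for utt_id, hyp_words in decoded:
        if not hyp_words:
            continue  # skip empty hypotheses
        # Average the per-word posterior confidences for the utterance.
        avg_conf = sum(c for _, c in hyp_words) / len(hyp_words)
        if avg_conf >= threshold:
            transcript = " ".join(w for w, _ in hyp_words)
            selected.append((utt_id, transcript))
    return selected

# Example: two automatically decoded utterances with per-word confidences.
decoded = [
    ("utt1", [("bonjour", 0.95), ("monsieur", 0.90)]),  # high confidence: kept
    ("utt2", [("le", 0.60), ("conseil", 0.55)]),        # low confidence: dropped
]
print(select_confident_utterances(decoded))  # [('utt1', 'bonjour monsieur')]
```

In practice the selected pairs would be appended to the transcribed training set before retraining the DNN; raising the threshold trades data quantity for label quality.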

Presented at:
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy

 Record created 2014-04-19, last modified 2019-03-16
