Abstract

This report provides an overview of the work carried out to improve the Language Model (LM) used during decoding in an Automatic Speech Recognition (ASR) system. The goal of this work is to develop a robust language model that can be adapted to multiple domains (e.g., talks), yielding better ASR accuracy when applied to the target domain. By exploring and exploiting various datasets such as Common Crawl, Europarl, news corpora, and TEDLIUM, and by experimenting with different model-training techniques, we achieve the goal of adapting a general-purpose LM to a domain such as talks. This also significantly improves ASR performance compared to the existing generic LM.

Details