Abstract

A Language Model (LM) is a helpful component of a variety of Natural Language Processing (NLP) systems today. In speech recognition, machine translation, information retrieval, word sense disambiguation, and related tasks, the contribution of an LM is to provide features and estimates of the probability of word sequences, their grammaticality, and their semantic meaningfulness. What makes language modeling challenging for machine learning algorithms is the sheer number of possible word sequences: the curse of dimensionality is especially acute when modeling natural language. This survey summarizes and groups the literature that has addressed this problem, and we examine promising recent research on neural network techniques applied to language modeling that aim to overcome this curse and achieve better generalization over word sequences.
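To make the scale of the curse of dimensionality concrete, a back-of-the-envelope calculation (the vocabulary size and sequence length here are illustrative assumptions, not figures from the survey) shows how the space of possible word sequences grows exponentially with sequence length:

```python
# Illustrative calculation (assumed numbers, not from the survey):
# with a vocabulary of size |V|, there are |V|**n possible
# sequences of n words, so the sequence space explodes
# combinatorially with length.
vocab_size = 100_000   # assumed size of a large vocabulary
seq_len = 5            # a short 5-word window

num_sequences = vocab_size ** seq_len
print(f"{num_sequences:.2e} possible {seq_len}-word sequences")
# -> 1.00e+25, far more sequences than any corpus can cover
```

Since no training corpus can come close to covering 10^25 sequences, a model must generalize to word combinations it has never observed, which is the motivation for the neural approaches the survey examines.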
