Investigating Lexical Substitution Scoring for Subtitle Generation
This paper investigates an isolated setting of the lexical substitution task: replacing words with their synonyms. In particular, we examine this problem in the setting of subtitle generation and evaluate state-of-the-art scoring methods that predict the validity of a given substitution. The paper evaluates two context-independent models and two contextual models. The major findings suggest that distributional similarity provides a useful complementary estimate of the likelihood that two WordNet synonyms are indeed substitutable, while proper modeling of contextual constraints remains a challenging task for future research.
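As a rough illustration of the idea that distributional similarity can score the substitutability of synonym pairs, the sketch below compares toy co-occurrence vectors with cosine similarity. The vectors and word choices are invented for illustration and are not the paper's actual models or data.

```python
import math

# Toy distributional (co-occurrence count) vectors over a few context
# words -- purely illustrative, not derived from any real corpus.
VECTORS = {
    "film":    {"watch": 8, "director": 5, "camera": 4, "thin": 0},
    "movie":   {"watch": 9, "director": 4, "camera": 3, "thin": 0},
    "picture": {"watch": 2, "director": 1, "camera": 5, "thin": 3},
}

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def substitutability(target, synonym):
    """Score how plausible it is to substitute `synonym` for `target`,
    using distributional similarity as a complementary signal on top of
    a lexical resource such as WordNet."""
    return cosine(VECTORS[target], VECTORS[synonym])

# "movie" shares more of "film"'s contexts than "picture" does, so it
# receives the higher substitutability score.
print(substitutability("film", "movie"))
print(substitutability("film", "picture"))
```

In this toy setting, "movie" scores higher than "picture" as a substitute for "film", mirroring the intuition that distributional evidence can rank WordNet synonyms by how substitutable they actually are.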
To appear in Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL-2006).