Improving Contextual Quality Models for MT Evaluation Based on Evaluators' Feedback

The Framework for Machine Translation Evaluation (FEMTI), introduced by the ISLE Evaluation Working Group, provides guidelines for defining a quality model used to evaluate an MT system in relation to the system's purpose and context of use. In this paper, we report results from a recent experiment aimed at transferring knowledge from MT evaluation experts into the FEMTI guidelines, in particular to populate relations denoting the influence of a system's context of use on its evaluation. The results of this hands-on exercise, carried out as part of a tutorial, are publicly available at

Presented at:
6th International Conference on Language Resources and Evaluation, Marrakech, Morocco

