Conference paper

Improving Contextual Quality Models for MT Evaluation Based on Evaluators' Feedback

The Framework for the Evaluation of Machine Translation in ISLE (FEMTI), introduced by the ISLE Evaluation Working Group, provides guidelines for defining a quality model used to evaluate an MT system, in relation to the purpose and context of use of the system. In this paper, we report results from a recent experiment aimed at transferring knowledge from MT evaluation experts into the FEMTI guidelines, in particular to populate relations denoting the influence of a system's context of use on its evaluation. The results of this hands-on exercise, carried out as part of a tutorial, are publicly available at


    • LIDIAP-CONF-2008-013

    Record created on 2010-02-11, modified on 2017-05-10

