000192531 001__ 192531
000192531 005__ 20190316235801.0
000192531 037__ $$aREP_WORK
000192531 088__ $$aIdiap-RR-31-2012
000192531 245__ $$aTranslation Error Spotting from a User's Point of View
000192531 269__ $$a2012
000192531 260__ $$bIdiap$$c2012
000192531 336__ $$aReports
000192531 500__ $$aEPFL course project paper
000192531 520__ $$aEvaluating the errors made by Machine Translation (MT) systems still requires human effort, despite the existence of automated MT evaluation tools such as the BLEU metric. Moreover, even if tools existed that supported humans in this translation quality-checking task, for example by automatically marking some errors found in the MT system output, there is no guarantee that such support actually leads to a more accurate or faster human evaluation. This paper presents a user study on the task of finding MT errors under two conditions, non-annotated and automatically pre-annotated errors, which found statistically significant interaction effects in terms of the time needed to complete the task and the number of correctly found errors.
000192531 6531_ $$aError Analysis
000192531 6531_ $$aLinear Mixed-Effects Modeling
000192531 6531_ $$aMachine Translation
000192531 6531_ $$aUser Study
000192531 700__ $$0246044$$aMeyer, Thomas$$g207160
000192531 8564_ $$s840860$$uhttps://infoscience.epfl.ch/record/192531/files/Meyer_Idiap-RR-31-2012.pdf$$yn/a$$zn/a
000192531 909C0 $$0252189$$pLIDIAP$$xU10381
000192531 909CO $$ooai:infoscience.tind.io:192531$$pSTI$$preport$$qGLOBAL_SET
000192531 937__ $$aEPFL-REPORT-192531
000192531 970__ $$aMeyer_Idiap-RR-31-2012/LIDIAP
000192531 973__ $$aEPFL
000192531 980__ $$aREPORT