On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation

Evaluation of cross-lingual encoders is usually performed either via zero-shot cross-lingual transfer in supervised downstream tasks or via unsupervised cross-lingual textual similarity. In this paper, we concern ourselves with reference-free machine translation (MT) evaluation where we directly compare source texts to (sometimes low-quality) system translations, which represents a natural adversarial setup for multilingual encoders. Reference-free evaluation holds the promise of web-scale comparison of MT systems. We systematically investigate a range of metrics based on state-of-the-art cross-lingual semantic representations obtained with pretrained M-BERT and LASER. We find that they perform poorly as semantic encoders for reference-free MT evaluation and identify their two key limitations, namely, (a) a semantic mismatch between representations of mutual translations and, more prominently, (b) the inability to punish "translationese", i.e., low-quality literal translations. We propose two partial remedies: (1) post-hoc re-alignment of the vector spaces and (2) coupling of semantic-similarity based metrics with target-side language modeling. In segment-level MT evaluation, our best metric surpasses reference-based BLEU by 5.7 correlation points. We make our MT evaluation code available.
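One of the two remedies the abstract mentions, post-hoc re-alignment of the vector spaces, can be sketched as an orthogonal Procrustes fit between paired embeddings. The snippet below is a minimal illustration, not the paper's released code: it uses NumPy and toy random vectors in place of actual M-BERT/LASER sentence embeddings, and simulates the cross-lingual "semantic mismatch" as a rotation.

```python
# Hedged sketch: post-hoc re-alignment of two embedding spaces via
# orthogonal Procrustes. Toy data stands in for M-BERT/LASER embeddings.
import numpy as np

def procrustes_align(src, tgt):
    """Learn an orthogonal map W minimizing ||src @ W - tgt||_F."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

rng = np.random.default_rng(0)
dim = 8
# Simulate a systematic rotation between the two monolingual subspaces.
q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
src = rng.normal(size=(100, dim))   # source-sentence embeddings (toy)
tgt = src @ q                       # embeddings of their translations (toy)

w = procrustes_align(src, tgt)
print(np.allclose(src @ w, tgt))    # prints True: mismatch removed
```

In this idealized setting the learned map recovers the rotation exactly; with real encoder embeddings the fit is only approximate, which is why the paper reports it as a partial remedy.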


Published in:
58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), 1656-1671
Presented at:
58th Annual Meeting of the Association for Computational Linguistics (ACL), online, July 5-10, 2020
Year:
2020
Publisher:
Association for Computational Linguistics, Stroudsburg, PA
ISBN:
978-1-952148-25-5

 Record created 2020-10-15, last modified 2020-10-24

