Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task

In this paper, we propose a new approach to learning multimodal multilingual embeddings for matching images with their relevant captions in two languages. We combine two existing objective functions to bring images and their captions close together in a joint embedding space, while adapting the alignment of the word embeddings across the two languages in our model. We show that our approach generalizes better, achieving state-of-the-art performance on the text-to-image and image-to-text retrieval tasks and on a caption-caption similarity task. Two multimodal multilingual datasets are used for evaluation: Multi30k, with German and English captions, and Microsoft COCO, with English and Japanese captions.
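The abstract does not spell out the two objective functions, but a combined loss of this kind is commonly built from a bidirectional max-margin ranking term over matching image-caption pairs plus a cross-lingual alignment term over word embeddings. The PyTorch sketch below illustrates one plausible combination under those assumptions; the function names, the linear map W, the bilingual-lexicon tensors, and the weight lam are all hypothetical stand-ins, not the authors' implementation.

import torch
import torch.nn.functional as F


def ranking_loss(img_emb, cap_emb, margin=0.2):
    """Bidirectional hinge loss over a batch of matching (image, caption) pairs.

    img_emb, cap_emb: (batch, dim) L2-normalized embeddings; row i of each
    tensor corresponds to the same image-caption pair.
    """
    scores = img_emb @ cap_emb.t()        # cosine similarities, (batch, batch)
    diag = scores.diag().view(-1, 1)      # similarity of the true pairs
    # Caption retrieval: the true caption should beat others by the margin.
    cost_cap = (margin + scores - diag).clamp(min=0)
    # Image retrieval: the true image should beat others by the margin.
    cost_img = (margin + scores - diag.t()).clamp(min=0)
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    cost_cap = cost_cap.masked_fill(mask, 0)   # ignore the true pairs themselves
    cost_img = cost_img.masked_fill(mask, 0)
    return cost_cap.sum() + cost_img.sum()


def alignment_loss(src_words, tgt_words, mapping):
    """Pull word embeddings of translation pairs together under a linear map.

    src_words, tgt_words: (n_pairs, dim) embeddings from a bilingual lexicon;
    mapping: (dim, dim) matrix taking source-language vectors into the
    target-language space (an assumed scheme; the paper's may differ).
    """
    return F.mse_loss(src_words @ mapping, tgt_words)


# Hypothetical usage with random tensors standing in for encoder outputs:
dim, batch, n_pairs = 128, 32, 100
img = F.normalize(torch.randn(batch, dim), dim=1)
cap = F.normalize(torch.randn(batch, dim), dim=1)
src = torch.randn(n_pairs, dim)
tgt = torch.randn(n_pairs, dim)
W = torch.eye(dim, requires_grad=True)

lam = 1.0  # assumed hyperparameter balancing the two terms
loss = ranking_loss(img, cap) + lam * alignment_loss(src, tgt, W)
loss.backward()

In a sketch like this, the ranking term shapes the joint image-caption space while the alignment term keeps the two languages' word embeddings compatible, so captions in either language can retrieve the same images.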


Published in:
Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pp. 27-33
Presented at:
2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Hong Kong, China, November 3-7, 2019
Year:
2019
Publisher:
Association for Computational Linguistics, Hong Kong