Title: Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task
Authors: Mohammadshahi, Alireza; Lebret, Rémi Philippe; Aberer, Karl
Dates: 2019-12-12; 2019-11-03
DOI: 10.18653/v1/D19-6605
URL: https://infoscience.epfl.ch/handle/20.500.14299/163976
Type: text::conference output::conference proceedings::conference paper
Keywords: NLP; Deep Learning; Image; caption; retrieval

Abstract: In this paper, we propose a new approach to learning multimodal multilingual embeddings for matching images with their relevant captions in two languages. We combine two existing objective functions to bring images and captions close together in a joint embedding space while aligning the word embeddings of the languages in our model. We show that our approach enables better generalization, achieving state-of-the-art performance on the text-to-image and image-to-text retrieval tasks and on the caption-caption similarity task. Two multimodal multilingual datasets are used for evaluation: Multi30k, with German and English captions, and Microsoft-COCO, with English and Japanese captions.