# PLSA-based Image Auto-Annotation: Constraining the Latent Space

We address the problem of unsupervised image auto-annotation with probabilistic latent space models. Unlike most previous work, which builds latent space representations assuming equal relevance for the text and visual modalities, we propose a new way of modeling multi-modal co-occurrences, constraining the definition of the latent space to ensure its consistency in semantic terms (words) while retaining the ability to jointly model visual information. The concept is implemented by a linked pair of Probabilistic Latent Semantic Analysis (PLSA) models. In extensive experiments on a 16,000-image collection, using various performance measures, we show that our approach significantly outperforms previous joint models.
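As background for the abstract above, the following is a minimal sketch of the standard PLSA model fit by EM on a document-word co-occurrence matrix, in the style of Hofmann's original formulation. It is not the authors' linked-pair model: the function name `plsa`, the toy data, and all parameter choices are illustrative assumptions. In the paper's setting, one such model would be fit on the word modality to fix the latent variables, with a second, linked model handling the visual co-occurrences.

```python
import numpy as np

def plsa(n_dw, n_topics, n_iter=50, seed=0):
    """Fit basic PLSA by EM (illustrative sketch, not the paper's linked model).

    n_dw: (D, W) document-word co-occurrence counts n(d, w).
    Returns p_z_d (D, K) = P(z|d) and p_w_z (K, W) = P(w|z).
    """
    rng = np.random.default_rng(seed)
    D, W = n_dw.shape
    # Random normalized initialization of the two conditional distributions.
    p_z_d = rng.random((D, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, W))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) ∝ P(z|d) * P(w|z), shape (D, K, W).
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: reweight by counts, n(d,w) * P(z|d,w).
        weighted = n_dw[:, None, :] * joint
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

Given a fitted model, word predictions for a document follow by marginalizing over topics, `P(w|d) = sum_z P(w|z) P(z|d)`, i.e. `p_z_d @ p_w_z`; auto-annotation ranks words by this score for an unseen image folded into the latent space.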

Keywords: vision

Note:

Published in Proc. ACM Multimedia 2004.

#### Reference

• EPFL-REPORT-83116

Record created on 2006-03-10, modified on 2017-05-10