On Image Auto-Annotation with Latent Space Models

Image auto-annotation, i.e., the association of words with whole images, has attracted considerable attention. In particular, unsupervised, probabilistic latent variable models of text and image features have shown encouraging results, but their performance with respect to other approaches remains unknown. In this paper, we apply and compare two simple latent space models commonly used in text analysis, namely Latent Semantic Analysis (LSA) and Probabilistic LSA (PLSA). Annotation strategies for each model are discussed. Remarkably, we found that, on an 8000-image dataset, a classic LSA model defined on keywords and a very basic image representation performed as well as much more complex, state-of-the-art methods. Furthermore, non-probabilistic methods (LSA and direct image matching) outperformed PLSA on the same dataset.
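To make the LSA annotation strategy concrete, the following is a minimal sketch (not the paper's exact method) of how annotation via a latent space can work: a training matrix of image feature vectors is factored with a truncated SVD, a query image is folded into the resulting latent space, and keywords are propagated from its nearest training images. All names (`lsa_annotate`, the toy features, the rank `k`) are illustrative assumptions.

```python
import numpy as np

def lsa_annotate(X, keywords, q, k=2, n_neighbors=1):
    """Annotate a query image by folding it into an LSA latent space.

    X        : (n_features, n_images) training matrix; columns are image
               feature vectors (keyword indicator rows could be stacked in).
    keywords : list of keyword lists, one per training image.
    q        : (n_features,) query feature vector.
    """
    # Rank-k truncated SVD: X ~ U_k S_k V_k^T
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :]
    # Fold the query into the latent space: q_hat = S_k^{-1} U_k^T q
    q_lat = (Uk.T @ q) / sk
    # Each row of Vk.T is one training image in the latent space.
    docs = Vk.T
    sims = docs @ q_lat / (
        np.linalg.norm(docs, axis=1) * np.linalg.norm(q_lat) + 1e-12
    )
    # Propagate keywords from the most similar training images.
    annotations = []
    for i in np.argsort(-sims)[:n_neighbors]:
        for w in keywords[i]:
            if w not in annotations:
                annotations.append(w)
    return annotations
```

On a toy matrix with two clusters of feature vectors (two "sky" images and two "grass" images), a query near the first cluster is annotated with "sky". The same skeleton accommodates the paper's comparison: direct image matching replaces the latent projection with similarity in the raw feature space, and PLSA replaces the SVD with an EM-trained aspect model.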

Published in:
Proc. ACM Int. Conf. on Multimedia (ACM MM)
IDIAP-RR 03-31


Record created 2006-03-10, last modified 2020-07-30
