Adaptive relevance feedback for large-scale image retrieval

Content-based image retrieval aims to replace traditional indexing based on manual annotation with automatically extracted visual features. However, novel techniques are needed to deal efficiently with the semantic gap (i.e., the partial match between low-level features and the semantic visual content). Here, we investigate a query-free retrieval approach first proposed by Ferecatu and Geman. This approach relies solely on an iterative relevance feedback mechanism that drives a heuristic sampling of the collection and explicitly takes the semantic gap into account. Our contributions address three complementary aspects. First, we formalize a large-scale approach based on a hierarchical, tree-like organization of the images computed off-line. Second, we propose a versatile modulation of the exploration/exploitation trade-off based on the consistency of the system's internal states between successive iterations. Third, we elaborate a long-term optimization of the similarity metric based on user search-session logs accumulated off-line. We implemented a web application that integrates all our contributions and distribute it under the AGPL Version 3 free software license. We organized user-based evaluation campaigns using the ImageNet dataset, and show empirically that our contributions significantly improve the retrieval performance of the original framework, that they are complementary to each other, and that their overall integration is consistently beneficial.
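The iterative relevance feedback loop with a modulated exploration/exploitation trade-off can be illustrated with a minimal sketch. This is not the paper's algorithm: feature vectors are plain tuples, the user is simulated as always picking the displayed image nearest a hidden target, and the consistency-based modulation is replaced by a simple heuristic (a large jump of the estimate between iterations raises the exploration rate; a stable estimate lowers it). The function and parameter names (`feedback_search`, `n_display`, `explore`) are all illustrative assumptions.

```python
import math
import random


def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def feedback_search(images, target, start, n_display=4, n_iters=15, seed=0):
    """One query-free search session driven only by relevance feedback.

    Each iteration displays a mix of 'exploitation' candidates (nearest the
    current estimate of the sought image) and 'exploration' candidates
    (sampled at random from the collection). A simulated user then picks the
    best displayed image, and the exploration rate is adjusted according to
    how far that pick moved the estimate -- a crude stand-in for the
    consistency-based modulation described in the abstract.
    """
    rng = random.Random(seed)
    estimate = start
    explore = 0.5  # fraction of the display drawn at random
    for _ in range(n_iters):
        n_explore = max(1, round(explore * n_display))
        ranked = sorted(images, key=lambda im: dist(im, estimate))
        display = ranked[: n_display - n_explore] + rng.sample(images, n_explore)
        # Simulated user: pick the best among the display and the current estimate.
        pick = min(display + [estimate], key=lambda im: dist(im, target))
        moved = dist(pick, estimate)
        # Inconsistent feedback (large jump) -> explore more; stable -> exploit more.
        explore = min(0.9, explore * 1.5) if moved > 0.3 else max(0.1, explore * 0.7)
        estimate = pick
    return estimate
```

Because the simulated user never picks an image worse than the current estimate, the estimate's distance to the target is non-increasing across iterations; the random exploration samples are what let the session escape a poor neighborhood of the collection.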

Published in:
Multimedia Tools and Applications, 75, 12, 6777–6807

 Record created 2016-12-19, last modified 2018-09-13
