Learning to Match Aerial Images with Deep Attentive Architectures

Image matching is a fundamental problem in Computer Vision. In the context of feature-based matching, SIFT and its variants have long excelled in a wide array of applications. However, for ultra-wide baselines, as in the case of aerial images captured under large camera rotations, the appearance variation goes beyond the reach of SIFT and RANSAC. In this paper we propose a data-driven, deep learning-based approach that sidesteps local correspondence by framing the problem as a classification task. Furthermore, we demonstrate that local correspondences can still be useful. To do so, we incorporate an attention mechanism that produces a set of probable matches, which allows us to further increase performance. We train our models on a dataset of urban aerial imagery consisting of 'same' and 'different' pairs, collected for this purpose, and characterize the problem via a human study with annotations from Amazon Mechanical Turk. We demonstrate that our models outperform the state of the art on ultra-wide baseline matching and close the gap with human performance.
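The abstract describes two technical ingredients: a two-stream network that classifies an image pair as 'same' or 'different', and an attention mechanism over probable matches. The paper's actual architecture is not reproduced in this record; the following is a minimal PyTorch sketch of the general idea, in which every layer size and the SpatialAttention module are illustrative assumptions rather than the authors' design.

# A minimal, hypothetical sketch of pairwise matching as classification
# with attention pooling. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Predicts a spatial weight map over conv features and pools them,
    loosely mirroring the idea of attending to probable matches
    (an assumption; not the authors' exact formulation)."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, f):
        # f: (B, C, H, W) -> attention-weighted descriptor (B, C)
        w = self.score(f).flatten(2).softmax(dim=2)   # (B, 1, H*W)
        return (f.flatten(2) * w).sum(dim=2)          # (B, C)

class PairClassifier(nn.Module):
    """Two-stream CNN with shared weights that labels an aerial image
    pair as 'same scene' vs. 'different scene'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.attend = SpatialAttention(128)
        self.classifier = nn.Sequential(
            nn.Linear(2 * 128, 256), nn.ReLU(),
            nn.Linear(256, 2),  # logits: [different, same]
        )

    def forward(self, a, b):
        va = self.attend(self.features(a))  # shared weights: siamese streams
        vb = self.attend(self.features(b))
        return self.classifier(torch.cat([va, vb], dim=1))

# Usage: score a batch of eight 128x128 RGB pairs.
model = PairClassifier()
a = torch.randn(8, 3, 128, 128)
b = torch.randn(8, 3, 128, 128)
prob_same = model(a, b).softmax(dim=1)[:, 1]

Training such a sketch would minimize cross-entropy over the 'same'/'different' labels; the attention weights give a soft indication of which image regions drive the decision.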


Published in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Presented at: Computer Vision and Pattern Recognition, Las Vegas, Nevada, USA, June 27-30, 2016
Year: 2016




