Summary

Digital images, taken for example with a smartphone, are usually geo-tagged with location and viewing-direction information. This information is not always accurate, for example because of GPS inaccuracies in large cities. The aim of this project is to develop an algorithm that takes an image and estimates its location and viewing direction by key-point matching against Google Street View data. As a first step, the original image will be geo-tagged, allowing easy pruning of the Street View data. The project could then progress to localizing a non-geo-tagged image using some additional location information, such as a street name or a point of interest. Such an algorithm could be applied to localizing the large number of images on the Internet that have unreliable or no geo-tagging information. In addition, one can envision such a system being employed in indoor environments, where GPS currently fails.

The student will be expected to deliver software that takes an image with imprecise geo-tagging and matches it to Google Street View images with a similar location. The algorithm should return the location and orientation of the best-matching Street View image. There are two possible extensions to this main requirement:

1. Use matches in multiple Street View images to estimate a more accurate location and orientation, lying between the Street View images.
2. Improve the recognition stage so that the algorithm can cope with less accurate geo-tagging.
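The core of the matching stage described above is comparing key-point descriptors between the query image and each candidate Street View image, then selecting the candidate with the strongest agreement. The sketch below illustrates one common approach: nearest-neighbour descriptor matching with Lowe's ratio test, scored by match count. It is a minimal illustration with hypothetical function names and data layouts (descriptors as NumPy arrays, e.g. as produced by SIFT or ORB), not the project's required implementation.

```python
import numpy as np

def count_ratio_test_matches(query, candidate, ratio=0.75):
    """Count query descriptors whose nearest neighbour in `candidate`
    passes Lowe's ratio test (nearest clearly closer than second-nearest).

    query, candidate: (N, D) arrays of key-point descriptors
    (hypothetical layout, e.g. SIFT/ORB feature vectors).
    """
    # Pairwise Euclidean distances between the two descriptor sets.
    dists = np.linalg.norm(query[:, None, :] - candidate[None, :, :], axis=2)
    matches = 0
    for row in dists:
        # Two smallest distances: nearest and second-nearest neighbour.
        nearest, second = np.partition(row, 1)[:2]
        if nearest < ratio * second:  # keep only distinctive matches
            matches += 1
    return matches

def best_street_view(query_desc, candidate_descs):
    """Return (index, score) of the candidate Street View image whose
    descriptors best match the query image's descriptors."""
    scores = [count_ratio_test_matches(query_desc, c) for c in candidate_descs]
    best = int(np.argmax(scores))
    return best, scores[best]
```

In a real pipeline the candidate set would first be pruned using the image's (imprecise) geo-tag, and the winning candidate's stored location and heading would be returned as the estimate; a geometric verification step (e.g. a homography or essential-matrix fit on the matched key points) would typically follow to reject spurious winners.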
