View-Based Maps

Robotic systems that can create and use visual maps in real time have obvious advantages in many applications, from automatic driving to mobile manipulation in the home. In this paper we describe a mapping system based on retaining views of the environment that are collected as the robot moves. Connections among the views are formed by consistent geometric matching of their features. The key problem we solve is how to efficiently find and match a new view to the set of views already collected. Our approach uses a vocabulary tree to propose candidate views, and a new compact feature descriptor that makes view matching very fast; essentially, the robot continually re-recognizes where it is. We present experiments showing the utility of the approach on video data, including map building in large environments, map building without localization, and re-localization when lost.

Presented at: Robotics: Science and Systems Conference, Seattle, June 2009
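As a rough illustration of the matching loop the abstract describes (propose candidate views, then keep only geometrically consistent matches), the Python sketch below stores one descriptor set per view, proposes candidates with a bag-of-words similarity score as a stand-in for the vocabulary tree, and links views that pass a simple mutual-nearest-neighbour test as a stand-in for geometric verification. The class, helper names, and thresholds are illustrative assumptions, not the paper's implementation.

# Toy sketch of a view-based map loop. Candidate proposal uses a
# bag-of-words histogram score (stand-in for the paper's vocabulary tree);
# verification counts mutual nearest-neighbour matches (stand-in for the
# geometric consistency check). Names and thresholds are illustrative.
import numpy as np

class ViewMap:
    def __init__(self, num_words=64, min_inliers=20):
        self.views = []          # list of (descriptors, word_histogram)
        self.links = []          # list of (view_i, view_j) connections
        # Random "vocabulary": each word is a prototype descriptor.
        self.words = np.random.randn(num_words, 32)
        self.min_inliers = min_inliers

    def _histogram(self, descs):
        # Quantize each descriptor to its nearest vocabulary word.
        ids = np.argmin(((descs[:, None] - self.words[None]) ** 2).sum(-1), axis=1)
        hist = np.bincount(ids, minlength=len(self.words)).astype(float)
        return hist / max(hist.sum(), 1.0)

    def _match_count(self, a, b):
        # Count mutual nearest neighbours as a crude proxy for the
        # geometrically consistent matches used in the paper.
        d = ((a[:, None] - b[None]) ** 2).sum(-1)
        ab, ba = d.argmin(1), d.argmin(0)
        return int(np.sum(ba[ab] == np.arange(len(a))))

    def add_view(self, descs, top_k=3):
        hist = self._histogram(descs)
        # Propose candidates by histogram similarity (vocabulary-tree stand-in).
        scores = [float(np.minimum(hist, h).sum()) for _, h in self.views]
        order = np.argsort(scores)[::-1][:top_k]
        new_id = len(self.views)
        for cand in order:
            # "Re-recognition": link the new view if enough matches survive.
            if self._match_count(descs, self.views[cand][0]) >= self.min_inliers:
                self.links.append((new_id, int(cand)))
        self.views.append((descs, hist))
        return new_id

# Usage with random stand-in descriptors (here 32-D, mimicking a compact descriptor).
vm = ViewMap()
for _ in range(5):
    vm.add_view(np.random.randn(200, 32))
print("views:", len(vm.views), "links:", vm.links)

With real image features, the histogram score and the mutual-match test would be replaced by the vocabulary-tree lookup and the geometric (pose-consistent) verification described in the abstract; the sketch only shows how new views are proposed against, checked against, and linked into the existing set.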
