Among our five senses, we rely mostly on audition and vision to perceive an environment. Our ears can detect stimuli from all directions, including from obstructed and far-away objects. Even in smoke, in harsh weather, or at night, when our eyes struggle to operate, they remain an essential source of information. Our eyes, on the other hand, add rich and instantaneous information to the auditory signals. These two senses are thus complementary; taken together, they constitute a powerful system for localization and navigation.
In this thesis, non-visual ("blind") modalities are studied to solve spatial perception problems, particularly localization and mapping in robotics. Although the potential of blind modalities has long been recognized in niche applications, such as sonar for underwater navigation and radio-frequency signals for indoor localization, blind methods have received far less attention than visual methods, and many interesting problems remain unsolved.
In the application-oriented Part I of the thesis, an indoor localization solution based on WiFi and Bluetooth measurements is proposed that, unlike competing approaches, does not require offline calibration. Next, the localization of a moving device from single, sequential distance measurements is analyzed, which constitutes a fundamental but previously unsolved variation of trilateration. This problem is solved with a closed-form algorithm that comes with recovery guarantees. Finally, a drone is equipped with microphones and a buzzer, emulating a bat, an expert in blind navigation. Algorithms are presented to detect walls using echoes as well as external sound sources. Throughout these chapters, the optimal exploitation of a device's motion, which is commonly ignored in sensing algorithms, is studied as a means of improving spatial perception.
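To make the trilateration setting concrete, the following sketch shows the classical *static* case: a device measures its distances to several anchors at known positions and recovers its location in closed form via a standard squared-distance linearization. The anchor positions and distances are made-up illustration values, and this is not the thesis's algorithm, which addresses the harder case of a moving device receiving one distance measurement at a time.

```python
import numpy as np

# Known anchor positions (hypothetical values) and the true device position.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
x_true = np.array([1.0, 1.0])
d = np.linalg.norm(anchors - x_true, axis=1)  # noiseless distance measurements

# Subtracting the first squared-distance equation from the others removes
# the quadratic term |x|^2, leaving a linear system:
#   2 (a_i - a_0)^T x = |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2
A = 2 * (anchors[1:] - anchors[0])
b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
     - d[1:] ** 2 + d[0] ** 2)

# Closed-form estimate via linear least squares.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x_hat, 6))  # recovers [1. 1.]
```

With noisy distances the same least-squares step yields an approximate estimate; the sequential single-measurement variant studied in the thesis additionally has to account for the device's motion between measurements.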
Although blind modalities carry less information than images, they can be used to extract fundamental features, such as distances or angles between objects of interest. In Part II, we therefore treat the fundamental question of localization from distance and angle measurements. When localizing from distance measurements, one can resort to distance geometry, a mature field with well-established results and methods. Far fewer results exist for angle measurements. Novel localization algorithms are thus proposed to bridge this gap, using different angle measurements to supplement or replace distances. These algorithms are useful beyond localization, as they contribute to the much broader field of low-dimensional embedding of entities, such as images, words, or abstract concepts.
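A canonical distance-geometry result alluded to above is that, given a complete matrix of pairwise distances, point coordinates can be recovered up to rotation and translation by classical multidimensional scaling (MDS). The sketch below illustrates this standard method on made-up points; it is a textbook baseline, not one of the thesis's angle-based algorithms.

```python
import numpy as np

# Made-up 2-D points, used only to generate a squared Euclidean distance matrix.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)

# Classical MDS: double-center the squared distances to obtain the Gram
# matrix of the centered points, then factor it.
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
G = -0.5 * J @ D2 @ J                    # Gram matrix
w, V = np.linalg.eigh(G)
idx = np.argsort(w)[::-1][:2]            # top-2 eigenpairs -> 2-D embedding
X_hat = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# The embedding reproduces all pairwise distances exactly.
D2_hat = np.sum((X_hat[:, None, :] - X_hat[None, :, :]) ** 2, axis=-1)
print(np.allclose(D2_hat, D2))  # True
```

The recovered coordinates differ from the originals by a rigid transformation, which is the inherent ambiguity of distance-only measurements; angle measurements, as studied in Part II, carry complementary information.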
Overall, novel algorithms, systems, and theories are proposed for spatial perception from a variety of non-visual signals. Along the way, a wide range of important problems in localization is approached from a signal-processing perspective. This enables the formulation of optimal algorithms with guarantees in some cases, and of efficient approximate solutions otherwise. Guaranteeing optimality has recently emerged as a pressing problem for achieving robust real-world operation in robotics; this thesis contributes to this ambitious goal.