Reshaping Perception for Autonomous Driving with Semantic Keypoints
The field of artificial intelligence is set to fuel the future of mobility by driving the transition from advanced driver-assistance systems to fully autonomous vehicles (AVs). Yet current technology, even when backed by cutting-edge deep learning techniques, still leads to fatal accidents and fails to inspire trust. Current frameworks for 3D perception tasks, such as 3D object detection, are inadequate because they (i) do not generalize well to new scenarios, (ii) do not provide measures of confidence in their predictions, and (iii) are not suitable for large-scale deployment, as they rely mainly on costly LiDAR sensors.
This doctoral thesis studies vision-based deep learning frameworks that can accurately perceive the world in 3D and generalize to new scenarios. We propose to escape the pixel domain using semantic keypoints, a sparse representation of every object in the scene that carries meaningful information for 2D and 3D reasoning. The low dimensionality of this representation allows downstream neural networks to focus on the essential elements of the scene and improves their generalization capabilities. Furthermore, motivated by the limitation that deep learning architectures typically output only point estimates, we study how to estimate a confidence interval for each prediction. In particular, we emphasize vulnerable road users, such as pedestrians and cyclists, and explicitly address the long tail of 3D pedestrian detection to contribute to the safety of our roads. We further show the efficacy of our framework in multiple real-world domains by (a) integrating it into an existing AV pipeline, (b) detecting human-robot eye contact in real-world scenarios, and (c) helping verify compliance with safety measures during the COVID-19 outbreak. Finally, we publicly release the source code of all our projects and develop a unified library to contribute to an open-science mission.
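As a minimal illustration of the semantic-keypoint idea described above (not the thesis implementation), the sketch below assumes a COCO-style set of 17 body joints per pedestrian and shows how a detected person can be reduced from a full image to a short feature vector that a small downstream network could map to 3D quantities. All names, shapes, and the 17-joint convention are assumptions made for this example.

```python
import numpy as np

# Hypothetical example: a toy semantic-keypoint representation.
# We assume 17 COCO-style joints, each with (u, v) pixel coordinates
# and a per-joint detection confidence.
NUM_KEYPOINTS = 17


def keypoints_to_feature(keypoints_uvc: np.ndarray) -> np.ndarray:
    """Flatten one person's 2D keypoints into a low-dimensional vector.

    keypoints_uvc: array of shape (NUM_KEYPOINTS, 3) holding pixel
    coordinates and a confidence score for each joint.
    Returns a vector of length NUM_KEYPOINTS * 3 that a compact
    downstream network could use for 3D reasoning (e.g., distance
    and orientation of the pedestrian).
    """
    assert keypoints_uvc.shape == (NUM_KEYPOINTS, 3)
    return keypoints_uvc.reshape(-1)


# Example: one detected pedestrian becomes 51 numbers instead of a full image.
person = np.random.rand(NUM_KEYPOINTS, 3)
feature = keypoints_to_feature(person)
print(feature.shape)  # (51,)
```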