Object Classification Based on Unsupervised Learned Multi-Modal Features For Overcoming Sensor Failures

For autonomous driving applications it is critical to know which types of road users and roadside infrastructure are present in order to plan driving manoeuvres accordingly. Autonomous cars are therefore equipped with different sensor modalities to robustly perceive their environment. However, for classification modules based on machine learning techniques it is challenging to overcome unseen sensor noise. This work presents an object classification module operating on unsupervised learned multi-modal features with the ability to overcome gradual or total sensor failure. A two-stage approach is presented, composed of unsupervised feature training followed by the training of uni-modal and multi-modal classifiers. We propose a simple but effective decision module that switches between uni-modal and multi-modal classifiers based on the closeness in the feature space to the training data. Evaluations on the ModelNet40 data set show that the proposed approach achieves a 14% accuracy gain over a late fusion approach when operating on noisy point cloud data and a 6% accuracy gain when operating on noisy image data.
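The switching idea in the abstract can be illustrated with a minimal sketch. The nearest-neighbour distance metric, the per-modality thresholds (`tau_img`, `tau_pc`), and the classifier names below are illustrative assumptions, not the paper's actual implementation: a modality whose encoded feature lies far from the training data in feature space is treated as degraded, and the module falls back to a uni-modal classifier for the remaining modality.

```python
import numpy as np

def nn_distance(feature, train_features):
    """Distance from a feature vector to its nearest training feature."""
    return float(np.min(np.linalg.norm(train_features - feature, axis=1)))

def select_classifier(img_feat, pc_feat, train_img, train_pc, tau_img, tau_pc):
    """Pick a classifier head based on closeness to the training data.

    Hypothetical thresholds tau_*: a modality whose encoded feature lies
    farther than tau from the training set is considered degraded
    (e.g. unseen sensor noise or sensor failure).
    """
    img_ok = nn_distance(img_feat, train_img) <= tau_img
    pc_ok = nn_distance(pc_feat, train_pc) <= tau_pc
    if img_ok and pc_ok:
        return "multi_modal"
    if img_ok:
        return "image_only"
    if pc_ok:
        return "point_cloud_only"
    return "multi_modal"  # no reliable modality left; fall back to the fused head
```

In this sketch, a clean image feature paired with a far-off point cloud feature routes the sample to the image-only classifier, mirroring the reported robustness to noisy point cloud data.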

Presented at:
2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, May 20-24, 2019
Aug 12 2019

