Abstract

Modern cars can support their drivers by assessing and autonomously performing different driving maneuvers based on information gathered by in-car sensors. We propose that brain–machine interfaces (BMIs) can provide complementary information that eases the interaction with intelligent cars and enhances the driving experience. In our approach, the human remains in control, while a BMI monitors the driver's cognitive state and uses that information to modulate the assistance provided by the intelligent car. In this paper, we gather our proof-of-concept studies demonstrating the feasibility of decoding electroencephalography correlates of upcoming actions, as well as correlates reflecting whether the decisions of driving assistance systems are in line with the driver's intentions. Experimental results obtained while driving both simulated and real cars consistently showed neural signatures of anticipation, movement preparation, and error processing. Remarkably, despite the increased noise inherent to real-world scenarios, these signals can be decoded on a single-trial basis, reflecting some of the cognitive processes that take place while driving. However, the moderate decoding performance, compared to controlled experimental BMI paradigms, indicates that there is room for improvement in the machine learning methods typically used in state-of-the-art BMIs. We foresee that the fusion of neural correlates with information extracted from other physiological measures, e.g., eye movements or electromyography, as well as contextual information gathered by in-car sensors, will allow intelligent cars to provide timely and tailored assistance only when it is required, thus keeping users in the loop and allowing them to fully enjoy the driving experience.
