Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness
This study aims to comprehensively explore the complexities of integrating Artificial Intelligence (AI) into Autonomous Vehicles (AVs), examining the challenges introduced by AI components and their impact on testing procedures. The research focuses on essential requirements for trustworthy AI, including cybersecurity, transparency, robustness, and fairness. We first analyse the role of AI at the most relevant operational layers of AVs, and discuss the implications of the EU’s AI Act on AVs, highlighting the importance of the concept of a safety component. Using an expert opinion-based methodology, involving an interdisciplinary workshop with 21 academics and a subsequent in-depth analysis by a smaller group of experts, this study provides a state-of-the-art overview of the current landscape of vehicle regulation and standards, including ex-ante, post-hoc, and accident investigation processes, highlighting the need for new testing methodologies for both Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS). The study also provides a detailed analysis of cybersecurity audits, explainability in AI decision-making processes and protocols for assessing the robustness and ethical behaviour of predictive systems in AVs. The analysis highlights significant challenges and suggests future directions for research and development of AI in AV technology, emphasising the need for multidisciplinary expertise. The study’s conclusions have relevant implications for the development of trustworthy AI systems, vehicle regulations, and the safe deployment of AVs.
Published version (open access, CC BY): 10.1186/s12544-025-00732-x