A qualitative AI security risk assessment of autonomous vehicles
This paper systematically analyzes the security risks associated with artificial intelligence (AI) components in autonomous vehicles (AVs). Given the increasing reliance on AI for AV functions ranging from perception to control, the potential for security breaches presents a significant challenge. We focus on AI security, including attacks such as adversarial examples, backdoors, privacy breaches, and unauthorized model replication, reviewing over 170 papers. To evaluate the practical implications of such vulnerabilities, we introduce qualitative measures for assessing the exposure and severity of potential attacks. Our findings highlight a critical need for more realistic security evaluations and a more balanced focus across sensors, learning paradigms, threat models, and studied attacks. We also pinpoint areas requiring further research, such as the study of training-time attacks, attack transferability, system-level studies, and the development of effective defenses. By also outlining implications for the automotive industry and policymakers, we not only advance the understanding of AI security risks in AVs but also contribute to the development of safer and more reliable autonomous driving technologies.