Infoscience
EPFL, École polytechnique fédérale de Lausanne
Review article

Viewpoint: The Future of Human-Centric Explainable Artificial Intelligence is not Post-Hoc Explanations

Swamy, Vinitra • Frej, Jibril • Käser, Tanja
2025
Journal of Artificial Intelligence Research

Explainable Artificial Intelligence (XAI) plays a crucial role in enabling human understanding and trust in deep learning systems. As models get larger, more ubiquitous, and pervasive in aspects of daily life, explainability is necessary to minimize adverse effects of model mistakes. Unfortunately, current approaches in human-centric XAI (e.g. predictive tasks in healthcare, education, or personalized ads) tend to rely on a single post-hoc explainer, whereas recent work has identified systematic disagreement between post-hoc explainers when applied to the same instances of underlying black-box models. In this viewpoint paper, we therefore present a call for action to address the limitations of current state-of-the-art explainers. We propose a shift from post-hoc explainability to designing interpretable neural network architectures. We identify five needs of human-centric XAI (real-time, accurate, actionable, human-interpretable, and consistent) and propose two possible routes forward for interpretable-by-design neural network workflows (adaptive routing and temporal diagnostics). We postulate that the future of human-centric XAI is neither in explaining black-boxes nor in reverting to traditional, interpretable models, but in neural networks that are intrinsically interpretable.
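The explainer disagreement the abstract refers to can be illustrated with a minimal, self-contained sketch (not from the paper): two common attribution styles, gradient-times-input and occlusion, applied to the same toy "black-box" network can rank feature importance differently for the same input. The network weights and input below are arbitrary choices for illustration.

```python
import numpy as np

# Toy "black-box" model: a fixed two-layer network with hand-set weights.
W1 = np.array([[1.0, -2.0, 0.5],
               [0.5,  1.0, -1.0]])
b1 = np.array([0.1, -0.1, 0.0])
W2 = np.array([1.0, -1.0, 2.0])

def model(x):
    """Scalar output of the toy network."""
    return float(np.tanh(x @ W1 + b1) @ W2)

x = np.array([0.8, -0.3])  # a single instance to explain

def grad_x_input(f, x, eps=1e-5):
    """Explainer 1: gradient-times-input, with the gradient
    estimated by central finite differences."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g * x

def occlusion(f, x, baseline=0.0):
    """Explainer 2: occlusion. Attribution for feature i is the
    output drop when that feature is replaced by a baseline value."""
    attr = np.zeros_like(x)
    for i in range(len(x)):
        x0 = x.copy()
        x0[i] = baseline
        attr[i] = f(x) - f(x0)
    return attr

a1 = grad_x_input(model, x)
a2 = occlusion(model, x)
print("gradient x input:", a1)
print("occlusion:       ", a2)
print("feature ranking agrees:",
      np.array_equal(np.argsort(-np.abs(a1)), np.argsort(-np.abs(a2))))
```

Both explainers are locally faithful in their own sense, yet nothing forces their attributions (or even their feature rankings) to agree on a nonlinear model; this is the kind of inconsistency the authors argue motivates interpretable-by-design architectures.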

Files
Name: 10.1613_jair.1.17970.pdf
Type: Main Document
Version: Published version
Access type: Open access
License Condition: CC BY
Size: 2.32 MB
Format: Adobe PDF
Checksum (MD5): 0ae7bdcf0d4784b621b6aee36466a99b

Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.