doctoral thesis

A Human-Centric Approach to Explainable AI for Personalized Education

Swamy, Vinitra  
2025

Deep neural networks form the backbone of artificial intelligence (AI) research, with the potential to transform the human experience in areas ranging from autonomous driving to personal assistants, healthcare to education. However, their integration into the daily routines of real-world classrooms remains limited. It is not yet common for a teacher to assign students individualized homework targeting their specific weaknesses, provide students with instant feedback, or simulate student responses to a new exam question. Although these models excel in predictive performance, their limited adoption can be attributed to a significant weakness: model decisions are not explainable, which erodes the trust of students, parents, and teachers.

This thesis aims to bring human needs to the forefront of eXplainable AI (XAI) research, grounded in the concrete use case of personalized learning and teaching. We frame the contributions along two verticals: technical advances in XAI and their aligned human studies.

We begin with a generalizable approach to student modeling evaluated at scale across 26 online courses with over 100,000 students and millions of student interactions. To enable personalized learning interventions, we evaluate five state-of-the-art explainability methods, finding systematic disagreement between explainers when they are evaluated for the same students and the same models. We then turn to expert educators for ground truth evaluation; they find strong actionability value in explanations, but disagree regarding which explanations are trustworthy.
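To make the notion of explainer disagreement concrete, the sketch below (an illustration written for this summary, not code from the thesis) compares two attribution vectors, such as those LIME and SHAP might produce for the same student and the same model, using Spearman rank correlation and top-k feature overlap.

    # Illustrative sketch (not the thesis' implementation): quantify how much
    # two post-hoc explainers disagree about one student's prediction.
    import numpy as np
    from scipy.stats import spearmanr

    def explainer_disagreement(attr_a, attr_b, k=5):
        """attr_a, attr_b: feature-importance scores from two explainers
        (e.g. LIME and SHAP) for the same student and the same model."""
        attr_a, attr_b = np.asarray(attr_a), np.asarray(attr_b)
        # Rank correlation of absolute importances (1.0 = identical ranking).
        rho, _ = spearmanr(np.abs(attr_a), np.abs(attr_b))
        # Jaccard overlap of the k most important features per explainer.
        top_a = set(np.argsort(-np.abs(attr_a))[:k])
        top_b = set(np.argsort(-np.abs(attr_b))[:k])
        jaccard = len(top_a & top_b) / len(top_a | top_b)
        return {"spearman": rho, "top_k_jaccard": jaccard}

    # Hypothetical attributions over 8 behavioral features.
    lime_attr = [0.40, -0.10, 0.00, 0.30, -0.20, 0.05, 0.00, 0.10]
    shap_attr = [0.10, 0.50, -0.30, 0.00, 0.20, 0.00, -0.10, 0.05]
    print(explainer_disagreement(lime_attr, shap_attr, k=3))

Low rank correlation or low top-k overlap for the same student is the kind of systematic disagreement between explainers described above.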

This thesis therefore presents a shift away from popular approximation-based explainability methods towards model architectures that are inherently interpretable. We propose four complementary technical contributions to enhance interpretability:

  • MultiModN, an interpretable, modular, multimodal model that offers accurate predictions and explanations even with missing data, at a fraction of the parameter count.
  • InterpretCC, an interpretable mixture-of-experts model that uses adaptive sparsity to produce concise explanations without sacrificing performance (a minimal sketch of this idea follows the list).
  • An exploration of adversarial training to improve the consistency and stability of post-hoc explainers in educational settings.
  • iLLuMinaTE, an LLM-XAI pipeline that generates user-friendly, actionable, zero-shot explanations as a communication layer for XAI.
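To illustrate the kind of inherently interpretable design InterpretCC stands for, here is a minimal, assumed sketch of a sparsely gated mixture-of-experts in PyTorch; the class name, feature grouping, and gating scheme are placeholders for illustration, not the thesis' architecture. Each feature group gets its own small expert, a gating network opens only a few groups per prediction, and the open gates themselves serve as the explanation.

    # Illustrative sketch only (not InterpretCC itself): a sparsely gated
    # mixture-of-experts whose hard gates double as the explanation.
    import torch
    import torch.nn as nn

    class SparseGatedMoE(nn.Module):
        def __init__(self, group_dims, hidden=32, n_classes=2):
            super().__init__()
            self.group_dims = group_dims
            # One small expert per interpretable feature group.
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))
                for d in group_dims
            )
            # Gating network: one logit per feature group.
            self.gate = nn.Linear(sum(group_dims), len(group_dims))

        def forward(self, x, tau=1.0):
            # Hard 0/1 gates with a straight-through gradient estimator.
            soft = torch.sigmoid(self.gate(x) / tau)
            gates = (soft > 0.5).float() + soft - soft.detach()
            # Route each feature group through its expert; sum the gated outputs.
            out, start = 0.0, 0
            for i, d in enumerate(self.group_dims):
                out = out + gates[:, i:i + 1] * self.experts[i](x[:, start:start + d])
                start += d
            return out, gates  # gates[j, i] == 1 means group i was used for student j

    # Two hypothetical feature groups (e.g. clickstream, quiz behavior) of sizes 4 and 6.
    model = SparseGatedMoE([4, 6])
    logits, gates = model(torch.randn(8, 10))

The returned gate matrix states, per student, which feature groups the model actually consulted; that is what makes the explanation intrinsic rather than post-hoc.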

This thesis also places a strong emphasis on measuring human perception, actionability, and the real-world usefulness of its technical contributions. We conducted four human-AI user studies:

  • 26 university professors participate in semi-structured interviews validating post-hoc explanations for course design across LIME, SHAP, and confounder-based explanations.
  • 56 teachers evaluate hybrid (visual and text) explanations in the first known perception study of intrinsically interpretable models.
  • 20 expert learning scientists participate in semi-structured interviews to measure perceptions of inconsistency in explanations.
  • 114 university students evaluate the actionability of mid-semester feedback explanations.

By combining empirical evaluations of existing explainers with novel architectural designs and human studies, our work lays a foundation for human-centric AI systems that balance state-of-the-art performance with built-in transparency and trust.

Files

  • Name: EPFL_TH11014.pdf
  • Type: Main Document
  • Version: Not Applicable (or Unknown)
  • Access type: openaccess
  • License Condition: N/A
  • Size: 32.3 MB
  • Format: Adobe PDF
  • Checksum (MD5): dbdd8a46fc2300715a3802733b005976
