Infoscience

conference paper

iLLuMinaTE: An LLM-XAI Framework Leveraging Social Science Explanation Theories Towards Actionable Student Performance Feedback

Swamy, Vinitra • Romano, Davide • Desikan, Bhargav • Camburu, Oana-Maria • Käser, Tanja
February 25, 2025
Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence
The 39th Annual AAAI Conference on Artificial Intelligence

Recent advances in eXplainable AI (XAI) for education have highlighted a critical challenge: ensuring that explanations for state-of-the-art models are understandable for non-technical users such as educators and students. In response, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI pipeline inspired by Miller (2019)'s cognitive model of explanation. iLLuMinaTE is designed to deliver theory-driven, actionable feedback to students in online courses. iLLuMinaTE navigates three main stages — causal connection, explanation selection, and explanation presentation — with variations drawing from eight social science theories (e.g. Abnormal Conditions, Pearl's Model of Explanation, Necessity and Robustness Selection, Contrastive Explanation). We extensively evaluate 21,915 natural language explanations of iLLuMinaTE extracted from three LLMs (GPT-4o, Gemma2-9B, Llama3-70B), with three different underlying XAI methods (LIME, Counterfactuals, MC-LIME), across students from three diverse online courses. Our evaluation involves analyses of explanation alignment to the social science theory, understandability of the explanation, and a real-world user preference study with 114 university students containing a novel actionability simulation. We find that students prefer iLLuMinaTE explanations over traditional explainers 89.52% of the time. Our work provides a robust, ready-to-use framework for effectively communicating hybrid XAI-driven insights in education, with significant generalization potential for other human-centric fields.
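The three-stage chain-of-prompts structure described in the abstract can be sketched in outline. This is a minimal illustration only, not the authors' implementation: the prompt templates, the `call_llm` stub (standing in for an API call to GPT-4o, Gemma2-9B, or Llama3-70B), and the feature-importance input format are all hypothetical.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call (hypothetical)."""
    return f"[LLM response to: {prompt[:40]}...]"

def illuminate_pipeline(xai_features: dict, theory: str) -> str:
    """Chain three prompts, passing each stage's output to the next."""
    # Stage 1: causal connection — relate XAI feature importances
    # (e.g. from LIME or counterfactuals) to the predicted outcome.
    causal = call_llm(
        "Given these feature importances from an XAI method: "
        f"{xai_features}, describe how each feature may have "
        "influenced the student's predicted performance."
    )
    # Stage 2: explanation selection — filter the causes through a
    # chosen social science theory of explanation.
    selected = call_llm(
        f"Using the '{theory}' theory of explanation, select the most "
        f"relevant causes from: {causal}"
    )
    # Stage 3: explanation presentation — phrase the selected causes
    # as actionable feedback for the student.
    return call_llm(
        "Rewrite the following as concise, actionable feedback for a "
        f"student in an online course: {selected}"
    )

feedback = illuminate_pipeline(
    {"video_watch_time": -0.42, "quiz_attempts": 0.31},
    theory="Contrastive Explanation",
)
print(feedback)
```

The zero-shot, chained design means each stage is a single prompt whose output feeds the next, with the social science theory swapped in at the selection stage to produce the eight variants evaluated in the paper.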

Type
conference paper
DOI
10.1609/aaai.v39i27.35065
Author(s)
Swamy, Vinitra (EPFL)
Romano, Davide
Desikan, Bhargav (Institute for Public Policy Research)
Camburu, Oana-Maria (University College London)
Käser, Tanja (EPFL)

Date Issued
2025-02-25
Publisher
Association for the Advancement of Artificial Intelligence
Published in
Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence
ISBN of the book
978-1-57735-897-8
Series title/Series vol.
Proceedings of the AAAI Conference on Artificial Intelligence; 39
ISSN (of the series)
2374-3468
2159-5399

Subjects
eXplainable AI • LLM
Editorial or Peer reviewed
REVIEWED

Written at

EPFL

EPFL units
ML4ED  
AVP-E-LEARN  
Event name
The 39th Annual AAAI Conference on Artificial Intelligence
Event acronym
AAAI 2025
Event place
Philadelphia, Pennsylvania, USA
Event date
2025-02-25 - 2025-03-04

Available on Infoscience
September 12, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/241100
  • Contact
  • infoscience@epfl.ch


Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved