Infoscience

Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design

Swamy, Vinitra • Du, Sijia • Marras, Mirko • Käser, Tanja
March 13, 2023
LAK 2023: The 13th International Learning Analytics and Knowledge Conference

Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency. In this paper, we tackle this issue by implementing explainable AI methods for black-box neural networks. This work focuses on the context of online and blended learning and the use case of student success prediction models. We use a pairwise study design, enabling us to investigate controlled differences between pairs of courses. Our analyses cover five course pairs that differ in one educationally relevant aspect and two popular instance-based explainable AI methods (LIME and SHAP). We quantitatively compare the distances between the explanations across courses and methods. We then validate the explanations of LIME and SHAP with 26 semi-structured interviews of university-level educators regarding which features they believe contribute most to student success, which explanations they trust most, and how they could transform these insights into actionable course design decisions. Our results show that quantitatively, explainers significantly disagree with each other about what is important, and qualitatively, experts themselves do not agree on which explanations are most trustworthy. All code, extended results, and the interview protocol are provided at https://github.com/epfl-ml4ed/trusting-explainers.
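As a rough illustration of the quantitative comparison described in the abstract, the Python sketch below trains a stand-in classifier on synthetic course data, extracts LIME and SHAP attributions for a single student, and measures how far the two explanations diverge. Everything here is hypothetical: the feature names, the random-forest model, and the cosine-distance metric are illustrative assumptions, not the paper's pipeline (the authors' actual code is at https://github.com/epfl-ml4ed/trusting-explainers).

# Minimal sketch (assumptions labeled): comparing LIME and SHAP attributions
# for a hypothetical student-success classifier; not the paper's own models.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from scipy.spatial.distance import cosine
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical clickstream-style features; not the paper's feature set.
feature_names = ["total_clicks", "video_time", "quiz_attempts", "forum_posts"]
X = rng.random((500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.9).astype(int)  # synthetic pass/fail label

model = RandomForestClassifier(random_state=0).fit(X, y)
student = X[0]

def predict_pass(data):
    """Probability of the positive ('pass') class, treated as a black box."""
    return model.predict_proba(data)[:, 1]

# SHAP: model-agnostic KernelExplainer over a sampled background set.
shap_vals = shap.KernelExplainer(predict_pass, shap.sample(X, 50)).shap_values(student)

# LIME: local surrogate explanation for the same student.
lime_exp = LimeTabularExplainer(
    X, feature_names=feature_names, mode="classification"
).explain_instance(student, model.predict_proba, num_features=len(feature_names))
lime_vals = np.zeros(len(feature_names))
for idx, weight in lime_exp.as_map()[1]:
    lime_vals[idx] = weight

def normalize(v):
    """Convert signed attributions to a normalized importance profile."""
    v = np.abs(v)
    return v / v.sum() if v.sum() > 0 else v

# Cosine distance is one plausible disagreement measure; the paper reports
# its own distance analyses across five course pairs.
print(f"LIME-SHAP cosine distance: {cosine(normalize(shap_vals), normalize(lime_vals)):.3f}")

A distance near 0 would mean the two explainers rank features similarly for this student; larger values indicate the kind of disagreement the paper reports between explanation methods.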

Details
Type
conference paper not in proceedings
DOI
10.1145/3576050.3576147
Author(s)
Swamy, Vinitra  
Du, Sijia
Marras, Mirko  
Käser, Tanja  
Date Issued
2023-03-13
Number of pages
16

Subjects
Explainable AI • LIME • SHAP • Counterfactuals • MOOCs • LSTMs • Student Performance Prediction

Note
Accepted as a full paper at LAK 2023: The 13th International Learning Analytics and Knowledge Conference, March 13-17, 2023, Arlington, Texas, USA
Editorial or Peer reviewed
REVIEWED
Written at
EPFL

EPFL units
ML4ED  
AVP-E-LEARN  
Event name
LAK 2023: The 13th International Learning Analytics and Knowledge Conference
Event place
Arlington, Texas, USA
Event date
March 13-17, 2023

Available on Infoscience
December 17, 2022
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/193271