Who's Helping Who? When Students Use ChatGPT to Engage in Practice Lab Sessions
Little is understood about how chatbots powered by Large Language Models (LLMs) affect teaching and learning, or how to integrate them effectively into educational practice. This study examined whether and how using ChatGPT in a graduate-level robotics course affected performance and learning, using data from 64 students (40 ChatGPT users and 24 non-users). Regression analyses revealed complex interactions among ChatGPT use, task performance, and learning of course-related concepts: using ChatGPT significantly improved task performance, but not necessarily learning outcomes. Task performance positively correlated with learning only for students with low and medium prior knowledge who did not use ChatGPT, suggesting that performance translated into learning only for non-users. Clustering ChatGPT users' prompts identified three types of usage that differed in learning and performance. Although all ChatGPT users showed improved performance, Debuggers (who requested solutions and error fixes) outperformed the other clusters. In terms of learning, Conceptual Explorers (who sought to understand concepts, tasks, or code) had higher learning outcomes than Debuggers and Practical Developers (who exclusively asked for task solutions). The behaviors exhibited by students in the Practical Developer and Debugger clusters were therefore less likely to translate performance into conceptual understanding, whereas the Conceptual Explorers' behaviors were more conducive to learning. This empirical study improves our understanding of the complex dynamics among ChatGPT use, performance, and learning outcomes.