conference paper

A Comparative Analysis of Tools & Task Types for Measuring Computational Problem-Solving

Bumbacher, Engin • Brender, Jérôme • Davis, Richard Lee
2024
SIGCSE 2024: Proceedings of the 55th ACM Technical Symposium on Computer Science Education
ACM Technical Symposium on Computer Science Education (SIGCSE 2024)

How to measure students' Computational Problem-Solving (CPS) competencies is an ongoing research topic. Prevalent approaches vary by measurement tool (e.g., interactive programming, multiple-choice tests, or programming-independent tests) and task type (e.g., debugging problems or Parsons problems). However, few studies have examined these measurement tools themselves: their affordances and limitations, how they compare, and whether different task types elicit CPS competencies differently. Addressing these questions is necessary to understand how to design robust, generalizable, and effective measurement tools for CPS competencies. This paper presents an exploratory study that contributes to this research direction. It is part of a larger international project to develop an open-access formative assessment platform for CPS, which includes a novel authoring tool for a wide range of task types in interactive block-based programming. We used the tool to create an interactive programming experience with multiple task types and administered it to more than 300 secondary school students from different countries. We also administered a validated multiple-choice measurement of Computational Thinking with block-based programs. We focused on task complexity as a characteristic of task type, using a classification scheme based on task design features. Comparing students' performances on tasks of different complexity across the two measurement tools, we found that the multiple-choice measurement only partially predicts performance on the interactive programming tasks. Additionally, its predictive capacity varies significantly between task types of differing complexity.
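The abstract's central analysis (whether a multiple-choice score predicts interactive-task performance, and whether that relationship weakens with task complexity) can be illustrated with a small sketch. This is not the paper's actual analysis: the data are simulated and all column names ("mc_score", "task_score", "complexity") are invented for illustration; it simply shows the general shape of a per-complexity regression comparison.

```python
# Hypothetical sketch of a per-complexity-level regression, NOT the paper's
# actual method or data. It simulates student scores, then checks how well
# a multiple-choice score predicts interactive task performance at each
# complexity level.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 300  # roughly the study's sample size; rows here are simulated

df = pd.DataFrame({
    "mc_score": rng.uniform(0, 1, n),                       # multiple-choice score
    "complexity": rng.choice(["low", "medium", "high"], n), # task-complexity label
})
# Simulate a coupling that weakens as task complexity rises (an assumption
# made only so the sketch produces a visible pattern).
slope = df["complexity"].map({"low": 0.8, "medium": 0.5, "high": 0.2})
df["task_score"] = slope * df["mc_score"] + rng.normal(0, 0.2, n)

# One linear regression per complexity level: does predictive capacity
# (slope, R^2) vary between task types of differing complexity?
for level, group in df.groupby("complexity"):
    fit = stats.linregress(group["mc_score"], group["task_score"])
    print(f"{level:>6}: slope={fit.slope:.2f}, "
          f"R^2={fit.rvalue ** 2:.2f}, p={fit.pvalue:.3g}")
```

Under this simulation, R^2 shrinks from the low- to the high-complexity group, which is the pattern the abstract describes; the paper's own statistical model may well differ.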

Type: conference paper
DOI: 10.1145/3626253.3635547
Author(s): Bumbacher, Engin; Brender, Jérôme; Davis, Richard Lee
Date Issued: 2024
Publisher: Association for Computing Machinery
Publisher place: New York, United States
Published in: SIGCSE 2024: Proceedings of the 55th ACM Technical Symposium on Computer Science Education
ISBN of the book: 979-8-4007-0423-9
Volume: 2
Start page: 1
End page: 1567
Subjects: computational thinking • computing education • computer science education • student assessment • K-12 education • computational problem solving • formative assessment
URL: https://dl.acm.org/doi/proceedings/10.1145/3626252
Editorial or Peer reviewed: REVIEWED

Written at: EPFL
EPFL units: CHILI • AVP-E-LEARN • SCI-STI-FMO1
Event name: ACM Technical Symposium on Computer Science Education (SIGCSE 2024)
Event place: Portland, OR, USA
Event date: March 20-23, 2024

Available on Infoscience: April 4, 2024
Use this identifier to reference this record: https://infoscience.epfl.ch/handle/20.500.14299/207005