Infoscience
Conference paper

A Comparative Analysis of Tools & Task Types for Measuring Computational Problem-Solving

Bumbacher, Engin • Brender, Jérôme • Davis, Richard Lee
2024
SIGCSE 2024: Proceedings of the 55th ACM Technical Symposium on Computer Science Education
ACM Technical Symposium on Computer Science Education (SIGCSE 2024)

How to measure students' Computational Problem-Solving (CPS) competencies is an ongoing research topic. Prevalent approaches vary by measurement tool (e.g., interactive programming, multiple-choice tests, or programming-independent tests) and task type (e.g., debugging problems or Parsons problems). However, few studies have examined these measurement tools themselves: their affordances and limitations, how they compare, or whether different task types elicit CPS competencies differently. These questions need to be addressed to better understand how to design robust, generalizable, and effective measurement tools for CPS competencies. This paper presents an exploratory study that contributes to this research direction. It is part of a larger international project to develop an open-access formative assessment platform for CPS, which includes a novel authoring tool for a wide range of task types for interactive block-based programming. We used the tool to create an interactive programming experience with multiple task types and gave it to more than 300 secondary school students from different countries. We also administered a validated multiple-choice measurement of Computational Thinking with block-based programs. We focused on task complexity as a characteristic of task type, using a classification scheme based on task design features. Comparing students' performances on tasks of different complexity across the two measurement tools, we found that the multiple-choice measurement only partially predicts performance in the interactive programming task, and that its predictive capacity varies significantly between task types of differing complexity.

Files

  • Name: 3626253.3635547.pdf
  • Type: Publisher
  • Version: Published version
  • Access type: openaccess
  • License Condition: copyright
  • Size: 883.63 KB
  • Format: Adobe PDF
  • Checksum (MD5): 9f33ac11ba7d7180ac84f621f7c4f1d8
