Developing rule-based scoring methods that capture student progress in computational problem-solving in open, interactive tasks

  • Natalie Foster
  • Jessica L. Holmes
  • Emma Linsenmayer
  • Nathan Zoanetti
  • Huseyin Yildiz

Research output: Contribution to journal › Article › peer-review

Abstract

This paper describes the development and validation processes involved in defining evidence rules for computational problem-solving, one of two aspects measured in the PISA 2025 Learning in the Digital World (LDW) assessment. We developed two theory-driven, rule-based scoring approaches – the Expert Strategy and Quality Marker approaches – and applied them to a relatively complex and open computational problem-solving task targeting students' capacity to decompose a problem, recognise patterns and create generalisable solutions. The Expert Strategy approach extracts features of students' code aligned with an optimal task solution, whereas the Quality Marker approach rewards incremental progress both towards the task goal and in applying target practices. We compare the results of the two approaches with each other and with the results of a paired comparison validation study involving experts in the field of programming and computational thinking. We reflect on the practical implications of the results for PISA 2025 and provide directions for applying the methods to other task types.

Educational relevance statement: Assessment of students via complex, interactive and open-ended tasks in technology-rich environments, such as visual programming tasks, presents new challenges to assessors, who need to account for the infinite solution space and students' individual strategies for solving these problems. In large-scale and/or international assessment, challenges are compounded by a heterogeneous testing population, in terms of cultural and educational backgrounds and experience with digital tools and visual coding, and by the prohibitive costs of human scoring. Sophisticated automated scoring models are needed that can validly interpret evidence about students' skills and that can flexibly accommodate individual differences in solution pathways. This study contributes new knowledge to the existing literature in educational and learning sciences on assessment design by developing and validating rule-based scoring approaches that provide granular information about student abilities in computational problem-solving, that are appropriate for both large-scale summative and formative contexts, and that can handle different approaches to solving open tasks. Our approaches are supported by the results of an empirical validation exercise (a paired comparison study) involving experts from the field of programming and computational thinking.
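To make the contrast between the two scoring styles concrete, the sketch below caricatures them in Python. It is a hypothetical illustration only: the Submission representation, the block names, and the individual rules are our assumptions for exposition, not the actual LDW evidence rules, which operate on much richer code and log data.

```python
from dataclasses import dataclass


@dataclass
class Submission:
    """Toy stand-in for a student's visual-programming solution."""
    blocks: list        # sequence of block names, e.g. ["move", "repeat", "move"]
    reaches_goal: bool  # whether the program solves the task


def expert_strategy_score(sub, expert_features):
    """Expert Strategy style: credit features shared with an optimal solution."""
    return len(set(expert_features) & set(sub.blocks))


def quality_marker_score(sub):
    """Quality Marker style: partial credit for incremental progress and for
    evidence of target practices (here, loop use stands in for pattern
    recognition and generalisation)."""
    score = 0
    if sub.blocks:              # marker 1: any attempt at a solution
        score += 1
    if "repeat" in sub.blocks:  # marker 2: a target practice is applied
        score += 1
    if sub.reaches_goal:        # marker 3: the task goal is reached
        score += 1
    return score


student = Submission(blocks=["move", "repeat", "move"], reaches_goal=False)
print(expert_strategy_score(student, {"move", "repeat", "turn"}))  # -> 2
print(quality_marker_score(student))                               # -> 2
```

The key difference the sketch preserves is that the Expert Strategy score is anchored to a single optimal solution, while the Quality Marker score accumulates credit for partial progress regardless of which solution pathway the student takes.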
Original language: English
Journal: Learning and Individual Differences
Volume: 126
DOIs
Publication status: Published - 1 Feb 2026

Keywords

  • Automated scoring
  • Computational problem solving
  • Evidence-centred design
  • Large-scale assessment
  • PISA

Disciplines

  • Educational Assessment, Evaluation, and Research
