Abstract
The purpose of this paper is to present details of the National Assessment Program – Literacy and Numeracy (NAPLAN) Online automated scoring research program and its key outcomes. The research was designed to collect and evaluate empirical evidence on the feasibility and validity of automated scoring for NAPLAN writing assessments, drawing on a range of studies and analyses. The studies and analyses presented in this report used a broad, nationally stratified sample of over 11,000 essays across eight persuasive and four narrative writing prompts. The research also examined key aspects of NAPLAN writing assessments, including whether their underlying measurement construct, together with the design and implementation of the NAPLAN marking rubric, is conducive to automated scoring. Finally, the research investigated practical issues surrounding the potential implementation of automated scoring. Research findings demonstrated that the modern automated scoring system tested, when marking NAPLAN writing, provided the same level of reliability and consistency as that found between two independent sets of human markers. Further results showed that automated scoring is resilient to attempts to manipulate marking and that the latent structure of automated scores was the same as that of the human markers.
| Original language | English |
|---|---|
| Publication status | Published - Jan 2018 |
Disciplines
- Education
- Language and Literacy Education