Why Can’t It Mark This One? A Qualitative Analysis of Student Writing Rejected by an Automated Essay Scoring System

Nathanael Reinertsen

Research output: Contribution to journal › Article › peer-review

Abstract

The difference between how humans read and how Automated Essay Scoring (AES) systems process written language means that some student responses are comprehensible to human markers yet cannot be parsed by AES systems. This paper examines a number of pieces of student writing that were marked by trained human markers but subsequently rejected by an AES system during the development of a scoring model for the eWrite online writing assessment offered by the Australian Council for Educational Research. The features of these ‘unscoreable’ responses are examined through a qualitative analysis. The paper reports on the features common to a number of the rejected scripts and considers the appropriateness of the computer-generated error codes as descriptors of the writing. Finally, it discusses the implications of the results for teachers using AES in assessing writing.
Original language: English
Journal: English in Australia
Volume: 53
Issue number: 1
Publication status: Published - 2018

Keywords

  • Assessing writing
  • Automated essay scoring
  • Automated marking errors
  • Student writing
  • Written language

Disciplines

  • Educational Assessment, Evaluation, and Research