Abstract
The difference between how humans read and how Automatic Essay Scoring (AES) systems process written language means that some student responses are comprehensible to human markers yet cannot be parsed by AES systems. This paper examines a set of student writing samples that were marked by trained human markers but subsequently rejected by an AES system during the development of a scoring model for eWrite, an online writing assessment offered by the Australian Council for Educational Research. The features of these ‘unscoreable’ responses are examined through a qualitative analysis. The paper reports on the features common to a number of the rejected scripts and considers the appropriateness of the computer-generated error codes as descriptors of the writing. Finally, it considers the implications of the results for teachers using AES in the assessment of writing.
| Original language | English |
| --- | --- |
| Journal | English in Australia |
| Volume | 53 |
| Issue number | 1 |
| Publication status | Published - 2018 |
Keywords
- Assessing writing
- Automated essay scoring
- Automated marking errors
- Student writing
- Written language
Disciplines
- Educational Assessment, Evaluation, and Research