Impact of different item locations in common-item test equating

Research output: Contribution to conference › Presentation

Abstract

This study used real and simulated data to investigate the impact of item location in common-item test equating. The real data set comprised the responses of about 150,000 students to a generic skills test from a university survey in Colombia in 2011, in which each student was randomly assigned to one of four linked test forms. Each form consisted of two of three Critical Reasoning clusters and two of three Quantitative Reasoning clusters (20 items per cluster). An analysis using Item Response Theory showed clearly that items could become more difficult when located towards the end of the test. This change in item parameters could be due to test domain effects or student fatigue. Adjusting student performance for a test-form effect is a possible solution when test-taker ability is distributed equally across test forms. However, when test-taker ability cannot be assumed to be equivalent across the linked forms, it is suggested that the common items be located at similar, early positions in the forms.
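To illustrate the equating context described above, the following is a minimal Python sketch, not taken from the presentation, of mean-sigma linking of Rasch difficulty estimates for common items calibrated separately on two linked forms. All names and numbers are hypothetical; the second form's difficulties are shifted upward to mimic a position effect on items placed late in the booklet.

import numpy as np

# Hypothetical Rasch difficulty (b) estimates for the same 20 common
# items, calibrated separately on two linked forms.
rng = np.random.default_rng(0)
b_form_a = rng.normal(0.0, 1.0, 20)                    # cluster placed early
b_form_b = b_form_a + 0.35 + rng.normal(0, 0.05, 20)   # cluster placed late

# Mean-sigma linking: rescale Form B's difficulties onto Form A's
# scale via the common items, b_a ~ A * b_b + B.
A = b_form_a.std(ddof=1) / b_form_b.std(ddof=1)
B = b_form_a.mean() - A * b_form_b.mean()
b_form_b_linked = A * b_form_b + B

# The shift before linking estimates the position effect; after
# linking the common items should agree up to estimation noise.
print(f"mean shift before linking: {np.mean(b_form_b - b_form_a):+.3f}")
print(f"mean shift after linking:  {np.mean(b_form_b_linked - b_form_a):+.3f}")

Under random assignment of students to forms, as in the study, a systematic shift in the common items' difficulties before linking points to a position effect rather than an ability difference; linking would otherwise absorb that shift into the scale transformation.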
Original language: English
Publication status: Published - Jul 2013
Externally published: Yes
Event: 12th European Conference on Psychological Assessment (ECPA12)
Duration: 1 Jul 2013 → …

Conference

Conference: 12th European Conference on Psychological Assessment (ECPA12)
Period: 1/07/13 → …

Keywords

  • Critical reasoning
  • Data
  • Quantitative reasoning
  • Response theory
  • Responses
  • Skills
  • Student performance
  • Students
  • Test forms
  • Tests

Disciplines

  • Educational Assessment, Evaluation, and Research
