Automated scoring in context: Rapid assessment for placed students

Andrew Klobucar, Norbert Elliot, Perry Deess, Oleksandr Rudniy, Kamal Joshi

Research output: Contribution to journal › Article › peer-review


Abstract

This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the Criterion® Online Writing Evaluation Service, was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation, generalization, and consequence. Based on the results of our two-year study with students (N=1,482) at a public technological research university in the United States, we found that Criterion offered a defined writing construct congruent with established models, achieved acceptance among students and instructors, showed no statistically significant differences among ethnicity groups of sufficient sample size, correlated at acceptable levels with other writing measures, performed in a stable fashion, and enabled instructors to identify at-risk students and thereby increase their course success.

Original language: English (US)
Pages (from-to): 62-84
Number of pages: 23
Journal: Assessing Writing
Volume: 18
Issue number: 1
State: Published - Jan 2013

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Education
  • Linguistics and Language

Keywords

  • Automated essay scoring (AES)
  • Validation methods
  • Writing assessment
  • Writing placement

