Abstract
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the Criterion® Online Writing Evaluation Service, was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation, generalization, and consequence. Based on the results of our two-year study with students (N=1,482) at a public technological research university in the United States, we found that Criterion offered a defined writing construct congruent with established models, achieved acceptance among students and instructors, showed no statistically significant differences between ethnic groups of sufficient sample size, correlated at acceptable levels with other writing measures, performed in a stable fashion, and enabled instructors to identify at-risk students and thereby improve their course success.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 62-84 |
| Number of pages | 23 |
| Journal | Assessing Writing |
| Volume | 18 |
| Issue number | 1 |
| State | Published - Jan 2013 |
All Science Journal Classification (ASJC) codes
- Language and Linguistics
- Education
- Linguistics and Language
Keywords
- Automated essay scoring (AES)
- Validation methods
- Writing assessment
- Writing placement