Improving written-expression curriculum-based measurement feasibility with automated writing evaluation programs.


Authors: Milena A. Keller-Margulis, Michael Matta, Sterett H. Mercer

Language: English

Classification: 547.04 *Organonitrogen compounds

Publication information: United States : School Psychology (Washington, D.C.), 2025


Collection: NCBI

ID: 725985

Automated writing evaluation programs have emerged as feasible alternatives for scoring student writing. This study evaluated the accuracy, predictive validity, diagnostic accuracy, and bias of automated scores for Written-Expression Curriculum-Based Measurement (WE-CBM). A sample of 722 students in Grades 2-5 completed 3-min WE-CBM tasks during one school year. A subset of students also completed the state-mandated writing test the same year or 1 year later. Writing samples were hand-scored for four WE-CBM metrics, and a computer-based approach generated automated scores for the same four metrics. Findings indicate that simpler automated metrics, such as total words written and words spelled correctly, closely matched hand-calculated scores, while small differences were observed for more complex metrics, including correct word sequences and correct minus incorrect word sequences. Automated scores for simpler WE-CBM metrics also predicted performance on the state test similarly to hand-calculated scores. Finally, we found no evidence of bias associated with automated scores for African American and Hispanic students. Implications of using automated scores for educational decision making are discussed. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
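To illustrate the four WE-CBM metrics the abstract names, here is a minimal, hypothetical Python sketch. It is not the study's scoring system: the `score_we_cbm` function and the toy `dictionary` parameter are assumptions for illustration, and real CWS scoring also weighs grammar, punctuation, and capitalization, whereas this sketch checks spelling only.

```python
def score_we_cbm(text, dictionary):
    """Toy computation of four WE-CBM metrics (spelling-based only).

    `dictionary` is a set of acceptably spelled words; production
    automated scorers use full lexicons and grammar models instead.
    """
    words = text.lower().split()
    tww = len(words)                           # Total Words Written
    flags = [w in dictionary for w in words]
    wsc = sum(flags)                           # Words Spelled Correctly
    # Correct Word Sequences: adjacent pairs in which both words are
    # acceptable (real CWS also checks syntax and mechanics).
    cws = sum(a and b for a, b in zip(flags, flags[1:]))
    total_sequences = max(len(words) - 1, 0)
    # Correct minus Incorrect Word Sequences penalizes errors.
    ciws = cws - (total_sequences - cws)
    return {"TWW": tww, "WSC": wsc, "CWS": cws, "CIWS": ciws}
```

For example, scoring the sample "the dog ran hoem" against a dictionary containing "the", "dog", "ran", and "home" yields TWW = 4, WSC = 3, CWS = 2, and CIWS = 1, consistent with the abstract's point that simpler counts (TWW, WSC) are easier to automate reliably than sequence-based metrics.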

LIBRARY - HUTECH UNIVERSITY OF TECHNOLOGY, HO CHI MINH CITY

Tel: (028) 36225755 | Email: tt.thuvien@hutech.edu.vn

Copyright @2024 HUTECH LIBRARY