
FeedEval: Pedagogically Aligned Evaluation of LLM-Generated Essay Feedback

About

Going beyond the prediction of numerical scores, recent research in automated essay scoring has increasingly emphasized the generation of high-quality feedback that provides justification and actionable guidance. To mitigate the high cost of expert annotation, prior work has commonly relied on LLM-generated feedback to train essay assessment models. However, such feedback is often incorporated without explicit quality validation, resulting in the propagation of noise in downstream applications. To address this limitation, we propose FeedEval, an LLM-based framework for evaluating LLM-generated essay feedback along three pedagogically grounded dimensions: specificity, helpfulness, and validity. FeedEval employs dimension-specialized LLM evaluators trained on datasets curated in this study to assess multiple feedback candidates and select high-quality feedback for downstream use. Experiments on the ASAP++ benchmark show that FeedEval closely aligns with human expert judgments and that essay scoring models trained with FeedEval-filtered high-quality feedback achieve superior scoring performance. Furthermore, revision experiments using small LLMs show that the high-quality feedback identified by FeedEval leads to more effective essay revisions. We will release our code and curated datasets upon acceptance.
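The abstract describes FeedEval's core loop: dimension-specialized evaluators score each feedback candidate on specificity, helpfulness, and validity, and only high-scoring feedback is passed downstream. As a rough illustration, here is a minimal Python sketch of that selection step; the evaluator interface, the mean-score aggregation rule, and the 0.5 threshold are all assumptions, since the authors' code is not yet released.

```python
# Minimal sketch of FeedEval-style feedback selection; not the authors' code.
# `Evaluator`, `select_best_feedback`, and the threshold are illustrative
# assumptions; the paper's evaluators are LLM judges trained per dimension.
from typing import Callable, Dict, List, Optional

# An evaluator maps (essay, feedback) to a quality score, assumed in [0, 1].
Evaluator = Callable[[str, str], float]

def select_best_feedback(
    essay: str,
    candidates: List[str],
    evaluators: Dict[str, Evaluator],  # keys: "specificity", "helpfulness", "validity"
    threshold: float = 0.5,  # assumed cutoff, not specified in the abstract
) -> Optional[str]:
    """Score every candidate on each dimension and return the best one.

    A candidate qualifies only if it clears the threshold on all three
    dimensions; among qualifying candidates, the highest mean score wins.
    Returns None when no candidate passes the quality filter.
    """
    best, best_mean = None, -1.0
    for feedback in candidates:
        scores = [judge(essay, feedback) for judge in evaluators.values()]
        if all(s >= threshold for s in scores):
            mean = sum(scores) / len(scores)
            if mean > best_mean:
                best, best_mean = feedback, mean
    return best
```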

Seongyeub Chu, Jongwoo Kim, Munyong Yi • 2026

Related benchmarks

Task                    Dataset                        Result                   Rank
Essay Scoring           ASAP-SAS                       QWK (Prompt 3): 0.661    10
Essay Scoring           ASAP++ (five-fold averaged)    Overall Score: 0.712     10
Human Expert Alignment  FeedEval                       --                       6
