
Confusion-Aware Rubric Optimization for LLM-based Automated Grading

About

Accurate and unambiguous guidelines are critical for large language model (LLM) based graders, yet manually crafting these prompts is often sub-optimal as LLMs can misinterpret expert guidelines or lack necessary domain specificity. Consequently, the field has moved toward automated prompt optimization to refine grading guidelines without the burden of manual trial and error. However, existing frameworks typically aggregate independent and unstructured error samples into a single update step, resulting in "rule dilution" where conflicting constraints weaken the model's grading logic. To address these limitations, we introduce Confusion-Aware Rubric Optimization (CARO), a novel framework that enhances accuracy and computational efficiency by structurally separating error signals. CARO leverages the confusion matrix to decompose monolithic error signals into distinct modes, allowing for the diagnosis and repair of specific misclassification patterns individually. By synthesizing targeted "fixing patches" for dominant error modes and employing a diversity-aware selection mechanism, the framework prevents guidance conflict and eliminates the need for resource-heavy nested refinement loops. Empirical evaluations on teacher education and STEM datasets demonstrate that CARO significantly outperforms existing SOTA methods. These results suggest that replacing mixed-error aggregation with surgical, mode-specific repair yields robust improvements in automated assessment scalability and precision.
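The core diagnostic step described above — decomposing a monolithic error signal into distinct misclassification modes via the confusion matrix — can be illustrated with a minimal sketch. The function and labels below are hypothetical, not CARO's actual implementation: it simply tallies off-diagonal (gold, predicted) pairs and surfaces the dominant error modes that mode-specific "fixing patches" would then target.

```python
from collections import Counter

def dominant_error_modes(gold, pred, top_k=2):
    """Tally off-diagonal confusion-matrix cells and return the
    most frequent misclassification patterns as (gold, predicted) pairs."""
    # Count each (gold, predicted) pair where the grader disagreed
    # with the reference label; diagonal cells (correct grades) are skipped.
    confusions = Counter((g, p) for g, p in zip(gold, pred) if g != p)
    return confusions.most_common(top_k)

# Hypothetical item-response grades on a 0-2 scale.
gold = [2, 1, 0, 2, 1, 2, 0, 1, 2, 1]
pred = [1, 1, 0, 1, 2, 1, 0, 1, 1, 1]

# Dominant mode: responses deserving a 2 are under-graded as 1.
print(dominant_error_modes(gold, pred))  # → [((2, 1), 4), ((1, 2), 1)]
```

In a full optimization loop, each dominant mode would receive its own targeted rubric patch rather than folding all errors into one aggregate update, which is the "rule dilution" the abstract describes.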

Yucheng Chu, Hang Li, Kaiqi Yang, Yasemin Copur-Gencturk, Joseph Krajcik, Namsoo Shin, Jiliang Tang • 2026

Related benchmarks

Task | Dataset | Result | Rank
Binary Classification | Interaction Dataset (DI) | Accuracy: 98 | 39
Classification | Teacher Education Dataset DT 1.0 (test) | Accuracy: 79 | 36
Item Response Grading | Elementary Item Response Dataset DE (test) | Accuracy: 81 | 18
Classification | DT | Accuracy: 72 | 3
Classification | Elementary Item Response Dataset DE | Accuracy: 73 | 3
Classification | Overall (across all datasets) | Accuracy: 78 | 3
