
QG-SMS: Enhancing Test Item Analysis via Student Modeling and Simulation

About

While the Question Generation (QG) task has been increasingly adopted in educational assessments, its evaluation remains limited by approaches that lack a clear connection to the educational values of test items. In this work, we introduce test item analysis, a method frequently used by educators to assess test question quality, into QG evaluation. Specifically, we construct pairs of candidate questions that differ in quality across dimensions such as topic coverage, item difficulty, item discrimination, and distractor efficiency. We then examine whether existing QG evaluation approaches can effectively distinguish these differences. Our findings reveal significant shortcomings in these approaches with respect to accurately assessing test item quality in relation to student performance. To address this gap, we propose a novel QG evaluation framework, QG-SMS, which leverages Large Language Models (LLMs) for Student Modeling and Simulation to perform test item analysis. As demonstrated in our extensive experiments and human evaluation study, the additional perspectives introduced by the simulated student profiles lead to a more effective and robust assessment of test items.
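The item-analysis dimensions named in the abstract (difficulty, discrimination, distractor efficiency) have standard definitions in classical test theory, which can be sketched as below. This is an illustrative sketch, not the paper's implementation: the function names, the upper/lower-group split fraction, and the 5% functionality threshold are assumptions chosen for the example.

```python
from collections import Counter
import numpy as np

def item_difficulty(responses: np.ndarray) -> np.ndarray:
    """Classical difficulty index: proportion of students answering each
    item correctly (rows = students, columns = items, entries 0/1)."""
    return responses.mean(axis=0)

def item_discrimination(responses: np.ndarray, top: float = 0.27) -> np.ndarray:
    """Upper-lower discrimination index: per-item difficulty among the
    highest-scoring students minus that among the lowest-scoring students.
    The 27% split is a common convention, assumed here for illustration."""
    totals = responses.sum(axis=1)                  # total score per student
    n = max(1, int(round(top * len(totals))))       # group size
    order = np.argsort(totals, kind="stable")       # ascending by total score
    low, high = responses[order[:n]], responses[order[-n:]]
    return high.mean(axis=0) - low.mean(axis=0)

def distractor_functionality(choices, correct, threshold=0.05):
    """Mark each distractor as 'functional' if at least `threshold` of
    examinees selected it; rarely chosen distractors add little value."""
    counts = Counter(choices)
    n = len(choices)
    return {opt: counts[opt] / n >= threshold
            for opt in counts if opt != correct}
```

A question pair differing in quality, as constructed in the paper, would then show up as a gap in these statistics, e.g. a near-zero discrimination index for the weaker item.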

Bang Nguyen, Tingting Du, Mengxia Yu, Lawrence Angrave, Meng Jiang • 2025

Related benchmarks

| Task | Dataset | Agreement Accuracy (AA) | Rank |
|---|---|---|---|
| Discrimination | EduAgent (test) | 66.39 | 10 |
| Discrimination | DBE-KT (test) | 66.66 | 10 |
| Distractor Effectiveness | EduAgent (test) | 0.7933 | 10 |
| Topic Coverage | EduAgent (test) | 98.85 | 10 |
| Difficulty | EduAgent (test) | 68.55 | 10 |
| Difficulty | DBE-KT (test) | 69.44 | 10 |
| Topic Coverage | DBE-KT (test) | 79.9 | 10 |
| Question Generation Evaluation | EduAgent HumanQs (test) | 76.67 | 8 |
| Question Generation Evaluation | EduAgent GenQs (test) | 74.17 | 7 |
