LMUnit: Fine-grained Evaluation with Natural Language Unit Tests

About

As language models become integral to critical workflows, assessing their behavior remains a fundamental challenge: human evaluation is costly and noisy, while automated metrics provide only coarse, difficult-to-interpret signals. We introduce natural language unit tests, a paradigm that decomposes response quality into explicit, testable criteria, along with a unified scoring model, LMUnit, which combines multi-objective training across preferences, direct ratings, and natural language rationales. Through controlled human studies, we show this paradigm significantly improves inter-annotator agreement and enables more effective LLM development workflows. LMUnit achieves state-of-the-art performance on evaluation benchmarks (FLASK, BigGenBench) and competitive results on RewardBench. These results validate both our proposed paradigm and scoring model, suggesting a promising path forward for language model evaluation and development.
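The core idea is straightforward to express in code. Below is a minimal, hypothetical sketch of the natural language unit test paradigm: response quality is decomposed into explicit criteria, each phrased as a testable question and scored independently by a judge model. The `UNIT_TESTS` criteria, prompt format, 1-5 scale, and `judge_fn` interface are all illustrative assumptions, not LMUnit's actual prompts or API.

```python
# Illustrative sketch only -- the criteria, prompt format, and judge
# interface below are assumptions, not LMUnit's actual implementation.
from typing import Callable

# Response quality decomposed into explicit, testable criteria,
# each expressed as a natural language unit test.
UNIT_TESTS = [
    "Does the response directly answer the user's question?",
    "Is every factual claim in the response accurate?",
    "Is the response free of unnecessary repetition?",
]

def score_response(
    query: str,
    response: str,
    judge_fn: Callable[[str], float],
) -> dict[str, float]:
    """Score a response against each unit test with a judge model.

    `judge_fn` stands in for a scoring model such as LMUnit: it takes
    a prompt and returns a scalar quality score (assumed 1-5 here).
    """
    scores = {}
    for test in UNIT_TESTS:
        prompt = (
            f"Query: {query}\n"
            f"Response: {response}\n"
            f"Unit test: {test}\n"
            "Rate how well the response satisfies the unit test on a 1-5 scale."
        )
        scores[test] = judge_fn(prompt)
    return scores

if __name__ == "__main__":
    # Stub judge for demonstration; in practice judge_fn would call
    # a scoring model like LMUnit.
    stub_judge = lambda prompt: 4.0
    results = score_response("What causes tides?", "The Moon's gravity.", stub_judge)
    for test, score in results.items():
        print(f"{score:.1f}  {test}")
```

Scoring each criterion separately is what yields the fine-grained, interpretable signal the abstract describes, as opposed to a single coarse overall rating.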

Jon Saad-Falcon, Rajan Vivek, William Berrios, Nandita Shankar Naik, Matija Franklin, Bertie Vidgen, Amanpreet Singh, Douwe Kiela, Shikib Mehri · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reward Modeling | RewardBench v2 (test) | Average Score | 82.1 | 42 |
| Pair-wise Comparison | RewardBench | Accuracy | 93.45 | 29 |
| Reward Model Evaluation | RewardBench 2 | Factuality | 87.2 | 13 |
| Pairwise Ranking | LFQA | Pairwise Preference Accuracy | 76.53 | 13 |
| Direct Assessment | FLASK | Pearson Correlation Coefficient | 0.7203 | 12 |
| Direct Assessment | BiGGen-Bench | Pearson Correlation Coefficient | 67.69 | 12 |
| Model Performance Evaluation | Table 1 Aggregate excluding Human-Internal | Average Score | 79.74 | 12 |
| Classification | InfoBench | Binary Accuracy | 91.26 | 12 |
| Classification | Human-Internal | Binary Accuracy | 94.14 | 10 |
