
Learned-Rule-Augmented Large Language Model Evaluators

About

Large language models (LLMs) are predominantly used as evaluators for natural language generation (NLG) tasks, but their application to broader evaluation scenarios remains limited. In this work, we explore the potential of LLMs as general evaluators across diverse tasks. Although LLM-based evaluators have made progress in different areas, existing methods struggle to generalize due to their reliance on costly, human-designed evaluation principles, which are often misaligned with both annotated data and LLMs' understanding. To address these challenges, we propose a rule-augmented evaluation paradigm. First, we introduce a rule distillation method that automatically extracts scoring rules from data using an LLM-assisted Monte Carlo Tree Search (MCTS), alleviating scalability issues and improving alignment with data. Second, to enable LLMs to effectively apply the learned rules, we propose two strategies: (1) Chain-of-Rule (CoR), which guides the LLM to follow the distilled rules, and (2) training a rule-augmented LLM evaluator (RuAE) via reinforcement learning, further bridging the gap between rules and LLMs' reasoning. Extensive experiments on diverse tasks demonstrate the effectiveness and generalizability of our approach across various evaluation scenarios.
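To make the Chain-of-Rule idea concrete, here is a minimal sketch (not the authors' code) of how distilled scoring rules could be prepended to an evaluation prompt so the LLM applies each rule in turn before scoring. The function name, rule texts, and score scale are all illustrative assumptions; the paper's actual prompt format and rule wording may differ.

```python
# Hedged sketch of Chain-of-Rule (CoR) style prompting: distilled rules are
# listed explicitly, and the model is asked to reason rule by rule before
# emitting a final score. All names and rule texts here are hypothetical.

def build_cor_prompt(task, rules, source, candidate, scale=(1, 5)):
    """Assemble an evaluation prompt that walks the model through each rule."""
    rule_lines = "\n".join(f"{i}. {r}" for i, r in enumerate(rules, 1))
    return (
        f"You are evaluating a {task} output.\n"
        f"Apply each scoring rule below in order, note whether the candidate "
        f"satisfies it, then give a final score from {scale[0]} to {scale[1]}.\n\n"
        f"Rules:\n{rule_lines}\n\n"
        f"Source:\n{source}\n\n"
        f"Candidate:\n{candidate}\n\n"
        f"Reason rule by rule, then answer with 'Score: <n>'."
    )

# Example rules of the kind a distillation step might produce (invented here).
rules = [
    "Penalize statements not supported by the source.",
    "Reward coverage of the source's main points.",
    "Penalize repeated or redundant sentences.",
]
prompt = build_cor_prompt(
    "summarization",
    rules,
    source="The city council approved the new budget on Tuesday.",
    candidate="The council passed the budget.",
)
print(prompt)
```

The resulting string would then be sent to an LLM; the distilled rules replace hand-written evaluation principles, which is what lets the same prompt skeleton transfer across tasks.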

Jie Meng, Jin Mao • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Summarization Evaluation | SummEval | Avg Spearman Rho | 0.6 | 40
Automatic Text Evaluation | ASAP | QWK | 0.379 | 15
Automatic Text Evaluation | Relish | mAP | 32 | 15
Automatic Text Evaluation | AMAZON | MAE | 0.209 | 15
