
Conversation for Non-verifiable Learning: Self-Evolving LLMs through Meta-Evaluation

About

Training large language models (LLMs) for non-verifiable tasks, such as creative writing, dialogue, and ethical reasoning, remains challenging due to the absence of ground-truth labels. While LLM-as-Judge approaches offer a scalable alternative to human feedback, they face a fundamental limitation: performance is constrained by the evaluator's own quality. If the judge cannot recognize good solutions, it cannot provide useful training signals, and evaluation biases (e.g., favoring verbosity over quality) remain unaddressed. This motivates meta-evaluation: the ability to evaluate and improve the evaluator itself. We introduce CoNL, a framework that unifies generation, evaluation, and meta-evaluation through multi-agent self-play. Our key insight: critique quality can be measured by whether it helps others improve their solutions. In CoNL, multiple agents sharing the same policy engage in structured conversations to propose, critique, and revise solutions. Critiques that enable solution improvements earn a diagnostic reward, creating explicit supervision for meta-evaluation and enabling joint optimization of generation and judging capabilities through self-play, without external judges or ground truth. Experiments on five benchmarks show that CoNL achieves consistent improvements over self-rewarding baselines while maintaining stable training.
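The diagnostic-reward idea above can be sketched as a toy loop. This is a hypothetical illustration, not the paper's implementation: the functions `propose`, `critique`, `revise`, and `score` are illustrative stand-ins for calls to the shared policy, and the length-based quality proxy is purely for demonstration.

```python
# Hypothetical sketch of the CoNL-style critique loop: a critique earns a
# diagnostic reward only if applying it improves the solution's score.
# All function names here are illustrative assumptions, not the paper's API.

def propose(task: str) -> str:
    # Stand-in generator: returns an initial draft solution.
    return f"draft answer to: {task}"

def critique(solution: str) -> str:
    # Stand-in critic: suggests a concrete improvement.
    return "add a worked example"

def revise(solution: str, feedback: str) -> str:
    # Stand-in reviser: incorporates the critique into the solution.
    return f"{solution} [revised per: {feedback}]"

def score(solution: str) -> float:
    # Toy quality proxy (longer = better). In CoNL the shared policy
    # itself evaluates solutions; this stand-in just makes the loop run.
    return float(len(solution))

def diagnostic_reward(task: str) -> float:
    """Reward the critic only when its critique enabled an improvement."""
    draft = propose(task)
    feedback = critique(draft)
    revised = revise(draft, feedback)
    improvement = score(revised) - score(draft)
    return max(0.0, improvement)

print(diagnostic_reward("explain gradient descent") > 0)  # prints True
```

In this toy setup the revision always lengthens the draft, so the critic is rewarded; in the actual framework, a critique that fails to improve the solution would earn zero diagnostic reward, giving explicit supervision for judging quality.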

Yuan Sui, Bryan Hooi • 2026

Related benchmarks

Task                    Dataset               Result        Rank
Mathematical Reasoning  AIME 2024 (test)      --            103
Mathematical Reasoning  AIME 2025 (test)      Pass@1 73.5   47
Mathematical Reasoning  DeepMind-Mathematics  Pass@1 87.1   22
Scientific Reasoning    GPQA                  Pass@1 79.2   22
Coding                  USACO                 Pass@1 19.5   4
Scientific Reasoning    FrontierSci           Pass@1 55.7   4
