Multi-Agent Evolve: LLM Self-Improve through Co-evolution
About
Reinforcement Learning (RL) has demonstrated significant potential in enhancing the reasoning capabilities of large language models (LLMs). However, the success of RL for LLMs heavily relies on human-curated datasets and verifiable rewards, which limits its scalability and generality. Recent self-play RL methods, inspired by the paradigm's success in games such as Go, aim to enhance LLM reasoning capabilities without human-annotated data. However, these methods primarily depend on a grounded environment for feedback (e.g., a Python interpreter or a game engine), and extending them to general domains remains challenging. To address these challenges, we propose Multi-Agent Evolve (MAE), a framework that enables LLMs to self-evolve in solving diverse tasks, including mathematics, reasoning, and general-knowledge Q&A. The core design of MAE is a triplet of interacting agents (Proposer, Solver, Judge) instantiated from a single LLM, with reinforcement learning applied to optimize their behaviors. The Proposer generates questions, the Solver attempts solutions, and the Judge evaluates both, and all three roles co-evolve through this interaction. Experiments on Qwen2.5-3B-Instruct demonstrate that MAE achieves an average improvement of 4.54% on multiple benchmarks. These results highlight MAE as a scalable, data-efficient method for enhancing the general reasoning abilities of LLMs with minimal reliance on human-curated supervision.
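The sketch below illustrates the Proposer-Solver-Judge interaction described above. It is a minimal, hedged reading of the framework, not the paper's implementation: `llm_generate` is a placeholder for the single shared model, the Judge's score is stubbed with a random value so the example runs without model weights, and the reward shaping is an illustrative assumption.

```python
# Minimal sketch of a Proposer-Solver-Judge loop instantiated from one LLM.
# `llm_generate`, the reward shaping, and the role prompts are assumptions
# made for illustration; they are not MAE's exact design.

import random
from dataclasses import dataclass


@dataclass
class Episode:
    question: str
    answer: str
    proposer_reward: float
    solver_reward: float


def llm_generate(prompt: str) -> str:
    """Placeholder for sampling from the shared base LLM (e.g. Qwen2.5-3B-Instruct)."""
    return f"<model output for: {prompt[:40]}...>"


def propose_question(topic: str) -> str:
    # Proposer role: the model writes a new task about `topic`.
    return llm_generate(f"As the Proposer, write one challenging {topic} question.")


def solve_question(question: str) -> str:
    # Solver role: the same model attempts an answer to the proposed question.
    return llm_generate(f"As the Solver, answer the question: {question}")


def judge(question: str, answer: str) -> float:
    # Judge role: the same model scores the answer. The score is stubbed here
    # with a random number so the sketch runs without model weights.
    _ = llm_generate(f"As the Judge, rate this answer to '{question}': {answer}")
    return random.random()  # stand-in for a parsed 0-1 quality score


def run_iteration(topic: str) -> Episode:
    question = propose_question(topic)
    answer = solve_question(question)
    score = judge(question, answer)
    # Illustrative reward shaping (an assumption): the Solver is rewarded for
    # judged answer quality, the Proposer for questions that are neither
    # trivial nor unsolvable for the current Solver.
    solver_reward = score
    proposer_reward = 1.0 - abs(score - 0.5) * 2.0
    return Episode(question, answer, proposer_reward, solver_reward)


if __name__ == "__main__":
    episodes = [run_iteration("mathematics") for _ in range(4)]
    # In MAE, episodes like these would feed an RL update on the single shared
    # model; here we only print the collected rewards.
    for ep in episodes:
        print(f"proposer_reward={ep.proposer_reward:.2f} solver_reward={ep.solver_reward:.2f}")
```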
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval+ | Pass@1 | 76.2 | 383 |
| Mathematical Reasoning | AIME 2024 | Pass@1 Accuracy | 13.3 | 165 |
| Mathematical Reasoning | AIME 2025 | Pass@1 Accuracy | 13.3 | 118 |
| Mathematical Reasoning | AMC 2023 | Pass@1 | 70 | 67 |
| Code Generation | LiveCodeBench v1-5 | Pass@1 | 24.2 | 12 |
| Reasoning Performance Aggregation | Aggregate Benchmarks (Code & Math) | Code Component Average | 55.2 | 12 |