More Agents Is All You Need
About
We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. This method, termed Agent Forest, is orthogonal to existing complicated methods that further enhance LLMs, and the degree of enhancement is correlated with the task difficulty. We conduct comprehensive experiments on a wide range of LLM benchmarks to verify the presence of our finding, and to study the properties that can facilitate its occurrence. Our code is publicly available at: https://github.com/MoreAgentsIsAllYouNeed/AgentForest
Junyou Li, Qin Zhang, Yangbin Yu, Qiang Fu, Deheng Ye • 2024
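The core idea is straightforward: sample several independent answers from LLM agents and return the majority answer. The sketch below is a minimal, hedged illustration of that sampling-and-voting loop; `query_llm`, its parameters, and the fake answer distribution are placeholders, not the paper's actual implementation.

```python
# Minimal sketch of sampling-and-voting ("more agents"): query the model
# several times and take a majority vote over the returned answers.
# NOTE: query_llm is a stand-in for any real LLM API call; here it returns
# a fake noisy answer so the example runs on its own.
from collections import Counter
import random


def query_llm(prompt: str, temperature: float = 0.7) -> str:
    # Placeholder: replace with a real chat/completion API call.
    return random.choice(["42", "42", "41"])


def sample_and_vote(prompt: str, num_agents: int = 10) -> str:
    """Sample `num_agents` independent answers and return the most common one."""
    answers = [query_llm(prompt) for _ in range(num_agents)]
    majority_answer, _count = Counter(answers).most_common(1)[0]
    return majority_answer


if __name__ == "__main__":
    print(sample_and_vote("What is 6 * 7?", num_agents=15))
```

In practice the vote can use exact-match answers (as here) or a task-specific similarity measure when answers are free-form text.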
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Text Classification | AG News (test) | Accuracy | 82.47 | 210 |
| Arithmetic Reasoning | GSM8K | Accuracy | 86.8 | 155 |
| Arithmetic Reasoning | GSM8K (test) | Accuracy | 77.4 | 129 |
| Instruction Following | AlpacaEval | Win Rate | 40.5 | 125 |
| Text Classification | TREC (test) | Accuracy | 73.2 | 113 |
| Mathematical Reasoning | MAWPS (test) | Accuracy | 92.4 | 87 |
| Text Classification | IMDB (test) | Accuracy | 94.18 | 77 |
| Multi-task Language Understanding | MMLU (test) | Normalized Accuracy | 60.92 | 76 |
| Arithmetic Reasoning | AQuA (test) | Accuracy | 60.9 | 58 |
| Arithmetic Reasoning | SVAMP (test) | Accuracy | 86.7 | 54 |
Showing 10 of 23 benchmark rows.