
Auto-Arena: Automating LLM Evaluations with Agent Peer Battles and Committee Discussions

About

As LLMs continuously evolve, there is an urgent need for a reliable evaluation method that delivers trustworthy results promptly. Currently, static benchmarks suffer from inflexibility and unreliability, leading users to prefer human voting platforms like Chatbot Arena. However, human evaluation requires significant manual effort. To address this, we propose Auto-Arena, an innovative framework that automates the entire evaluation process using LLM-powered agents. First, an LLM examiner generates questions. Then, two LLM candidates engage in a multi-round peer battle on each question, aiming to reveal their true performance differences. Finally, a committee of LLM judges collaboratively discusses and decides the winner, reducing bias and enhancing fairness. During the peer battles, we observe intriguing scenarios where the LLM candidates display competitive behaviors and even learn from their opponents. In our extensive experiments involving 15 recent LLMs, Auto-Arena shows a 92.14% correlation with human preferences, surpassing all previous expert-annotated benchmarks without any manual effort. As a result, Auto-Arena offers a promising alternative to current human evaluation platforms for evaluating LLMs automatically.
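The three-stage pipeline described in the abstract (examiner, peer battle, judge committee) can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: all LLM calls are replaced with deterministic stubs, and every function and parameter name here is an assumption for illustration.

```python
# Sketch of the Auto-Arena pipeline: examiner -> peer battle -> judge committee.
# All LLM calls are stubbed; names and signatures are illustrative assumptions.
from collections import Counter

def examiner_generate_question(topic):
    # Stub: a real system would prompt an examiner LLM for a question here.
    return f"Explain the key trade-offs in {topic}."

def candidate_respond(candidate, question, history):
    # Stub: a real system would query the candidate LLM, conditioning on the
    # battle history so it can rebut or improve on the opponent's answer.
    return f"{candidate}'s answer to '{question}' (turn {len(history) + 1})"

def judge_vote(judge, transcript, candidates):
    # Stub: a real judge LLM would read the full transcript and discuss with
    # the other judges; here we vote deterministically from the text.
    score = sum(ord(ch) for ch in judge + transcript)
    return candidates[score % len(candidates)]

def peer_battle(question, cand_a, cand_b, rounds=2):
    # Multi-round battle: candidates answer in alternation, seeing the history.
    history = []
    for _ in range(rounds):
        history.append((cand_a, candidate_respond(cand_a, question, history)))
        history.append((cand_b, candidate_respond(cand_b, question, history)))
    return history

def committee_decide(judges, history, candidates):
    # Committee stage: each judge casts a vote; majority decides the winner.
    transcript = " | ".join(answer for _, answer in history)
    votes = Counter(judge_vote(j, transcript, candidates) for j in judges)
    winner, _ = votes.most_common(1)[0]
    return winner

question = examiner_generate_question("retrieval-augmented generation")
history = peer_battle(question, "model_A", "model_B", rounds=2)
winner = committee_decide(["judge_1", "judge_2", "judge_3"], history,
                          ["model_A", "model_B"])
```

With 2 rounds the battle produces 4 turns (two per candidate), and an odd-sized committee guarantees a majority winner between the two candidates.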

Ruochen Zhao, Wenxuan Zhang, Yew Ken Chia, Weiwen Xu, Deli Zhao, Lidong Bing • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Alignment with Human Preferences | Chatbot Arena (English-only) | Spearman Correlation: 91.67 | 9 |
| Correlation analysis with human preferences | Chatbot Arena (15 LLMs, after extension) | Spearman Correlation: 0.9214 | 7 |

Other info

Code
