
Synthesizing Post-Training Data for LLMs through Multi-Agent Simulation

About

Post-training is essential for enabling large language models (LLMs) to follow human instructions. However, its effectiveness depends on high-quality instruction data, which is challenging to obtain in the real world due to privacy concerns, data scarcity, and high annotation costs. To fill this gap, and inspired by the recent success of using LLMs to simulate human society, we propose MATRIX, a multi-agent simulator that automatically generates diverse text-based scenarios, capturing a wide range of real-world human needs in a realistic and scalable manner. Leveraging these outputs, we introduce MATRIX-Gen, a novel scenario-driven instruction generator for controllable and highly realistic data synthesis. Extensive experiments demonstrate that our framework effectively generates both general and domain-specific data. On the AlpacaEval 2 and Arena-Hard benchmarks, Llama-3-8B-Base, post-trained on a dataset of just 20K instruction-response pairs synthesized by MATRIX-Gen, outperforms Meta's Llama-3-8B-Instruct model, which was trained on over 10M pairs.
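The abstract describes a two-stage pipeline: a multi-agent simulator (MATRIX) produces textual scenarios reflecting diverse human needs, and a generator (MATRIX-Gen) converts each scenario into an instruction for post-training data. The sketch below is a minimal, hypothetical illustration of that flow; the agent profiles, templates, and function names are stand-ins invented for this example, not the paper's actual implementation (which drives both stages with LLMs).

```python
import random

# Illustrative stand-ins for simulated agent attributes; the real MATRIX
# simulator derives scenarios from LLM-driven multi-agent interactions.
PROFILES = ["teacher", "software engineer", "patient", "travel blogger"]
NEEDS = [
    "ask for an explanation of a concept",
    "request a step-by-step plan",
    "get feedback on a short piece of code",
    "compare two options and pick one",
]

def simulate_scenario(rng: random.Random) -> str:
    """Stage 1 (MATRIX, placeholder): one agent interaction reduced to
    a text-based scenario describing a realistic human need."""
    agent = rng.choice(PROFILES)
    need = rng.choice(NEEDS)
    return f"A {agent} wants to {need} in a realistic everyday situation."

def scenario_to_instruction(scenario: str) -> str:
    """Stage 2 (MATRIX-Gen, placeholder): derive an instruction
    grounded in the simulated scenario."""
    return f"Given this scenario: '{scenario}', write the user's request."

def synthesize_dataset(n_pairs: int, seed: int = 0) -> list[dict]:
    """Run the simulate-then-generate loop n_pairs times."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_pairs):
        scenario = simulate_scenario(rng)
        data.append({
            "scenario": scenario,
            "instruction": scenario_to_instruction(scenario),
        })
    return data

if __name__ == "__main__":
    for pair in synthesize_dataset(3):
        print(pair["instruction"])
```

Separating scenario simulation from instruction generation is what makes the synthesis controllable: domain-specific data comes from steering the simulator's scenarios rather than rewriting the generator.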

Shuo Tang, Xianghe Pang, Zexi Liu, Bohan Tang, Rui Ye, Tian Jin, Xiaowen Dong, Yanfeng Wang, Siheng Chen · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Code Generation | HumanEval | Pass@1 | 7.93e+3 | 850 |
| Instruction Following | IFEval | - | - | 292 |
| Instruction Following | AlpacaEval 2.0 | - | - | 281 |
| Mathematical Reasoning | MATH 500 | Pass@1 | 71.4 | 153 |
| Mathematical Reasoning | GSM8K | EM | 88.7 | 115 |
| Mathematical Reasoning | MATH | Pass@1 | 73.6 | 112 |
| Instruction Following | Arena Hard | Win Rate | 43.2 | 77 |
| LLM Alignment Evaluation | Arena Hard | Win Rate | 22.7 | 67 |
| Science Reasoning | GPQA | Pass@1 | 17.18 | 35 |
| Multitask Language Understanding | MMLU | Pass@1 | 71.9 | 24 |

Showing 10 of 12 rows
