
Synthesizing Post-Training Data for LLMs through Multi-Agent Simulation

About

Post-training is essential for enabling large language models (LLMs) to follow human instructions. However, its effectiveness depends on high-quality instruction data, which is challenging to obtain in the real world due to privacy concerns, data scarcity, and high annotation costs. To fill this gap, inspired by the recent success of using LLMs to simulate human society, we propose MATRIX, a multi-agent simulator that automatically generates diverse text-based scenarios, capturing a wide range of real-world human needs in a realistic and scalable manner. Leveraging these outputs, we introduce MATRIX-Gen, a scenario-driven instruction generator for controllable and highly realistic data synthesis. Extensive experiments demonstrate that our framework effectively generates both general and domain-specific data. On the AlpacaEval 2 and Arena-Hard benchmarks, Llama-3-8B-Base post-trained on just 20K instruction-response pairs synthesized by MATRIX-Gen outperforms Meta's Llama-3-8B-Instruct model, which was trained on over 10M pairs.
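The abstract describes a two-stage pipeline: a multi-agent simulator (MATRIX) produces text-based scenarios, and a scenario-driven generator (MATRIX-Gen) turns each scenario into an instruction-response pair. The sketch below illustrates that flow only in outline; all class and function names are illustrative assumptions, not the paper's actual API, and a stub stands in for the LLM so the example runs without a model.

```python
from dataclasses import dataclass
from typing import Callable, List

# Stage 1 (illustrative): a multi-agent simulator emits textual scenarios
# reflecting the agents' goals.
@dataclass
class Agent:
    name: str
    goal: str

def simulate_scenario(agents: List[Agent], llm: Callable[[str], str]) -> str:
    """Combine the agents' goals into a prompt and ask the LLM for a scenario."""
    roster = "; ".join(f"{a.name} wants to {a.goal}" for a in agents)
    return llm(f"Write a short realistic scenario involving: {roster}")

# Stage 2 (illustrative): a scenario-driven generator converts each scenario
# into an instruction-response pair suitable for post-training.
def synthesize_pair(scenario: str, llm: Callable[[str], str]) -> dict:
    instruction = llm(f"From this scenario, write one user instruction:\n{scenario}")
    response = llm(f"Answer the instruction helpfully:\n{instruction}")
    return {"instruction": instruction, "response": response}

# Stub LLM so the sketch is self-contained; swap in a real model client.
def stub_llm(prompt: str) -> str:
    return f"[model output for: {prompt}]"

agents = [Agent("Alice", "plan a budget trip"), Agent("Bob", "learn Python")]
scenario = simulate_scenario(agents, stub_llm)
pair = synthesize_pair(scenario, stub_llm)
```

Each synthesized pair can then be collected into a post-training dataset; the paper's result suggests that scenario grounding makes a small set of such pairs (20K) competitive with much larger instruction corpora.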

Shuo Tang, Xianghe Pang, Zexi Liu, Bohan Tang, Rui Ye, Tian Jin, Xiaowen Dong, Yanfeng Wang, Siheng Chen • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | Pass@1 | 7.93e+3 | 1036 |
| Instruction Following | IFEval | -- | -- | 625 |
| Instruction Following | AlpacaEval 2.0 | Win Rate | 25.85 | 507 |
| Mathematical Reasoning | MATH 500 | Pass@1 | 71.4 | 239 |
| Mathematical Reasoning | GSM8K | EM | 88.7 | 123 |
| Mathematical Reasoning | MATH | Pass@1 | 73.6 | 112 |
| Instruction Following | Arena Hard | Win Rate | 43.2 | 103 |
| LLM Alignment Evaluation | Arena Hard | Win Rate | 22.7 | 73 |
| Science Reasoning | GPQA | Pass@1 | 17.18 | 50 |
| Mathematical Reasoning | AIME | Pass@1 | 13.33 | 44 |
(Showing 10 of 12 rows.)
