
Unleashing LLM Reasoning Capability via Scalable Question Synthesis from Scratch

About

Improving the mathematical reasoning capabilities of Large Language Models (LLMs) is critical for advancing artificial intelligence. However, access to extensive, diverse, and high-quality reasoning datasets remains a significant challenge, particularly for the open-source community. In this paper, we propose ScaleQuest, a novel, scalable, and cost-effective data synthesis method that enables the generation of large-scale mathematical reasoning datasets using lightweight 7B-scale models. ScaleQuest introduces a two-stage question-tuning process comprising Question Fine-Tuning (QFT) and Question Preference Optimization (QPO) to unlock the question generation capabilities of problem-solving models. By generating diverse questions from scratch -- without relying on powerful proprietary models or seed data -- we produce a dataset of 1 million problem-solution pairs. Our experiments demonstrate that models trained on our data outperform those trained on existing open-source datasets in both in-domain and out-of-domain evaluations. Furthermore, our approach shows continued performance improvement as the volume of training data increases, highlighting its potential for ongoing data scaling. The extensive improvements observed on code reasoning tasks demonstrate the generalization capabilities of our proposed method. Our work provides the open-source community with a practical solution for enhancing the mathematical reasoning abilities of LLMs.
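The synthesis pipeline described above -- sample new questions from scratch with a question-tuned generator, filter them, then have a solver model produce solutions -- can be sketched as follows. This is a minimal illustration, not the paper's implementation: `sample_question` and `solve` are hypothetical stubs standing in for the QFT/QPO-tuned 7B question generator and the problem-solving model.

```python
import random

def sample_question(rng):
    # Stand-in for a QFT/QPO-tuned model sampling a fresh math question
    # from scratch (no seed problem). Stubbed with a template here.
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    return f"What is {a} * {b}?"

def solve(question):
    # Stand-in for a solver model producing a solution to the question.
    a, _, b = question.removeprefix("What is ").removesuffix("?").split()
    return str(int(a) * int(b))

def synthesize(n_pairs, seed=0):
    """Generate n_pairs deduplicated question-solution pairs."""
    rng = random.Random(seed)
    seen, pairs = set(), []
    while len(pairs) < n_pairs:
        q = sample_question(rng)
        if q in seen:  # simple dedup filter; the real pipeline also
            continue   # filters for solvability and difficulty
        seen.add(q)
        pairs.append((q, solve(q)))
    return pairs

pairs = synthesize(10)
```

The key point the sketch captures is that question generation is decoupled from solution generation, so each stage can use a small, cheap model rather than a proprietary teacher.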

Yuyang Ding, Xinyu Shi, Xiaobo Liang, Juntao Li, Zhaopeng Tu, Qiaoming Zhu, Min Zhang • 2024

Related benchmarks

Task                    Dataset         Metric              Result  Rank
Mathematical Reasoning  MATH            Accuracy            73.4    643
Mathematical Reasoning  GSM8K           Accuracy            93      212
Mathematical Reasoning  GSM-Hard        Solve Rate          66.3    162
Mathematical Reasoning  CollegeMATH     Accuracy            50      161
Mathematical Reasoning  Olympiad Bench  Pass@1 Accuracy     38.5    115
Mathematical Reasoning  MATH 500        Accuracy            91      106
Mathematical Reasoning  AIME 24         Accuracy            53.3    84
Code Reasoning          HumanEval       Score               86.6    35
Code Reasoning          MBPP            Execution Accuracy  83.1    33
Mathematical Reasoning  GSM8K (test)    Accuracy (5-shot)   38.9    5
(Showing 10 of 13 benchmark rows.)

Other info

Code
