
Augmenting Math Word Problems via Iterative Question Composing

About

Despite the advancements in large language models (LLMs) for mathematical reasoning, solving competition-level math problems remains a significant challenge, especially for open-source LLMs without external tools. We introduce the MMIQC dataset, comprising a mixture of processed web data and synthetic question-response pairs, aimed at enhancing the mathematical reasoning capabilities of base language models. Models fine-tuned on MMIQC consistently surpass their counterparts in performance on the MATH benchmark across various model sizes. Notably, Qwen-72B-MMIQC achieves 45.0% accuracy, exceeding the previous open-source state of the art by 8.2 percentage points and outperforming the initial version of GPT-4 released in 2023. Extensive evaluation results on Hungarian high school finals suggest that this improvement generalizes to unseen data. Our ablation study on MMIQC reveals that a large part of the improvement can be attributed to our novel augmentation method, Iterative Question Composing (IQC), which iteratively composes new questions from seed problems using an LLM and applies rejection sampling through another LLM.
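The IQC procedure described above can be sketched as a simple loop: a composing model derives a new question (with a reference answer) from each seed problem, an answering model generates candidate responses, and rejection sampling keeps only responses whose final answer agrees with the reference. The sketch below is illustrative only; all function names, signatures, and control-flow details are assumptions, not the authors' implementation.

```python
from typing import Callable, List, Tuple

def iterative_question_composing(
    seed_problems: List[Tuple[str, str]],   # (question, answer) seed pairs
    compose: Callable[[str], Tuple[str, str]],       # composing LLM (assumed interface)
    sample_response: Callable[[str], str],           # answering LLM (assumed interface)
    answers_match: Callable[[str, str], bool],       # answer checker (assumed interface)
    n_iters: int = 3,
    n_samples: int = 4,
) -> List[Tuple[str, str]]:
    """One hypothetical IQC run: returns augmented (question, response) pairs."""
    dataset: List[Tuple[str, str]] = []
    frontier = list(seed_problems)
    for _ in range(n_iters):
        next_frontier = []
        for question, _answer in frontier:
            # Ask the composing LLM for a new question derived from the current
            # one, together with its reference answer.
            new_q, ref_answer = compose(question)
            # Rejection sampling: keep only responses from the answering LLM
            # whose final answer agrees with the reference answer.
            kept = [r for r in (sample_response(new_q) for _ in range(n_samples))
                    if answers_match(r, ref_answer)]
            if kept:
                dataset.append((new_q, kept[0]))
                # Accepted questions become seeds for the next iteration.
                next_frontier.append((new_q, ref_answer))
        frontier = next_frontier
    return dataset

# Toy demo with deterministic stand-ins for the two LLMs.
seeds = [("What is 2 + 3?", "5")]
compose = lambda q: (q.replace("+", "*"), "6")    # derives a related question
sample_response = lambda q: "The answer is 6"     # answering model stub
answers_match = lambda resp, ref: ref in resp
data = iterative_question_composing(seeds, compose, sample_response,
                                    answers_match, n_iters=1)
```

In the paper's setting the stubs would be replaced by actual model calls, and iterating lets each round's accepted questions seed the next, growing the synthetic portion of MMIQC.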

Haoxiong Liu, Yifan Zhang, Yifan Luo, Andrew Chi-Chih Yao • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Mathematical Reasoning | GSM8K | Accuracy: 79 | 983 |
| Mathematical Reasoning | GSM8K (test) | Accuracy: 89.3 | 751 |
| Mathematical Reasoning | MATH | Accuracy: 45.3 | 643 |
| Mathematical Reasoning | MATH (test) | Overall Accuracy: 49.4 | 433 |
| Mathematical Reasoning | CollegeMATH | Accuracy: 35.3 | 161 |
| Mathematical Reasoning | Olympiad Bench | Pass@1 Accuracy: 13 | 115 |
| Mathematical Reasoning | MATH | Pass@1: 45.3 | 112 |
| Mathematical Reasoning | OlympiadBench Math | Accuracy: 13 | 84 |
| Mathematical Reasoning | CollegeMath (test) | Accuracy: 37.6 | 61 |
| Mathematical Reasoning | OlympiadBench Math (test) | Accuracy: 15.3 | 59 |

Showing 10 of 19 rows.

Other info

Code
