
Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning

About

Supervised fine-tuning enhances the problem-solving abilities of language models across various mathematical reasoning tasks. To maximize such benefits, existing research focuses on broadening the training set with various data augmentation techniques, which is effective for standard single-round question-answering settings. Our work introduces a novel technique aimed at cultivating a deeper understanding of the training problems at hand, enhancing performance not only in standard settings but also in more complex scenarios that require reflective thinking. Specifically, we propose reflective augmentation, a method that embeds problem reflection into each training instance. It trains the model to consider alternative perspectives and engage with abstractions and analogies, thereby fostering thorough comprehension through reflective reasoning. Extensive experiments confirm that our method achieves this goal, underscoring its unique advantages and its complementary nature relative to existing augmentation techniques.
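The core idea described above can be sketched in code: each training instance is extended with a trailing reflection that revisits the problem through an abstraction or analogy, so the model learns to keep reasoning past the final answer. This is a minimal illustrative sketch based only on the abstract; the function name, data fields, and reflection text are assumptions, not the authors' actual data format or implementation.

```python
def build_reflective_instance(question: str, answer: str, reflection: str) -> dict:
    """Combine an original QA pair with a trailing reflection section,
    producing one augmented supervised fine-tuning example.

    The model's training target is the original solution followed by a
    reflection that reconsiders the problem (e.g. via abstraction or analogy).
    """
    target = f"{answer}\n\nReflection: {reflection}"
    return {"input": question, "target": target}


# Hypothetical example (not from the paper's data):
example = build_reflective_instance(
    question="Tom has 3 boxes with 4 apples in each box. How many apples does he have?",
    answer="3 * 4 = 12, so Tom has 12 apples.",
    reflection=(
        "Abstraction: for n equal groups of size k, the total is n * k; "
        "the same template solves any equal-grouping problem."
    ),
)
```

Training on `example["input"]` / `example["target"]` pairs (rather than the plain QA pair) is what the abstract refers to as embedding problem reflection into each instance.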

Zhihan Zhang, Tao Ge, Zhenwen Liang, Wenhao Yu, Dian Yu, Mengzhao Jia, Dong Yu, Meng Jiang • 2024

Related benchmarks

Task                     Dataset              Metric             Result   Rank
Mathematical Reasoning   GSM8K                Accuracy           71.6     983
Mathematical Reasoning   GSM8K (test)         Accuracy           84.1     797
Mathematical Reasoning   MATH                 Accuracy           33.1     643
Mathematical Reasoning   MATH (test)          Overall Accuracy   56.4     433
Mathematical Reasoning   SVAMP                Accuracy           84.3     368
Mathematical Reasoning   ASDIV                Accuracy           0.924    221
Mathematical Reasoning   MAWPS                Accuracy           93.2     219
Mathematical Reasoning   CollegeMATH          Accuracy           36.2     161
Mathematical Reasoning   MATH (test)          Pass@1             42.5     151
Mathematical Reasoning   OlympiadBench Math   Accuracy           10.5     84

(Showing 10 of 22 rows)
