
Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models

About

Large Language Models (LLMs) exhibit impressive performance across various domains but still struggle with arithmetic reasoning tasks. Recent work shows the effectiveness of prompt design methods in enhancing reasoning capabilities. However, these approaches overlook the crucial requirement for prior knowledge of specific concepts, theorems, and tricks needed to tackle most arithmetic reasoning problems successfully. To address this issue, we propose a novel and effective Teaching-Inspired Integrated Framework, which emulates the instructional process of a teacher guiding students. This method equips LLMs with essential concepts, relevant theorems, and similar problems with analogous solution approaches, thereby enhancing their reasoning abilities. Additionally, we introduce two new Chinese datasets, MathMC and MathToF, both with detailed explanations and answers. Experiments conducted on nine benchmarks demonstrate that our approach improves the reasoning accuracy of LLMs. With GPT-4 and our framework, we achieve new state-of-the-art performance on four math benchmarks (AddSub, SVAMP, Math23K and AQuA) with accuracies of 98.2% (+3.3%), 93.9% (+0.2%), 94.3% (+7.2%) and 81.1% (+1.2%). Our data and code are available at https://github.com/SallyTan13/Teaching-Inspired-Prompting.
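To make the idea concrete, here is a minimal sketch of how a teaching-inspired prompt might be assembled. This is purely illustrative: the function name, parameters, and example content are assumptions, and the actual framework retrieves the relevant concepts, theorems, and similar solved problems automatically rather than taking them as hand-written inputs.

```python
def build_teaching_prompt(question, concepts, theorems, similar_examples):
    """Assemble a prompt that supplies background knowledge before the
    question, mimicking a teacher who reviews prerequisites and works
    through analogous problems before posing a new one.

    NOTE: illustrative sketch only; not the paper's actual implementation.
    """
    parts = []
    if concepts:
        parts.append("Relevant concepts:\n" +
                     "\n".join(f"- {c}" for c in concepts))
    if theorems:
        parts.append("Relevant theorems:\n" +
                     "\n".join(f"- {t}" for t in theorems))
    # Similar problems with analogous solution approaches serve as
    # worked examples, as in few-shot prompting.
    for i, (ex_q, ex_sol) in enumerate(similar_examples, 1):
        parts.append(f"Similar problem {i}: {ex_q}\nSolution: {ex_sol}")
    parts.append(f"Now solve the following problem step by step:\n{question}")
    return "\n\n".join(parts)


prompt = build_teaching_prompt(
    question="A train travels 120 km in 2 hours. What is its average speed?",
    concepts=["average speed = total distance / total time"],
    theorems=[],
    similar_examples=[("A car covers 90 km in 3 hours; find its speed.",
                       "speed = 90 / 3 = 30 km/h")],
)
print(prompt)
```

The assembled string would then be sent as the prompt to an LLM such as GPT-4; the key design choice is that the background knowledge precedes the target question, so the model conditions on it while reasoning.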

Wenting Tan, Dongxiao Chen, Jieting Xue, Zihao Wang, Taijie Chen• 2024

Related benchmarks

Task                      Dataset      Metric     Result   Rank
Mathematical Reasoning    SVAMP        Accuracy   93.9     368
Arithmetic Reasoning      MultiArith   Accuracy   99.0     181
Mathematical Reasoning    AQuA         Accuracy   81.1     132
Arithmetic Reasoning      AddSub       Accuracy   98.2     76
Mathematical Reasoning    Math23K      Accuracy   94.3     5
Mathematical Reasoning    MathMC       Accuracy   92.2     4
Mathematical Reasoning    MathToF      Accuracy   89.2     4
