Large Language Models as Analogical Reasoners
About
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks, but typically requires labeled exemplars of the reasoning process. In this work, we introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models. Inspired by analogical reasoning, a cognitive process in which humans draw on relevant past experiences to tackle new problems, our approach prompts language models to self-generate relevant exemplars or knowledge in context before proceeding to solve the given problem. This method offers several advantages: it obviates the need for labeling or retrieving exemplars, providing generality and convenience; and it tailors the generated exemplars and knowledge to each problem, providing adaptability. Experimental results show that our approach outperforms 0-shot CoT and manual few-shot CoT on a variety of reasoning tasks, including math problem solving on GSM8K and MATH, code generation on Codeforces, and other reasoning tasks in BIG-Bench.
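The core idea, a single prompt that asks the model to first recall relevant exemplars and then solve the target problem, can be sketched as a prompt-building function. This is an illustrative sketch: the instruction wording and the `build_analogical_prompt` helper are assumptions for demonstration, not the paper's verbatim template.

```python
def build_analogical_prompt(problem: str, n_exemplars: int = 3) -> str:
    """Assemble an analogical prompt: the model is asked to self-generate
    relevant exemplars in context before solving the given problem.
    (Illustrative wording, not the paper's exact template.)"""
    return (
        f"Problem: {problem}\n\n"
        "Instructions:\n"
        f"1. Recall {n_exemplars} relevant and distinct problems. "
        "For each, describe the problem and explain its solution.\n"
        "2. Then solve the initial problem step by step.\n"
    )

# The resulting string would be sent to an LLM as a single 0-shot prompt,
# with no labeled exemplars supplied by the user.
prompt = build_analogical_prompt(
    "What is the area of a square with a side length of 5?"
)
print(prompt)
```

Because the exemplars are generated by the model itself, they adapt to each input problem, which is the adaptability advantage the abstract describes.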
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K (test) | Accuracy | 90.7 | 797 |
| Reasoning | BBH | -- | -- | 507 |
| Code Generation | MBPP (test) | -- | -- | 276 |
| Arithmetic Reasoning | GSM8K | Accuracy | 87.6 | 155 |
| Commonsense Question Answering | CSQA (test) | Accuracy | 0.708 | 127 |
| Long-context Reasoning | LongBench | Score | 53.4 | 62 |
| Question Answering | GPQA (test) | Accuracy | 31.6 | 55 |
| Multi-hop Reasoning | MuSiQue | EM | 33.1 | 41 |
| Mathematical Reasoning | MATH | EM | 65.6 | 38 |
| Counterfactual Reasoning | MMLU CF | EM | 66.1 | 30 |