LLMs cannot find reasoning errors, but can correct them given the error location
About
While self-correction has shown promise in improving LLM outputs in terms of style and quality (e.g. Chen et al., 2023b; Madaan et al., 2023), recent attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performance overall (Huang et al., 2023). In this paper, we show that poor self-correction performance stems from LLMs' inability to find logical mistakes, rather than their ability to correct a known mistake. Firstly, we benchmark several state-of-the-art LLMs on their mistake-finding ability and demonstrate that they generally struggle with the task, even in highly objective, unambiguous cases. Secondly, we test the correction abilities of LLMs -- separately from mistake finding -- using a backtracking setup that feeds ground-truth mistake location information to the model. We show that this boosts downstream task performance across our 5 reasoning tasks, indicating that LLMs' correction abilities are robust. Finally, we show that it is possible to obtain mistake location information without ground-truth labels or in-domain training data. We train a small classifier with out-of-domain data, which exhibits stronger mistake-finding performance than prompting a large model. We release our dataset of LLM-generated logical mistakes, BIG-Bench Mistake, to enable further research into locating LLM reasoning mistakes.
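The backtracking setup described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `generate_step` model call and the `[DONE]` stop marker are hypothetical stand-ins for whatever sampling interface and stopping criterion are actually used. The core idea is simply to discard the trace from the known mistake location onward and regenerate from that point.

```python
from typing import Callable, List

def backtrack_correct(
    trace: List[str],
    mistake_index: int,
    generate_step: Callable[[List[str]], str],
    max_new_steps: int = 10,
) -> List[str]:
    """Given a chain-of-thought trace and the index of the first
    mistaken step, keep only the steps before the mistake and
    regenerate the rest of the trace with the model."""
    corrected = trace[:mistake_index]  # steps before the mistake are kept
    for _ in range(max_new_steps):
        step = generate_step(corrected)  # hypothetical model call
        corrected.append(step)
        if step.strip().endswith("[DONE]"):  # hypothetical stop marker
            break
    return corrected
```

Because the mistake location is supplied externally (ground truth in the paper's controlled experiments, or a trained classifier's prediction), the model never has to find its own error, only to continue correctly from a clean prefix.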
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | NoRa Irrelevant Rationales | Accuracy | 37.8 | 40 |
| Commonsense Reasoning | Commonsense | -- | -- | 29 |
| Mathematical Reasoning | Math Base-9 | Accuracy | 82.4 | 20 |
| Mathematical Reasoning (Base-9) | NoRa Inaccurate Rationales | Accuracy | 26.7 | 20 |
| Symbolic Reasoning (Equations) | NoRa Irrelevant Rationales | Accuracy | 29.3 | 20 |
| Symbolic Reasoning (Equations) | NoRa Inaccurate Rationales | Accuracy | 28.7 | 20 |
| Mathematical Reasoning | Math Base-11 | Accuracy (P_clean, Avg) | 24.3 | 5 |
| Symbolic Reasoning | Symbolic Equal | Accuracy (Clean, Avg) | 31.8 | 5 |
| Symbolic Reasoning | Symbolic Longer | Accuracy (Clean, Avg) | 0.072 | 5 |