
LLMs cannot find reasoning errors, but can correct them given the error location

About

While self-correction has shown promise in improving LLM outputs in terms of style and quality (e.g. Chen et al., 2023b; Madaan et al., 2023), recent attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performance overall (Huang et al., 2023). In this paper, we show that poor self-correction performance stems from LLMs' inability to find logical mistakes, rather than an inability to correct a known mistake. Firstly, we benchmark several state-of-the-art LLMs on their mistake-finding ability and demonstrate that they generally struggle with the task, even in highly objective, unambiguous cases. Secondly, we test the correction abilities of LLMs -- separately from mistake finding -- using a backtracking setup that feeds ground truth mistake location information to the model. We show that this boosts downstream task performance across our 5 reasoning tasks, indicating that LLMs' correction abilities are robust. Finally, we show that it is possible to obtain mistake location information without ground truth labels or in-domain training data. We train a small classifier with out-of-domain data, which exhibits stronger mistake-finding performance than prompting a large model. We release our dataset of LLM-generated logical mistakes, BIG-Bench Mistake, to enable further research into locating LLM reasoning mistakes.
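The backtracking setup described above can be sketched in a few lines: keep the trace up to the first mistake, re-sample that step, and let generation continue. This is a minimal illustration, not the paper's implementation; it assumes the chain-of-thought is a list of step strings, `mistake_idx` comes from labels or a classifier, and `regenerate` (a hypothetical stand-in here) wraps an LLM call sampled at temperature > 0 so the replacement can differ from the original step.

```python
from typing import Callable, List

def backtrack(steps: List[str], mistake_idx: int,
              regenerate: Callable[[List[str]], str]) -> List[str]:
    """Keep the steps before the first mistake, then re-sample from there.

    steps        -- a chain-of-thought trace split into discrete steps
    mistake_idx  -- 0-based location of the first incorrect step (from
                    ground-truth labels or a mistake-location classifier)
    regenerate   -- stand-in for an LLM call that produces a replacement
                    step given the trusted prefix
    """
    prefix = steps[:mistake_idx]      # trusted steps are kept verbatim
    new_step = regenerate(prefix)     # re-sample the mistaken step
    return prefix + [new_step]        # generation then continues as usual

# Toy stand-in for the model: replaces the wrong arithmetic step.
trace = ["x = 15", "y = 7", "x + y = 23"]          # step at index 2 is wrong
fixed = backtrack(trace, 2, lambda prefix: "x + y = 22")
print(fixed)  # ['x = 15', 'y = 7', 'x + y = 22']
```

Because only the suffix from the mistake onward is regenerated, steps already known to be correct are never at risk of being "corrected" into errors, which is the failure mode of unconstrained self-correction.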

Gladys Tyen, Hassan Mansoor, Victor Cărbune, Peter Chen, Tony Mak • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Common Sense Reasoning | NoRa Irrelevant Rationales | Accuracy: 37.8 | 40 |
| Commonsense Reasoning | Commonsense | -- | 29 |
| Mathematical Reasoning | Math Base-9 | Accuracy: 82.4 | 20 |
| Mathematical Reasoning (Base-9) | NoRa Inaccurate Rationales | Accuracy: 26.7 | 20 |
| Symbolic Reasoning (Equations) | NoRa Irrelevant Rationales | Accuracy: 29.3 | 20 |
| Symbolic Reasoning (Equations) | NoRa Inaccurate Rationales | Accuracy: 28.7 | 20 |
| Mathematical Reasoning | Math Base-11 | Accuracy (P_clean, Avg): 24.3 | 5 |
| Symbolic Reasoning | Symbolic Equal | Accuracy (Clean, Avg): 31.8 | 5 |
| Symbolic Reasoning | Symbolic Longer | Accuracy (Clean, Avg): 0.072 | 5 |
