
Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?

About

This paper investigates an under-explored challenge in large language models (LLMs): chain-of-thought prompting with noisy rationales, i.e., irrelevant or inaccurate reasoning steps within the examples used for in-context learning. We construct the NoRa dataset, tailored to evaluate the robustness of reasoning in the presence of noisy rationales. Our findings on NoRa reveal a prevalent vulnerability to such noise among current LLMs, with existing robust methods like self-correction and self-consistency showing limited efficacy. Notably, compared to prompting with clean rationales, the base LLM's accuracy drops by 1.4%-19.8% with irrelevant thoughts and, more drastically, by 2.2%-40.4% with inaccurate thoughts. Addressing this challenge requires external supervision that is accessible in practice. To this end, we propose contrastive denoising with noisy chain-of-thought (CD-CoT). It enhances LLMs' denoising-reasoning capability by contrasting noisy rationales with only one clean rationale, which can be regarded as the minimal supervision for denoising-purpose prompting. The method follows a principle of exploration and exploitation: (1) rephrasing and selecting rationales in the input space to achieve explicit denoising, and (2) exploring diverse reasoning paths and voting on answers in the output space. Empirically, CD-CoT improves accuracy by 17.8% on average over the base model and shows significantly stronger denoising capability than baseline methods. The source code is publicly available at: https://github.com/tmlr-group/NoisyRationales.
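The two-stage procedure described above (denoise in the input space, then sample and vote in the output space) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `llm` callable, the prompt wording, and the random selection of rephrased rationales are all assumptions made here for brevity (the actual method scores and selects candidates rather than sampling them at random).

```python
import random
from collections import Counter
from typing import Callable, List


def cd_cot_sketch(
    llm: Callable[[str], str],       # hypothetical: maps a prompt to a completion
    noisy_rationales: List[str],
    clean_rationale: str,            # the single clean example CD-CoT assumes
    question: str,
    n_rephrase: int = 3,
    n_select: int = 2,
    n_paths: int = 5,
    seed: int = 0,
) -> str:
    """Sketch of contrastive denoising with noisy chain-of-thought."""
    rng = random.Random(seed)

    # (1) Input space: rephrase each noisy rationale several times,
    # contrasting it with the one clean rationale to strip noisy steps.
    rephrased = [
        llm(
            f"Clean example:\n{clean_rationale}\n\n"
            f"Rewrite the following rationale in the same style, "
            f"removing irrelevant or inaccurate steps:\n{noisy}"
        )
        for noisy in noisy_rationales
        for _ in range(n_rephrase)
    ]

    # Select a subset of rephrased rationales to build a denoised prompt
    # (random here; the paper uses a more careful selection step).
    selected = rng.sample(rephrased, min(n_select, len(rephrased)))

    # (2) Output space: explore diverse reasoning paths and vote on answers.
    prompt = "\n\n".join(selected + [clean_rationale, f"Question: {question}"])
    answers = [llm(prompt) for _ in range(n_paths)]
    return Counter(answers).most_common(1)[0][0]
```

With a real model, diversity across the `n_paths` samples would come from nonzero sampling temperature; majority voting then plays the same role as in self-consistency, but over prompts that have already been explicitly denoised.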

Zhanke Zhou, Rong Tao, Jianing Zhu, Yiwen Luo, Zengmao Wang, Bo Han • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | NoRa Irrelevant Rationales | Accuracy | 57.7 | 40 |
| Commonsense Reasoning | Commonsense | -- | -- | 29 |
| Mathematical Reasoning | Math Base-9 | Accuracy | 92.7 | 20 |
| Mathematical Reasoning (Base-9) | NoRa Inaccurate Rationales | Accuracy | 76.7 | 20 |
| Symbolic Reasoning (Equations) | NoRa Irrelevant Rationales | Accuracy | 49.3 | 20 |
| Symbolic Reasoning (Equations) | NoRa Inaccurate Rationales | Accuracy | 53.3 | 20 |
| Symbolic Reasoning | Symbolic Equal | Acc (Clean, Avg) | 42.7 | 5 |
| Mathematical Reasoning | Math Base-11 | Accuracy (P_clean, Avg) | 31 | 5 |
| Symbolic Reasoning | Symbolic Longer | Accuracy (Clean, Avg) | 0.123 | 5 |

Other info

Code: https://github.com/tmlr-group/NoisyRationales