
Contrastive Chain-of-Thought Prompting

About

Despite the success of chain of thought in enhancing language model reasoning, the underlying process remains less well understood. Although logically sound reasoning appears inherently crucial for chain of thought, prior studies surprisingly reveal minimal impact when using invalid demonstrations instead. Furthermore, the conventional chain of thought does not inform language models on what mistakes to avoid, which potentially leads to more errors. Hence, inspired by how humans can learn from both positive and negative examples, we propose contrastive chain of thought to enhance language model reasoning. Compared to the conventional chain of thought, our approach provides both valid and invalid reasoning demonstrations, to guide the model to reason step-by-step while reducing reasoning mistakes. To improve generalization, we introduce an automatic method to construct contrastive demonstrations. Our experiments on reasoning benchmarks demonstrate that contrastive chain of thought can serve as a general enhancement of chain-of-thought prompting.
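The core idea — pairing each valid chain-of-thought demonstration with an invalid one so the model sees both correct reasoning and the mistakes to avoid — can be sketched as a simple prompt builder. The template wording and the arithmetic demonstration below are illustrative assumptions, not the paper's exact prompt format.

```python
# Sketch of contrastive chain-of-thought prompting: each few-shot
# demonstration carries both a valid rationale and an invalid one,
# so the model is shown what to do and what to avoid.

def build_contrastive_prompt(demonstrations, question):
    """Assemble a few-shot prompt from (question, valid_cot, invalid_cot, answer) tuples."""
    parts = []
    for q, valid_cot, invalid_cot, answer in demonstrations:
        parts.append(
            f"Question: {q}\n"
            f"Correct explanation: {valid_cot}\n"
            f"Wrong explanation: {invalid_cot}\n"
            f"Answer: {answer}\n"
        )
    # The target question ends with the cue for the model to continue
    # with a correct step-by-step rationale.
    parts.append(f"Question: {question}\nCorrect explanation:")
    return "\n".join(parts)

# Hypothetical demonstration: the invalid rationale swaps the operands
# and the operation, the kind of flawed reasoning the prompt contrasts against.
demo = (
    "Tom has 3 apples and buys 2 more. How many apples does he have?",
    "Tom starts with 3 apples and gains 2, so 3 + 2 = 5.",
    "Tom starts with 2 apples and gains 3, so 2 * 3 = 6.",
    "5",
)
prompt = build_contrastive_prompt(
    [demo], "Sam has 4 pens and loses 1. How many pens remain?"
)
print(prompt)
```

The contrast between the two explanations is what distinguishes this from standard chain-of-thought prompting; the automatic construction of invalid rationales (e.g. by perturbing entities or operations in a valid one) follows the same pairing structure.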

Yew Ken Chia, Guizhen Chen, Luu Anh Tuan, Soujanya Poria, Lidong Bing • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy: 38.7 | 1460 |
| Commonsense Reasoning | PIQA | Accuracy: 65.9 | 647 |
| Commonsense Reasoning | CSQA | Accuracy: 67 | 366 |
| Commonsense Reasoning | WinoGrande | Accuracy: 55.2 | 231 |
| Commonsense Reasoning | SIQA | Accuracy: 65 | 96 |
| Commonsense Reasoning | NoRa Irrelevant Rationales | Accuracy: 50.2 | 40 |
| Commonsense Reasoning | Commonsense | -- | 29 |
| Mathematical Reasoning (Base-9) | NoRa Inaccurate Rationales | Accuracy: 43.6 | 20 |
| Mathematical Reasoning | Math Base-9 | Accuracy: 67.5 | 20 |
| Symbolic Reasoning (Equations) | NoRa Irrelevant Rationales | Accuracy: 37.3 | 20 |

Showing 10 of 14 rows.
