
FACT-E: Causality-Inspired Evaluation for Trustworthy Chain-of-Thought Reasoning

About

Chain-of-Thought (CoT) prompting has improved LLM reasoning, but models often generate explanations that appear coherent while containing unfaithful intermediate steps. Existing self-evaluation approaches are prone to inherent biases: the model may confidently endorse coherence even when the step-to-step implication is not valid, leading to unreliable faithfulness evaluation. We propose FACT-E, a causality-inspired framework for evaluating CoT quality. FACT-E uses controlled perturbations as an instrumental signal to separate genuine step-to-step dependence from bias-driven artifacts, producing more reliable faithfulness estimates (intra-chain faithfulness). To select trustworthy trajectories, FACT-E jointly considers intra-chain faithfulness and CoT-to-answer consistency, ensuring that selected chains are both faithful internally and supportive of the correct final answer. Experiments on GSM8K, MATH, and CommonsenseQA show that FACT-E improves reasoning-trajectory selection and yields stronger in-context learning exemplars. FACT-E also reliably detects flawed reasoning under noisy conditions, providing a robust metric for trustworthy LLM reasoning.
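The selection criterion described above can be illustrated with a minimal sketch. This is not the authors' implementation: `toy_model`, `perturb`, and the scoring weights are hypothetical stand-ins, and the real framework queries an LLM rather than a toy arithmetic function. The sketch only shows the general idea of perturbing each step, checking whether the final answer depends on it, and combining that faithfulness signal with answer consistency.

```python
def toy_model(steps):
    """Toy stand-in for an LLM's answer: sum the numeric steps, ignore filler."""
    return sum(int(s) for s in steps if s.isdigit())

def perturb(step):
    """Toy controlled perturbation: change a numeric step's value, garble text."""
    return str(int(step) + 1) if step.isdigit() else step + "?"

def faithfulness(steps, model):
    """Fraction of steps whose perturbation changes the final answer.

    A high score means the downstream computation genuinely depends on
    each intermediate step (intra-chain faithfulness, in spirit).
    """
    base = model(steps)
    changed = sum(
        model(steps[:i] + [perturb(steps[i])] + steps[i + 1:]) != base
        for i in range(len(steps))
    )
    return changed / len(steps)

def select_chain(chains, model, answer, alpha=0.5):
    """Rank candidate chains by a weighted mix of faithfulness and
    CoT-to-answer consistency; alpha is an illustrative weight."""
    def score(steps):
        consistency = 1.0 if model(steps) == answer else 0.0
        return alpha * faithfulness(steps, model) + (1 - alpha) * consistency
    return max(chains, key=score)
```

For example, the chain `["2", "3", "5"]` scores faithfulness 1.0 under the toy model (perturbing any step changes the sum), while `["2", "therefore", "3"]` scores 2/3, since the filler step has no causal effect on the answer.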

Yuxi Sun, Aoqi Zuo, Haotian Xie, Wei Gao, Mingming Gong, Jing Ma • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | GSM-8K | Accuracy 97.3 | 57 |
| Mathematical Reasoning | MATH500 | Accuracy 94.81 | 50 |
| Commonsense Reasoning | CommonsenseQA | Accuracy (pass@1) 86.2 | 45 |
