Illusions of Confidence? Diagnosing LLM Truthfulness via Neighborhood Consistency

About

As Large Language Models (LLMs) are increasingly deployed in real-world settings, correctness alone is insufficient. Reliable deployment requires maintaining truthful beliefs under contextual perturbations. Existing evaluations largely rely on point-wise confidence measures such as Self-Consistency, which can mask brittle beliefs. We show that even facts answered with perfect self-consistency can rapidly collapse under mild contextual interference. To address this gap, we propose Neighbor-Consistency Belief (NCB), a structural measure of belief robustness that evaluates response coherence across a conceptual neighborhood. To validate the effectiveness of NCB, we introduce a new cognitive stress-testing protocol that probes output stability under contextual interference. Experiments across multiple LLMs show that knowledge with high NCB scores is markedly more resistant to interference. Finally, we present Structure-Aware Training (SAT), which optimizes for a context-invariant belief structure and reduces long-tail knowledge brittleness by approximately 30%. Code will be available at https://github.com/zjunlp/belief.
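For intuition, below is a minimal Python sketch contrasting point-wise self-consistency with a neighborhood-level consistency score in the spirit of NCB, plus a simple contextual-interference probe. The `model` callable, the exact-match scoring, the neighborhood construction, and the distractor-prepending protocol are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: the scoring rules below are assumptions, not the
# paper's definitions. `model` is any callable mapping a prompt string to an
# answer string (e.g., a thin wrapper around an LLM API).
from collections import Counter
from typing import Callable, Sequence, Tuple


def self_consistency(model: Callable[[str], str], question: str, n_samples: int = 8) -> float:
    """Point-wise confidence: agreement among repeated answers to one question."""
    answers = [model(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][1] / n_samples


def neighbor_consistency_belief(
    model: Callable[[str], str],
    neighborhood: Sequence[Tuple[str, str]],
) -> float:
    """Structural score: fraction of coherent answers across a conceptual
    neighborhood of (related_question, expected_answer) probes, e.g.
    paraphrases, entailed facts, and multi-hop variants of one belief."""
    hits = sum(
        model(q).strip().lower() == a.strip().lower() for q, a in neighborhood
    )
    return hits / len(neighborhood)


def stress_probe(
    model: Callable[[str], str],
    question: str,
    expected: str,
    distractors: Sequence[str],
) -> float:
    """Contextual-interference probe: how often the answer to `question`
    survives when unrelated or misleading context is prepended."""
    survived = sum(
        model(f"{d}\n\n{question}").strip().lower() == expected.strip().lower()
        for d in distractors
    )
    return survived / len(distractors)
```

Under this reading, a fact can score 1.0 on `self_consistency` yet score poorly on `neighbor_consistency_belief` or `stress_probe`, which is exactly the brittleness the abstract describes.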

Haoming Xu, Ningyuan Zhao, Yunzhi Yao, Weihong Xu, Hongru Wang, Xinle Deng, Shumin Deng, Jeff Z. Pan, Huajun Chen, Ningyu Zhang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-task Language Understanding | MMLU | Accuracy: 80.1 | 842 |
| Mathematical Reasoning | GSM8K | Accuracy: 91 | 212 |
| Knowledge Acquisition | Newly learned facts | Base Accuracy: 93 | 4 |
| Robustness Evaluation | Stress Tests | Quantity Stress Score: 58.1 | 4 |

Other info

GitHub: https://github.com/zjunlp/belief
