
BiMind: A Dual-Head Reasoning Model with Attention-Geometry Adapter for Incorrect Information Detection

About

Incorrect information poses significant challenges by undermining content veracity and integrity, yet most detection approaches struggle to jointly balance textual content verification with external knowledge modification under collapsed attention geometries. To address this issue, we propose a dual-head reasoning framework, BiMind, which disentangles content-internal reasoning from knowledge-augmented reasoning. BiMind introduces three core innovations: (i) an attention-geometry adapter that reshapes attention logits via token-conditioned offsets and mitigates attention collapse; (ii) a self-retrieval knowledge mechanism, which constructs an in-domain semantic memory through kNN retrieval and injects retrieved neighbors via feature-wise linear modulation; and (iii) uncertainty-aware fusion strategies, including entropy-gated fusion and a trainable agreement head, stabilized by a symmetric Kullback-Leibler agreement regularizer. To quantify knowledge contributions, we define a novel metric, Value-of-eXperience (VoX), which measures instance-wise logit gains from knowledge-augmented reasoning. Experimental results on public datasets demonstrate that BiMind outperforms advanced detection approaches and provides interpretable diagnostics on when and why knowledge matters.
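The abstract's uncertainty-aware fusion can be illustrated with a minimal sketch. The paper's exact formulation is not given here, so the following is an assumption-laden toy version in NumPy: each head emits class logits, an entropy gate weights the more confident (lower-entropy) head, and a symmetric KL term penalizes disagreement between the two heads' distributions. All function names and the specific gating formula are illustrative, not the authors'.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p, eps=1e-12):
    # Shannon entropy per row; low entropy = confident prediction
    return -(p * np.log(p + eps)).sum(axis=-1)

def entropy_gated_fusion(logits_content, logits_knowledge):
    """Fuse the two heads' logits, up-weighting the lower-entropy head.

    Toy gate: weight on the content head grows as the knowledge head's
    entropy grows relative to the content head's (an assumed form; the
    paper also uses a trainable agreement head, omitted here).
    """
    h_c = entropy(softmax(logits_content))
    h_k = entropy(softmax(logits_knowledge))
    gate = h_k / (h_c + h_k + 1e-12)          # in [0, 1]
    gate = gate[..., None]
    return gate * logits_content + (1.0 - gate) * logits_knowledge

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric KL agreement regularizer between head distributions."""
    kl_pq = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)
    kl_qp = (q * (np.log(q + eps) - np.log(p + eps))).sum(axis=-1)
    return 0.5 * (kl_pq + kl_qp)
```

When the content head is sharply confident and the knowledge head is near-uniform, the gate pushes the fused logits toward the content head; the symmetric KL term, added to the training loss, discourages the two heads from drifting into contradictory predictions.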

Zhongxing Zhang, Emily K. Vraga, Jisu Huh, Jaideep Srivastava• 2026

Related benchmarks

Task                             Dataset    Metric     Result  Rank
Fact Verification               LIAR       F1 Score   63.3    24
Misinformation Detection        MMCoVaR    Accuracy   95.1    16
Incorrect Information Detection  ReCOVery   Accuracy   91.8    6
Incorrect Information Detection  MC Fake    Accuracy   88.7    6
