
CoRect: Context-Aware Logit Contrast for Hidden State Rectification to Resolve Knowledge Conflicts

About

Retrieval-Augmented Generation (RAG) often struggles with knowledge conflicts, where model-internal parametric knowledge overrides retrieved evidence, leading to unfaithful outputs. Existing approaches are often limited, relying either on superficial decoding adjustments or weight editing that necessitates ground-truth targets. Through layer-wise analysis, we attribute this failure to a parametric suppression phenomenon: specifically, in deep layers, certain FFN layers overwrite context-sensitive representations with memorized priors. To address this, we propose CoRect (Context-Aware Logit Contrast for Hidden State Rectification). By contrasting logits from contextualized and non-contextualized forward passes, CoRect identifies layers that exhibit high parametric bias without requiring ground-truth labels. It then rectifies the hidden states to preserve evidence-grounded information. Across question answering (QA) and summarization benchmarks, CoRect consistently improves faithfulness and reduces hallucinations compared to strong baselines.
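The mechanism described above can be sketched in a toy form. The snippet below is a minimal illustration, not the paper's implementation: it assumes a logit-lens-style projection of each layer's hidden state through an unembedding matrix, flags layers whose contextualized distribution collapses toward the context-free (parametric-only) distribution as the low-divergence outliers, and applies a simple contrastive push of the hidden state away from the parametric direction. The function names, the KL-based scoring, and the interpolation coefficient `alpha` are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-9):
    # KL(p || q) with a small epsilon for numerical safety
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def flag_biased_layers(ctx_hiddens, bare_hiddens, unembed, top_k=2):
    """Toy layer scorer (assumed form, not the paper's exact criterion).

    Projects each layer's hidden state to vocabulary logits via `unembed`
    (logit-lens style) for both the contextualized and non-contextualized
    passes.  Layers where the two distributions nearly coincide are treated
    as dominated by parametric knowledge (the context made no difference)
    and are flagged -- no ground-truth labels required.
    """
    divs = []
    for h_ctx, h_bare in zip(ctx_hiddens, bare_hiddens):
        p_ctx = softmax(h_ctx @ unembed)
        p_bare = softmax(h_bare @ unembed)
        divs.append(kl_divergence(p_ctx, p_bare))
    order = np.argsort(divs)          # ascending: lowest divergence first
    return sorted(order[:top_k].tolist())

def rectify(h_ctx, h_bare, alpha=0.5):
    # Illustrative rectification: push the contextualized hidden state
    # away from the parametric (no-context) direction by a factor alpha.
    return h_ctx + alpha * (h_ctx - h_bare)
```

In a real model the hidden states would come from two forward passes (with and without the retrieved passage prepended), and rectification would be applied only at the flagged layers before continuing the forward computation.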

Xuhua Ma, Richong Zhang, Zhijie Nie • 2026

Related benchmarks

Task | Dataset | Result | Rank
Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L 21.97 | 169
Question Answering | TriviaQA | EM 83 | 116
Question Answering | SQuAD | Exact Match 88.93 | 50
Abstractive Summarization | XSum (test) | ROUGE-L 20.04 | 44
Question Answering | NQ | EM 72.74 | 20
Question Answering | NQ-Swap | Exact Match 80.15 | 20
Question Answering | HotpotQA | Exact Match 45.67 | 20
Question Answering | TabMWP | EM 70.6 | 20
Abstractive Summarization | TofuEval | Overall Score 69.45 | 5
