
ContextFocus: Activation Steering for Contextual Faithfulness in Large Language Models

About

Large Language Models (LLMs) encode vast amounts of parametric knowledge during pre-training. As world knowledge evolves, effective deployment increasingly depends on their ability to faithfully follow externally retrieved context. When such evidence conflicts with the model's internal knowledge, LLMs often default to memorized facts, producing unfaithful outputs. In this work, we introduce ContextFocus, a lightweight activation steering approach that improves context faithfulness in such knowledge-conflict settings while preserving fluency and efficiency. Unlike prior approaches, ContextFocus requires no model finetuning and incurs minimal inference-time overhead, making it highly efficient. We evaluate it on the ConFiQA benchmark against strong baselines, including ContextDPO, COIECD, and prompting-based methods, and further show that it is complementary to prompting strategies and remains effective on larger models. Extensive experiments show that ContextFocus significantly improves contextual faithfulness, highlighting its effectiveness, robustness, and efficiency.
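As a rough illustration of the general activation steering idea the abstract refers to (a minimal sketch, not the paper's actual ContextFocus implementation): a precomputed steering vector is added to a layer's hidden activations at inference time via a forward hook, so no model weights are finetuned. The toy layer, steering vector, and strength `alpha` below are all hypothetical stand-ins.

```python
import torch

torch.manual_seed(0)

hidden_dim = 8
# Stand-in for one transformer block; a real setup would hook a chosen
# layer of a pretrained LLM instead.
layer = torch.nn.Linear(hidden_dim, hidden_dim)

# Hypothetical steering direction, e.g. a mean activation difference
# between context-faithful and parametric-answer examples (assumed
# precomputed offline).
steering_vector = torch.randn(hidden_dim)
alpha = 4.0  # steering strength (hyperparameter)

def steer(module, inputs, output):
    # Returning a value from a forward hook replaces the layer's output:
    # shift activations along the steering direction.
    return output + alpha * steering_vector

x = torch.randn(1, hidden_dim)

handle = layer.register_forward_hook(steer)
steered = layer(x)      # forward pass with steering applied
handle.remove()
baseline = layer(x)     # same input, hook removed

# The hook shifts activations by exactly alpha * steering_vector.
assert torch.allclose(steered - baseline, alpha * steering_vector)
```

Because the intervention is a single vector addition per hooked layer, the inference-time overhead is negligible compared with decoding itself, which matches the efficiency claim above.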

Nikhil Anand, Shwetha Somasundaram, Anirudh Phukan, Apoorv Saxena, Koyel Mukherjee • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Open-book generation under knowledge conflict | ConFiQA 1,500 subset | Ps Score | 77.53 | 32 |
| Open-book generation under knowledge conflict | ConFiQA MR 1,500 | Ps Score | 54.47 | 16 |
| Machine Reading | ConFiQA MR | Ps Score | 54.47 | 4 |
| Multiple-Choice | ConFiQA MC | Ps Score | 53.4 | 4 |
| Question Answering | ConFiQA QA | Ps | 74.73 | 4 |
