ContextFocus: Activation Steering for Contextual Faithfulness in Large Language Models
About
Large Language Models (LLMs) encode vast amounts of parametric knowledge during pre-training. As world knowledge evolves, effective deployment increasingly depends on their ability to faithfully follow externally retrieved context. When such evidence conflicts with the model's internal knowledge, LLMs often default to memorized facts, producing unfaithful outputs. In this work, we introduce ContextFocus, a lightweight activation steering approach that improves context faithfulness in such knowledge-conflict settings while preserving fluency and efficiency. Unlike prior approaches, ContextFocus requires no model finetuning and incurs minimal inference-time overhead. We evaluate it on the ConFiQA benchmark against strong baselines, including ContextDPO, COIECD, and prompting-based methods. We further show that our method is complementary to prompting strategies and remains effective on larger models. Extensive experiments demonstrate that ContextFocus significantly improves the contextual faithfulness of LLM outputs while remaining robust and efficient.
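Activation steering, in general, works by adding a fixed direction vector to a model's hidden activations at inference time, nudging generation toward a desired behavior without any weight updates. The snippet below is a minimal NumPy sketch of that general idea only; the steering vector, the layer it is applied at, and the strength `alpha` are illustrative assumptions, not the actual ContextFocus configuration:

```python
import numpy as np

def steer(hidden, steering_vector, alpha=1.0):
    """Add a scaled steering direction to hidden activations.

    hidden: (seq_len, d_model) activations at some chosen layer
    steering_vector: (d_model,) direction; in practice often derived from
        contrastive examples (an illustrative assumption here)
    alpha: steering strength
    """
    v = steering_vector / np.linalg.norm(steering_vector)  # unit direction
    return hidden + alpha * v  # broadcast over the sequence dimension

# Toy example: 4 tokens with 8-dimensional hidden states.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
v = rng.normal(size=8)
h_steered = steer(h, v, alpha=2.0)
print(h_steered.shape)  # (4, 8)
```

In a real transformer this update would typically be registered as a forward hook on one or more layers, so the intervention costs a single vector addition per token and adds negligible latency.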
Related benchmarks
| Task | Dataset | Result (Ps Score) | Rank |
|---|---|---|---|
| Open-book generation under knowledge conflict | ConFiQA (1,500 subset) | 77.53 | 32 |
| Open-book generation under knowledge conflict | ConFiQA MR (1,500 subset) | 54.47 | 16 |
| Machine Reading | ConFiQA MR | 54.47 | 4 |
| Multiple-Choice | ConFiQA MC | 53.4 | 4 |
| Question Answering | ConFiQA QA | 74.73 | 4 |