Towards Context-Robust LLMs: A Gated Representation Fine-tuning Approach
About
Large Language Models (LLMs) enhanced with external contexts, such as through retrieval-augmented generation (RAG), often face challenges in handling imperfect evidence. They tend to over-rely on external knowledge, making them vulnerable to misleading and unhelpful contexts. To address this, we propose the concept of context-robust LLMs, which can effectively balance internal knowledge with external context, similar to human cognitive processes. Specifically, context-robust LLMs should rely on external context only when lacking internal knowledge, identify contradictions between internal and external knowledge, and disregard unhelpful contexts. To achieve this goal, we introduce Grft, a lightweight and plug-and-play gated representation fine-tuning approach. Grft consists of two key components: a gating mechanism to detect and filter problematic inputs, and low-rank representation adapters to adjust hidden representations. By training a lightweight intervention function with only 0.0004% of the model size on fewer than 200 examples, Grft can effectively adapt LLMs towards context-robust behaviors.
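The abstract describes Grft as a gate that decides whether to intervene plus low-rank adapters that edit hidden representations. The sketch below is a minimal PyTorch illustration of one way such a gated low-rank intervention could look; the module name, `hidden_dim`, `rank`, the sigmoid gate, and the zero-initialization are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class GatedLowRankIntervention(nn.Module):
    """Illustrative sketch of a Grft-style intervention on one hidden layer.

    A learned scalar gate decides, per token, whether to intervene on the
    hidden state; a low-rank adapter supplies the edit. All hyperparameters
    here are hypothetical choices, not values from the paper.
    """

    def __init__(self, hidden_dim: int, rank: int = 4):
        super().__init__()
        # Gate: maps a hidden state to an intervention strength in (0, 1).
        self.gate = nn.Linear(hidden_dim, 1)
        # Low-rank adapter: project down to `rank`, then back up.
        self.down = nn.Linear(hidden_dim, rank, bias=False)
        self.up = nn.Linear(rank, hidden_dim, bias=False)
        # Zero-init the up-projection so the module starts as an identity.
        nn.init.zeros_(self.up.weight)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) from a frozen LLM layer.
        g = torch.sigmoid(self.gate(hidden))   # (batch, seq_len, 1)
        delta = self.up(self.down(hidden))     # low-rank representation edit
        # Gate near 0: pass the hidden state through unchanged (input judged
        # unproblematic); gate near 1: apply the learned edit.
        return hidden + g * delta
```

In a plug-and-play setup of this kind, the base model would stay frozen and only the gate and adapter weights (a tiny fraction of the model's parameters) would be trained, with the module attached at a chosen layer, e.g. via a forward hook.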
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Contextual Robustness Question Answering | ConflictQA (known queries) | Accuracy (Contradictory Short) | 82.49 | 22 |
| Contextual Robustness Question Answering | ConflictQA (unknown queries) | Accuracy (Short Context) | 98.23 | 22 |
| Question Answering | NQ (1,200 noisy contexts) | Unhelpful | 83.13 | 9 |
| Question Answering | Knowns.QA (1,000-sample subset of COUNTERFACT) | Misleading Rate | 62.52 | 9 |