
Towards Context-Robust LLMs: A Gated Representation Fine-tuning Approach

About

Large Language Models (LLMs) enhanced with external contexts, such as through retrieval-augmented generation (RAG), often face challenges in handling imperfect evidence. They tend to over-rely on external knowledge, making them vulnerable to misleading and unhelpful contexts. To address this, we propose the concept of context-robust LLMs, which can effectively balance internal knowledge with external context, similar to human cognitive processes. Specifically, context-robust LLMs should rely on external context only when lacking internal knowledge, identify contradictions between internal and external knowledge, and disregard unhelpful contexts. To achieve this goal, we introduce Grft, a lightweight and plug-and-play gated representation fine-tuning approach. Grft consists of two key components: a gating mechanism to detect and filter problematic inputs, and low-rank representation adapters to adjust hidden representations. By training a lightweight intervention function with only 0.0004% of the model size on fewer than 200 examples, Grft can effectively adapt LLMs towards context-robust behaviors.
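The two components described above can be illustrated with a minimal sketch: a scalar gate that decides whether an input's hidden state needs intervention, and a low-rank adapter that shifts that hidden state when the gate fires. All names, shapes, and the rank value here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GatedLowRankIntervention(nn.Module):
    """Sketch of a gated low-rank representation edit.

    A learned gate scores each hidden state h; a low-rank adapter
    (down-projection followed by up-projection) produces an edit
    direction, applied in proportion to the gate value. Hypothetical
    structure, loosely following the Grft description above.
    """

    def __init__(self, hidden_dim: int, rank: int = 4):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, 1)     # detects problematic inputs
        self.down = nn.Linear(hidden_dim, rank)  # project to low-rank space
        self.up = nn.Linear(rank, hidden_dim)    # project back to hidden size

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(h))          # gate value in (0, 1)
        edit = self.up(self.down(h))             # low-rank edit direction
        return h + g * edit                      # intervene only when gated on

# The trainable footprint stays tiny relative to the base model:
layer = GatedLowRankIntervention(hidden_dim=4096, rank=4)
n_params = sum(p.numel() for p in layer.parameters())
```

With `hidden_dim=4096` and `rank=4`, the module holds roughly 41k parameters, which is consistent in spirit with the paper's claim of training only a tiny fraction of the model size.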

Shenglai Zeng, Pengfei He, Kai Guo, Tianqi Zheng, Hanqing Lu, Yue Xing, Hui Liu • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Contextual Robustness Question Answering | ConflictQA (Known queries) | Accuracy (Contradictory Short): 82.49 | 22 |
| Contextual Robustness Question Answering | ConflictQA (Unknown queries) | Accuracy (Short Context): 98.23 | 22 |
| Question Answering | NQ (1,200 noisy contexts) | Unhelpful: 83.13 | 9 |
| Question Answering | Knowns.QA (1,000-sample subset of COUNTERFACT) | Misleading Rate: 62.52 | 9 |
