Context-DPO: Aligning Language Models for Context-Faithfulness
About
Reliable responses from large language models (LLMs) require adherence to user instructions and retrieved information. While alignment techniques help LLMs align with human intentions and values, improving context-faithfulness through alignment remains underexplored. To address this, we propose $\textbf{Context-DPO}$, the first alignment method specifically designed to enhance LLMs' context-faithfulness. We introduce $\textbf{ConFiQA}$, a benchmark that simulates Retrieval-Augmented Generation (RAG) scenarios with knowledge conflicts to evaluate context-faithfulness. By leveraging faithful and stubborn responses to questions with provided context from ConFiQA, our Context-DPO aligns LLMs through direct preference optimization. Extensive experiments demonstrate that Context-DPO significantly improves context-faithfulness, achieving 35% to 280% improvements on popular open-source models. Further analysis shows that Context-DPO preserves LLMs' generative capabilities while providing interpretable insights into context utilization. Our code and data are released at https://github.com/byronBBL/Context-DPO.
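The alignment step above pairs a context-faithful response (preferred) with a stubborn, parametric-memory response (rejected) and optimizes the standard DPO objective over these pairs. Below is a minimal pure-Python sketch of the per-pair DPO loss; the function name and scalar log-probability interface are illustrative assumptions, not the paper's actual implementation.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: push the policy to prefer the faithful
    (context-following) response over the stubborn one, relative
    to a frozen reference model.  All inputs are sequence
    log-probabilities (hypothetical scalar interface)."""
    pi_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    logit = beta * (pi_logratio - ref_logratio)
    # -log(sigmoid(logit)), written stably as log1p(exp(-logit))
    return math.log1p(math.exp(-logit))
```

When the policy matches the reference the loss is log 2; as the policy assigns relatively more probability to the faithful response than the reference does, the loss decreases toward zero.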
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Open-book generation under knowledge conflict | ConFiQA 1,500 subset | Ps Score | 81.07 | 32 |
| Instruction Following | MQuAKE | Accuracy | 82.5 | 24 |
| Context-faithful Question Answering | ConFiQA | -- | -- | 24 |
| Retrieval Following | ConFiQA QA 1.0 (test) | Pc | 92.3 | 20 |
| Retrieval Following | ConFiQA MR 1.0 (test) | Pc | 61.2 | 20 |
| Retrieval Following | ConFiQA MC 1.0 (test) | Pc | 54.9 | 20 |
| Retrieval Following | Natural Questions (test) | Pc | 98.4 | 20 |
| Open-book generation under knowledge conflict | ConFiQA MR 1,500 | Ps Score | 59.8 | 16 |
| Context-faithful Multi-hop Reasoning | ConFiQA MR | -- | -- | 8 |
| Context-faithful Reasoning | ConFiQA MC | -- | -- | 8 |