Context-DPO: Aligning Language Models for Context-Faithfulness
About
Reliable responses from large language models (LLMs) require adherence to user instructions and retrieved information. While alignment techniques help LLMs align with human intentions and values, improving context-faithfulness through alignment remains underexplored. To address this, we propose $\textbf{Context-DPO}$, the first alignment method specifically designed to enhance LLMs' context-faithfulness. We introduce $\textbf{ConFiQA}$, a benchmark that simulates Retrieval-Augmented Generation (RAG) scenarios with knowledge conflicts to evaluate context-faithfulness. By leveraging faithful and stubborn responses to questions with provided context from ConFiQA, Context-DPO aligns LLMs through direct preference optimization. Extensive experiments demonstrate that Context-DPO significantly improves context-faithfulness, achieving 35% to 280% improvements on popular open-source models. Further analysis shows that Context-DPO preserves LLMs' generative capabilities while providing interpretable insights into context utilization. Our code and data are released at https://github.com/byronBBL/Context-DPO
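The preference signal described above (faithful vs. stubborn responses) plugs into the standard DPO objective. The sketch below is a minimal, framework-free illustration of that loss for a single preference pair, assuming the usual DPO formulation; the function and variable names are hypothetical, and inputs stand for summed token log-probabilities under the policy and frozen reference models.

```python
import math

def dpo_loss(policy_chosen, policy_rejected,
             ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one pair: 'chosen' is the context-faithful response,
    'rejected' is the stubborn (parameter-memory) response.

    Each argument is a summed token log-probability of that response
    under the policy model or the frozen reference model.
    """
    # Implicit reward margin between chosen and rejected responses.
    margin = beta * ((policy_chosen - ref_chosen)
                     - (policy_rejected - ref_rejected))
    # -log(sigmoid(margin)), written stably as log(1 + exp(-margin)).
    return math.log1p(math.exp(-margin))
```

As a sanity check, when the policy matches the reference (margin 0) the loss is log 2, and it decreases as the policy raises the probability of the faithful response relative to the stubborn one.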
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | SQuAD | F1 | 62.4 | 134 |
| Cardiac diagnosis | MIMIC-IV-Ext | F1@3 | 53.1 | 42 |
| Multiple-choice Question Answering | ConFiQA MC | F1 Score | 76.9 | 42 |
| Faithfulness Evaluation | FaithEval | F1 Score | 67.2 | 42 |
| Multi-step Reasoning Question Answering | ConFiQA MR (test) | F1 Score | 78.5 | 36 |
| Open-ended Question Answering | ConFiQA (test) | F1 Score | 83.7 | 36 |
| Question Answering | SQuAD KRE-curated version | F1 Score | 64.4 | 36 |
| Open-book generation under knowledge conflict | ConFiQA 1,500 subset | Ps Score | 81.07 | 32 |
| Question Answering | TVQA In-Domain (test) | Precision | 84.32 | 26 |
| Question Answering | NQ-Open In-Domain (test) | Precision | 56.82 | 26 |