Discerning and Resolving Knowledge Conflicts through Adaptive Decoding with Contextual Information-Entropy Constraint
About
Large language models internalize enormous parametric knowledge during pre-training. Concurrently, realistic applications necessitate external contextual knowledge to aid models on the underlying tasks. This raises a crucial dilemma known as knowledge conflicts, where the contextual knowledge clashes with the parametric knowledge. However, existing decoding methods specialize in resolving knowledge conflicts and can inadvertently degrade performance in the absence of conflicts. In this paper, we propose an adaptive decoding method, termed contextual information-entropy constraint decoding (COIECD), to discern whether knowledge conflicts occur and to resolve them. It can improve the model's faithfulness to conflicting context while maintaining high performance on non-conflicting context. Our experiments show that COIECD exhibits strong performance and robustness over knowledge conflicts in realistic datasets. Code is available.
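The core idea described above can be illustrated with a minimal sketch. The function below is a hypothetical single decoding step, not the paper's exact algorithm: it measures the entropy of the context-conditioned next-token distribution, and if that entropy falls outside an assumed constraint band (suggesting a knowledge conflict), it applies a contrastive-style adjustment against the parametric (context-free) distribution. All names, the band thresholds, and the `alpha` weight are illustrative assumptions.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (in nats) of a probability distribution
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + 1e-12)))

def coiecd_step(p_ctx, p_param, lower, upper, alpha=1.0):
    """Sketch of one adaptive decoding step (hypothetical, not the official impl).

    p_ctx:   next-token distribution conditioned on the external context
    p_param: next-token distribution from parametric knowledge alone
    [lower, upper]: assumed entropy constraint band for "no conflict"
    """
    p_ctx = np.asarray(p_ctx, dtype=float)
    p_param = np.asarray(p_param, dtype=float)
    if lower <= entropy(p_ctx) <= upper:
        # Entropy inside the band: treat as non-conflicting, decode normally
        return int(np.argmax(p_ctx))
    # Entropy outside the band: treat as a conflict and amplify the
    # context-induced shift via a contrastive score against p_param
    log_ctx = np.log(p_ctx + 1e-12)
    scores = log_ctx + alpha * (log_ctx - np.log(p_param + 1e-12))
    return int(np.argmax(scores))
```

For example, a sharply peaked contextual distribution that disagrees with the parametric one falls below the entropy band, so the contrastive branch favors the token the context promotes rather than the model's prior belief.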
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | TriviaQA | EM | 80.95 | 182 |
| Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L | 21.17 | 169 |
| Question Answering | SQuAD | F1 | 84.99 | 134 |
| Question Answering | SQuAD | Exact Match | 88.9 | 83 |
| Multi-hop QA | HotpotQA | Exact Match | 19.1 | 76 |
| Question Answering | NQ | EM | 48.84 | 69 |
| Open-domain Question Answering | MS Marco | -- | -- | 48 |
| Abstractive Summarization | XSum (test) | ROUGE-L | 15.77 | 44 |
| Faithfulness Evaluation | FaithEval | F1 Score | 66.6 | 42 |
| Multiple-choice Question Answering | ConFiQA MC | F1 Score | 66.7 | 42 |