
Making Retrieval-Augmented Language Models Robust to Irrelevant Context

About

Retrieval-augmented language models (RALMs) hold promise to produce language understanding systems that are factual, efficient, and up-to-date. An important desideratum of RALMs is that retrieved information helps model performance when it is relevant, and does not harm performance when it is not. This is particularly important in multi-hop reasoning scenarios, where misuse of irrelevant evidence can lead to cascading errors. However, recent work has shown that retrieval augmentation can sometimes have a negative effect on performance. In this work, we present a thorough analysis on five open-domain question answering benchmarks, characterizing cases when retrieval reduces accuracy. We then propose two methods to mitigate this issue. First, a simple baseline that filters out retrieved passages that do not entail question-answer pairs according to a natural language inference (NLI) model. This is effective in preventing performance reduction, but at a cost of also discarding relevant passages. Thus, we propose a method for automatically generating data to fine-tune the language model to properly leverage retrieved passages, using a mix of relevant and irrelevant contexts at training time. We empirically show that even 1,000 examples suffice to train the model to be robust to irrelevant contexts while maintaining high performance on examples with relevant ones.

Ori Yoran, Tomer Wolfson, Ori Ram, Jonathan Berant • 2023

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
| --- | --- | --- | --- | --- |
| Open-domain Question Answering | NaturalQuestions (NQ) | SubEM | 49.5 | 40 |
| Open-domain Question Answering | TriviaQA | SubEM | 69.12 | 40 |
| Multi-hop Question Answering | HotpotQA | SubEM | 32.77 | 40 |
| Question Answering | NQ, TriviaQA, and WebQ (test) | Accuracy | 51.6 | 21 |
| Retrieval-Augmented Generation | RAG-Bench | F1 (Golden Only) | 80.1 | 11 |
| Retrieval-Augmented Generation | PubMedQA | Accuracy | 28.4 | 8 |
| Retrieval-Augmented Generation | CRAG | Finance Accuracy | 14.6 | 5 |
| Retrieval-Augmented Generation | BioASQ | Accuracy | 24.7 | 5 |
| Question Answering | PopQA | Pattern-based Score | 56.68 | 3 |

Other info

Code
