Generation-Augmented Retrieval for Open-domain Question Answering
About
We propose Generation-Augmented Retrieval (GAR) for open-domain question answering, which augments a query by generating heuristically discovered relevant contexts, without requiring external resources as supervision. We demonstrate that the generated contexts substantially enrich the semantics of the queries, and that GAR with sparse representations (BM25) achieves performance comparable to or better than state-of-the-art dense retrieval methods such as DPR. We show that generating diverse contexts for a query is beneficial: fusing their retrieval results consistently yields better retrieval accuracy. Moreover, since sparse and dense representations are often complementary, GAR can easily be combined with DPR for even better performance. Equipped with an extractive reader, GAR achieves state-of-the-art performance on the Natural Questions and TriviaQA datasets under the extractive QA setup, and it consistently outperforms other retrieval methods when the same generative reader is used.
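The idea above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' released code: the "generated contexts" are hard-coded stand-ins for a seq2seq generator's output (the paper fine-tunes a generator to produce answers, sentences, and titles), the corpus is three tiny documents rather than Wikipedia passages, and the rankings are merged with reciprocal-rank fusion as a simple proxy for fusing the retrieval results of the different generator outputs.

```python
# Toy sketch of GAR-style retrieval: augment a query with generated
# contexts, score each augmented query with BM25, and fuse the rankings.
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of `query` against every document in `corpus`."""
    docs = [d.lower().split() for d in corpus]
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query.lower().split():
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def fuse(rankings, k=60):
    """Reciprocal-rank fusion of several ranked lists of document indices."""
    fused = Counter()
    for ranking in rankings:
        for rank, idx in enumerate(ranking):
            fused[idx] += 1.0 / (k + rank + 1)
    return [idx for idx, _ in fused.most_common()]

corpus = [
    "william shakespeare wrote the play hamlet",
    "the eiffel tower is in paris",
    "hamlet is a small village",
]
query = "who wrote hamlet"
# Hard-coded stand-ins for generated contexts that enrich the query.
generated = [
    "william shakespeare",                         # a generated answer
    "hamlet is a tragedy by william shakespeare",  # a generated sentence
]
augmented = [query] + [query + " " + g for g in generated]
rankings = [
    sorted(range(len(corpus)), key=lambda i: -bm25_scores(q, corpus)[i])
    for q in augmented
]
print(fuse(rankings))  # document 0 (the Shakespeare passage) ranks first
```

Even in this toy setting the augmented queries add discriminative terms ("shakespeare", "tragedy") that the bare question lacks, which is the effect the abstract describes: the generated contexts enrich the query's semantics so that plain sparse retrieval matches the right passage.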
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Open Question Answering | Natural Questions (NQ) (test) | Exact Match (EM) | 41.8 | 134 |
| Open-domain Question Answering | TriviaQA (test) | Exact Match | 62.7 | 80 |
| Passage retrieval | TriviaQA (test) | Top-100 Acc | 90.1 | 67 |
| Retrieval | Natural Questions (test) | Top-5 Recall | 81.9 | 62 |
| Open-domain Question Answering | TriviaQA open (test) | EM | 62.7 | 59 |
| Open-domain Question Answering | Natural Questions (NQ) | Exact Match (EM) | 45.3 | 46 |
| Passage retrieval | Natural Questions (NQ) (test) | Top-20 Accuracy | 74.4 | 45 |
| Passage retrieval | WebQuestions (WQ) (test) | Top-20 Accuracy | 85.4 | 37 |
| Retrieval | Entity Questions (test) | Top-100 Retrieval Accuracy | 91.8 | 20 |
| Passage retrieval | TREC (test) | Top-20 Accuracy | 95.5 | 17 |