Domain-matched Pre-training Tasks for Dense Retrieval
About
Pre-training on larger datasets with ever increasing model size is now a proven recipe for increased performance across almost all NLP tasks. A notable exception is information retrieval, where additional pre-training has so far failed to produce convincing results. We show that, with the right pre-training setup, this barrier can be overcome. We demonstrate this by pre-training large bi-encoder models on 1) a recently released set of 65 million synthetically generated questions, and 2) 200 million post-comment pairs from a preexisting dataset of Reddit conversations made available by pushshift.io. We evaluate on a set of information retrieval and dialogue retrieval benchmarks, showing substantial improvements over supervised baselines.
Barlas Oğuz, Kushal Lakhotia, Anchit Gupta, Patrick Lewis, Vladimir Karpukhin, Aleksandra Piktus, Xilun Chen, Sebastian Riedel, Wen-tau Yih, Sonal Gupta, Yashar Mehdad · 2021
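The bi-encoder setup described in the abstract encodes queries and passages independently and scores them by a dot product, typically trained with in-batch negatives. The sketch below is a minimal illustration of that general recipe, not the authors' released code; the backbone model name and the `encode` helper are placeholder assumptions.

```python
# Minimal bi-encoder sketch (not the authors' implementation): two independent
# encoders map questions and passages to dense vectors; relevance is the dot
# product, trained with in-batch negatives via cross-entropy over the score matrix.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"  # placeholder backbone; the paper pre-trains larger models
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
q_encoder = AutoModel.from_pretrained(MODEL_NAME)   # question/query encoder
p_encoder = AutoModel.from_pretrained(MODEL_NAME)   # passage/response encoder

def encode(model, texts):
    """Encode a list of texts into dense vectors using the [CLS] representation."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return model(**batch).last_hidden_state[:, 0]    # shape [batch, hidden]

def in_batch_negative_loss(questions, passages):
    """Contrastive loss: each question's positive passage sits at the same batch
    index, and all other passages in the batch serve as negatives."""
    q = encode(q_encoder, questions)                  # [B, H]
    p = encode(p_encoder, passages)                   # [B, H]
    scores = q @ p.T                                  # [B, B] dot-product similarities
    targets = torch.arange(len(questions))            # diagonal entries are positives
    return F.cross_entropy(scores, targets)

# Toy usage: one (question, positive passage) pair per row.
loss = in_batch_negative_loss(
    ["who wrote hamlet?", "capital of france"],
    ["Hamlet is a tragedy written by William Shakespeare.",
     "Paris is the capital and largest city of France."],
)
loss.backward()
```

The same scoring scheme applies to the dialogue retrieval benchmarks below, with the "question" replaced by the conversational context and the "passage" by a candidate response.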
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Passage retrieval | MS MARCO (dev) | MRR@10 | 34 | 116 |
| Retrieval | MS MARCO (dev) | MRR@10 | 0.314 | 84 |
| Passage Ranking | MS MARCO (dev) | MRR@10 | 34 | 73 |
| Retrieval | Natural Questions (test) | Top-5 Recall | 76.9 | 62 |
| Passage retrieval | Natural Questions (NQ) (test) | Top-20 Accuracy | 84.68 | 45 |
| Information Retrieval | Natural Questions (test) | Recall@20 | 84 | 25 |
| Retrieval | NQ (test) | Top-20 Accuracy | 0.847 | 11 |
| Dialogue Retrieval | ConvAI2 | R@1 | 90.7 | 9 |
| Dialogue Retrieval | Ubuntu v2 | R@1 | 86.3 | 9 |
| Dialogue Retrieval | DSTC7 | R@1 | 68.2 | 9 |
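For reference, the metrics in the table (MRR@10, Recall@k / top-k accuracy, R@1) follow the standard ranking definitions. The sketch below is a generic illustration of those definitions, assuming a single gold passage per query; it is not tied to any particular leaderboard's evaluation script.

```python
# Generic implementations of the retrieval metrics reported above.
from typing import List

def mrr_at_k(ranked_ids: List[List[str]], gold_ids: List[str], k: int = 10) -> float:
    """Mean reciprocal rank of the first relevant passage within the top k."""
    total = 0.0
    for ranking, gold in zip(ranked_ids, gold_ids):
        for rank, pid in enumerate(ranking[:k], start=1):
            if pid == gold:
                total += 1.0 / rank
                break
    return total / len(ranked_ids)

def recall_at_k(ranked_ids: List[List[str]], gold_ids: List[str], k: int = 20) -> float:
    """Fraction of queries whose gold passage appears in the top k (top-k accuracy)."""
    hits = sum(gold in ranking[:k] for ranking, gold in zip(ranked_ids, gold_ids))
    return hits / len(ranked_ids)

# Example: R@1 is simply recall_at_k with k=1.
rankings = [["p3", "p1", "p7"], ["p2", "p9", "p4"]]
gold = ["p1", "p2"]
print(mrr_at_k(rankings, gold, k=10))    # (1/2 + 1/1) / 2 = 0.75
print(recall_at_k(rankings, gold, k=1))  # only the second query hits at rank 1 -> 0.5
```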