Domain-matched Pre-training Tasks for Dense Retrieval

About

Pre-training on larger datasets with ever-increasing model size is now a proven recipe for increased performance across almost all NLP tasks. A notable exception is information retrieval, where additional pre-training has so far failed to produce convincing results. We show that, with the right pre-training setup, this barrier can be overcome. We demonstrate this by pre-training large bi-encoder models on 1) a recently released set of 65 million synthetically generated questions, and 2) 200 million post-comment pairs from a pre-existing dataset of Reddit conversations made available by pushshift.io. We evaluate on a set of information retrieval and dialogue retrieval benchmarks, showing substantial improvements over supervised baselines.

Barlas Oğuz, Kushal Lakhotia, Anchit Gupta, Patrick Lewis, Vladimir Karpukhin, Aleksandra Piktus, Xilun Chen, Sebastian Riedel, Wen-tau Yih, Sonal Gupta, Yashar Mehdad • 2021
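
The recipe the abstract describes is the standard dense bi-encoder setup: one encoder maps questions (or Reddit posts) to vectors, another maps passages (or comments), and training pulls matched pairs together in embedding space. Below is a minimal sketch of the in-batch-negative contrastive objective commonly used to train such models (a DPR-style loss; the exact objective and the random stand-in embeddings are assumptions for illustration, not the authors' published code):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_vecs, p_vecs):
    """DPR-style in-batch-negative loss (assumed here for illustration).

    Row i of q_vecs and row i of p_vecs form a matched pair (question/passage
    or post/comment); every other passage in the batch acts as a negative.
    """
    scores = q_vecs @ p_vecs.T              # (B, B) dot-product similarities
    targets = torch.arange(scores.size(0))  # positives sit on the diagonal
    return F.cross_entropy(scores, targets)

# Toy batch: 8 matched pairs with random 768-dim stand-in embeddings.
# In the paper's setup these would come from large transformer encoders.
q = torch.randn(8, 768)
p = torch.randn(8, 768)
print(in_batch_contrastive_loss(q, p).item())
```

At pre-training time the pairs would be drawn from the 65 million synthetic questions or the 200 million Reddit post-comment pairs; the resulting bi-encoder is then fine-tuned on the supervised data for each retrieval benchmark.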

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Passage retrieval | MS MARCO (dev) | MRR@10 | 34 | 116 |
| Retrieval | MS MARCO (dev) | MRR@10 | 0.314 | 84 |
| Passage ranking | MS MARCO (dev) | MRR@10 | 34 | 73 |
| Retrieval | Natural Questions (test) | Top-5 Recall | 76.9 | 62 |
| Passage retrieval | Natural Questions (test) | Top-20 Accuracy | 84.68 | 45 |
| Information retrieval | Natural Questions (test) | Recall@20 | 84 | 25 |
| Retrieval | Natural Questions (test) | Top-20 Accuracy | 0.847 | 11 |
| Dialogue retrieval | ConvAI2 | R@1 | 90.7 | 9 |
| Dialogue retrieval | Ubuntu v2 | R@1 | 86.3 | 9 |
| Dialogue retrieval | DSTC7 | R@1 | 68.2 | 9 |

Showing 10 of 11 rows.
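
For reference, here is how the ranking metrics in the table (MRR@10, Top-k accuracy/recall, R@1) are conventionally computed; the helper names are ours, and this illustrates the standard definitions rather than any leaderboard's evaluation code:

```python
def mrr_at_k(runs, k=10):
    """Mean Reciprocal Rank@k: average over queries of 1/rank of the first
    relevant result within the top k (0 if none appears)."""
    total = 0.0
    for rels in runs:  # one list of relevance flags per query
        rr = 0.0
        for rank, rel in enumerate(rels[:k], start=1):
            if rel:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(runs)

def top_k_hit_rate(runs, k=20):
    """Fraction of queries with at least one relevant result in the top k;
    this is what the table reports as Top-20 Accuracy / Recall@20 (and R@1
    is the special case k=1)."""
    return sum(any(rels[:k]) for rels in runs) / len(runs)

# Two toy queries: first relevant hit at rank 2 and rank 1 respectively.
runs = [[False, True, False], [True, False, False]]
print(mrr_at_k(runs, k=10))        # (1/2 + 1/1) / 2 = 0.75
print(top_k_hit_rate(runs, k=20))  # 2/2 = 1.0
```

Note that different leaderboards report the same metric on different scales: the MRR@10 entries of 34 and 0.314, and the Top-20 Accuracy entries of 84.68 and 0.847, are percentages versus fractions, not contradictory results.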
