
Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?

About

Despite their recent popularity and well-known advantages, dense retrievers still lag behind sparse methods such as BM25 in their ability to reliably match salient phrases and rare entities in the query and to generalize to out-of-domain data. It has been argued that this is an inherent limitation of dense models. We rebut this claim by introducing the Salient Phrase Aware Retriever (SPAR), a dense retriever with the lexical matching capacity of a sparse model. We show that a dense Lexical Model Λ can be trained to imitate a sparse one, and SPAR is built by augmenting a standard dense retriever with Λ. Empirically, SPAR shows superior performance on a range of tasks, including five question answering datasets, MS MARCO passage retrieval, and the EntityQuestions and BEIR benchmarks for out-of-domain evaluation, exceeding the performance of state-of-the-art dense and sparse retrievers. The code and models of SPAR are available at: https://github.com/facebookresearch/dpr-scale/tree/main/spar
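As a rough illustration of the augmentation idea, the sketch below assumes the combined SPAR score is an interpolation of the two inner products — dense score plus a weighted lexical score — which lets both components live in a single concatenated vector and a single nearest-neighbour index. The weight name `mu` and the helper functions are illustrative, not the paper's API.

```python
import numpy as np

def spar_query(dense_q, lex_q, mu):
    # Weight the lexical component on the query side only, so the
    # passage index never has to be rebuilt when mu changes.
    return np.concatenate([dense_q, mu * lex_q])

def spar_passage(dense_p, lex_p):
    # Passage side: plain concatenation of the two embeddings.
    return np.concatenate([dense_p, lex_p])

# Demo: the combined inner product equals
# (dense score) + mu * (lexical score).
rng = np.random.default_rng(0)
mu = 0.7
dq, lq = rng.normal(size=768), rng.normal(size=768)
dp, lp = rng.normal(size=768), rng.normal(size=768)

combined = spar_query(dq, lq, mu) @ spar_passage(dp, lp)
expected = dq @ dp + mu * (lq @ lp)
assert np.isclose(combined, expected)
```

Because the combined vector is just a concatenation, any off-the-shelf dense index (e.g. FAISS inner-product search) can serve the augmented retriever unchanged.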

Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, Wen-tau Yih • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Passage retrieval | MS MARCO (dev) | MRR@10: 38.6 | 116 |
| Open-domain Question Answering | TriviaQA (test) | -- | 80 |
| Passage retrieval | TriviaQA (test) | Top-100 Accuracy: 83.2 | 67 |
| Passage retrieval | Natural Questions (NQ) (test) | Top-20 Accuracy: 62.9 | 45 |
| Zero-shot Information Retrieval | BEIR | TREC-COVID NDCG@10 (zero-shot): 76.4 | 27 |
| Open-domain Question Answering | CuratedTREC (test) | -- | 26 |
| Information Retrieval | Natural Questions (test) | Recall@20: 83.6 | 25 |
| Passage retrieval | SQuAD (test) | Top-100 Accuracy: 83.6 | 22 |
| Retrieval | EntityQuestions (test) | Top-100 Retrieval Accuracy: 80 | 20 |
| Information Retrieval | MS MARCO in-domain | NDCG@10: 0.228 | 18 |

Showing 10 of 18 rows.

Other info

Code: https://github.com/facebookresearch/dpr-scale/tree/main/spar