
SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval

About

In neural Information Retrieval (IR), ongoing research is directed towards improving the first retriever in ranking pipelines. Learning dense embeddings to conduct retrieval using efficient approximate nearest neighbors methods has proven to work well. Meanwhile, there has been a growing interest in learning sparse representations for documents and queries, which could inherit the desirable properties of bag-of-words models such as the exact matching of terms and the efficiency of inverted indexes. Introduced recently, the SPLADE model provides highly sparse representations and competitive results with respect to state-of-the-art dense and sparse approaches. In this paper, we build on SPLADE and propose several significant improvements in terms of effectiveness and/or efficiency. More specifically, we modify the pooling mechanism, benchmark a model solely based on document expansion, and introduce models trained with distillation. We also report results on the BEIR benchmark. Overall, SPLADE is considerably improved, with more than 9% gains on NDCG@10 on TREC DL 2019, leading to state-of-the-art results on the BEIR benchmark.
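The sparse expansion described above can be sketched as follows: each vocabulary term receives a log-saturated activation derived from the MLM-head logits of a BERT-style encoder, pooled over input positions (SPLADE v2 replaces the original sum pooling with max pooling). This is an illustrative NumPy sketch under those assumptions, not the authors' implementation; the function name and array shapes are hypothetical.

```python
import numpy as np

def splade_max_pool(mlm_logits):
    """Illustrative sketch of SPLADE v2 term weighting (not the official code).

    mlm_logits: array of shape (seq_len, vocab_size), the MLM-head logits a
    BERT-style encoder produces for each input token.

    Each vocabulary term j gets weight max_i log(1 + ReLU(logits[i, j])):
    log-saturated activations, max-pooled over input positions. Most entries
    are zero, so the result behaves like a bag-of-words vector and can be
    stored in an inverted index.
    """
    activated = np.log1p(np.maximum(mlm_logits, 0.0))  # log(1 + ReLU(x))
    return activated.max(axis=0)                       # max pool over positions

# Toy example: 3 input positions, vocabulary of 5 terms.
logits = np.array([
    [2.0, -1.0, 0.0, 0.5, -3.0],
    [0.0, -0.5, 1.0, 0.0, -1.0],
    [3.0, -2.0, 0.0, 0.0, -0.5],
])
weights = splade_max_pool(logits)
# Terms whose logits are negative at every position (here terms 1 and 4)
# are zeroed by the ReLU, which is what makes the representation sparse.
```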

Thibault Formal, Carlos Lassance, Benjamin Piwowarski, Stéphane Clinchant • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Passage retrieval | MS MARCO (dev) | MRR@10 | 36.8 | 116
Information Retrieval | BEIR: FiQA-2018 (test) | Score | 62.1 | 90
Retrieval | MS MARCO (dev) | MRR@10 | 0.365 | 84
Retrieval | TREC DL 2019 | NDCG@10 | 72.9 | 83
Information Retrieval | SciFact (test) | NDCG@10 | 0.708 | 65
Information Retrieval | NFCorpus (test) | NDCG@10 | 0.34 | 65
Information Retrieval | TREC DL 19 | NDCG@10 | 70.6 | 61
Information Retrieval | TREC DL 20 | NDCG@10 | 68.7 | 50
Passage Ranking | TREC DL 2019 (test) | NDCG@10 | 68.4 | 33
Passage Ranking | TREC DL 2019 | NDCG@10 | 0.729 | 32

Showing 10 of 34 rows.

Other info

Code
