Condenser: a Pre-training Architecture for Dense Retrieval

About

Pre-trained Transformer language models (LMs) have become the go-to text representation encoders. Prior research fine-tunes deep LMs to encode text sequences such as sentences and passages into single dense vector representations for efficient text comparison and retrieval. However, dense encoders require large amounts of data and sophisticated techniques to train effectively, and they suffer in low-data situations. This paper finds that a key reason is that a standard LM's internal attention structure is not ready-to-use for dense encoders, which need to aggregate text information into the dense representation. We propose to pre-train towards the dense encoder with a novel Transformer architecture, Condenser, in which LM prediction CONditions on DENSE Representation. Our experiments show that Condenser improves over standard LMs by large margins on various text retrieval and similarity tasks.
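
The core idea can be sketched in a few lines of PyTorch. The class name, layer counts, and hyperparameters below are illustrative assumptions rather than the authors' released implementation: a short Condenser "head" re-predicts masked tokens from the last-layer CLS state concatenated with early-layer token states, so passage-level information is forced to flow through the CLS vector.

```python
# Hypothetical sketch of the Condenser pre-training architecture (not the
# official code): backbone split into early/late layers plus a small head.
import torch
import torch.nn as nn


class CondenserSketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=768, n_heads=12,
                 n_early=6, n_late=6, n_head_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)

        def block():
            return nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)

        self.early = nn.ModuleList([block() for _ in range(n_early)])
        self.late = nn.ModuleList([block() for _ in range(n_late)])
        self.head = nn.ModuleList([block() for _ in range(n_head_layers)])
        self.mlm = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids):
        h = self.embed(input_ids)                # (batch, seq, d_model)
        for blk in self.early:
            h = blk(h)
        h_early = h                              # token states after early layers
        for blk in self.late:
            h = blk(h)
        cls_late = h[:, :1]                      # CLS state after the full backbone
        # Condenser head: late CLS + early token states -> masked-token prediction,
        # so the CLS vector must summarize the whole passage.
        x = torch.cat([cls_late, h_early[:, 1:]], dim=1)
        for blk in self.head:
            x = blk(x)
        return self.mlm(x)                       # (batch, seq, vocab) logits for MLM loss


# Toy forward pass; in practice the backbone would be initialized from BERT and,
# after pre-training, only the backbone is kept and fine-tuned as a dense encoder.
logits = CondenserSketch()(torch.randint(0, 30522, (2, 128)))
print(logits.shape)  # torch.Size([2, 128, 30522])
```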

Luyu Gao, Jamie Callan • 2021

Related benchmarks

Task                  | Dataset                       | Metric / Subset | Result | Rank
Passage retrieval     | MS MARCO (dev)                | MRR@10          | 36.6   | 116
Retrieval             | MS MARCO (dev)                | MRR@10          | 0.366  | 84
Information Retrieval | BEIR v1.0.0 (test)            | ArguAna         | 29.8   | 55
Passage retrieval     | Natural Questions (NQ) (test) | Top-20 Accuracy | 83.2   | 45
Passage Ranking       | TREC DL 2019                  | NDCG@10         | 0.698  | 24
Passage retrieval     | MS MARCO (dev)                | MRR@10          | 36.6   | 17
Dense Retrieval       | BEIR zero-shot                | TREC-COVID      | 75     | 13
Dense Retrieval       | Natural Questions (test)      | Recall@10       | 75.62  | 9
Information Retrieval | Natural Questions             | Recall@10       | 79.03  | 9
Passage retrieval     | MS MARCO DL'19                | NDCG@10         | 69.8   | 8