
Multi-Stage Document Ranking with BERT

About

The advent of deep neural networks pre-trained via language modeling tasks has spurred a number of successful applications in natural language processing. This work explores one such popular model, BERT, in the context of document ranking. We propose two variants, called monoBERT and duoBERT, that formulate the ranking problem as pointwise and pairwise classification, respectively. These two models are arranged in a multi-stage ranking architecture to form an end-to-end search system. One major advantage of this design is the ability to trade off quality against latency by controlling the admission of candidates into each pipeline stage, and by doing so, we are able to find operating points that offer a good balance between these two competing metrics. On two large-scale datasets, MS MARCO and TREC CAR, experiments show that our model produces results that are either at or comparable to the state of the art. Ablation studies show the contributions of each component and characterize the latency/quality tradeoff space.
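The pipeline described above can be sketched in a few lines. This is a hedged illustration only: `score_mono` and `score_duo` are stand-in scoring functions (in the paper they are BERT classifiers producing pointwise relevance and pairwise preference scores), and the parameters `k0` and `k1` are the per-stage candidate admission cutoffs that control the quality/latency tradeoff. The pairwise scores here are aggregated by summation, one of the aggregation strategies the paper considers.

```python
def score_mono(query: str, doc: str) -> float:
    # Pointwise relevance score. Stub: token-overlap fraction;
    # in the actual system this would be a monoBERT classifier.
    q, d = set(query.split()), set(doc.split())
    return len(q & d) / (len(q) or 1)

def score_duo(query: str, doc_i: str, doc_j: str) -> float:
    # Pairwise preference: estimated probability that doc_i is more
    # relevant than doc_j. Stub standing in for a duoBERT classifier.
    return 0.5 + 0.5 * (score_mono(query, doc_i) - score_mono(query, doc_j))

def multistage_rank(query: str, candidates: list[str], k0: int = 10, k1: int = 5) -> list[str]:
    # Stage 1 (monoBERT): pointwise rerank, admit top-k0 into the next stage.
    mono_ranked = sorted(candidates, key=lambda d: score_mono(query, d), reverse=True)[:k0]
    # Stage 2 (duoBERT): sum pairwise preferences over the k0 survivors,
    # then return the top-k1 by aggregated score.
    duo_scores = {
        di: sum(score_duo(query, di, dj) for dj in mono_ranked if dj != di)
        for di in mono_ranked
    }
    return sorted(duo_scores, key=duo_scores.get, reverse=True)[:k1]
```

Shrinking `k0` reduces the number of (quadratically many) pairwise duoBERT inferences at some cost in quality, which is the tradeoff knob the abstract refers to.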

Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, Jimmy Lin • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Passage Ranking | MS MARCO (dev) | MRR@10 | 39 | 73
Nugget Coverage Reranking | CRUX-MDS DUC 2004 (test) | nDCG | 83.9 | 18
Nugget Coverage Reranking | NeuCLIR ReportGen 2024 (test) | nDCG | 90.7 | 18
Text Ranking | MS MARCO In-domain suite (TREC DL19, TREC DL20) v1 (dev test) | NDCG@10 (Sparse, BM25, MS MARCO) | 0.44 | 13
Document Reranking | TREC DL | NDCG@10 (DL19) | 70.5 | 13
Document Reranking | BEIR | NDCG@10 (Covid) | 73.45 | 13
Text Ranking | BEIR out-of-domain | Arguana Score | 51.5 | 9
Document Retrieval | MS 300K (test) | MRR@20 | 46.83 | 3
Information Retrieval | Gov 500K (test) | nDCG@5 | 69.53 | 3
Information Retrieval | MS 500K (test) | MRR@20 | 58.62 | 3
