Multi-Stage Document Ranking with BERT
About
The advent of deep neural networks pre-trained via language modeling tasks has spurred a number of successful applications in natural language processing. This work explores one such popular model, BERT, in the context of document ranking. We propose two variants, called monoBERT and duoBERT, that formulate the ranking problem as pointwise and pairwise classification, respectively. These two models are arranged in a multi-stage ranking architecture to form an end-to-end search system. One major advantage of this design is the ability to trade off quality against latency by controlling the admission of candidates into each pipeline stage, and by doing so, we are able to find operating points that offer a good balance between these two competing metrics. On two large-scale datasets, MS MARCO and TREC CAR, experiments show that our model produces results that are either at or comparable to the state of the art. Ablation studies show the contributions of each component and characterize the latency/quality tradeoff space.
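To make the multi-stage design concrete, the sketch below outlines the control flow under stated assumptions: `mono_bert_score` and `duo_bert_score` are hypothetical stand-ins for the fine-tuned pointwise and pairwise models, `k1` is the candidate cutoff that controls admission into the pairwise stage, and a simple sum over pairwise scores is assumed for the final aggregation. It is an illustration of the architecture, not the reference implementation.

```python
# Minimal sketch of the monoBERT -> duoBERT pipeline described above.
# The two scoring callables are hypothetical stand-ins for the fine-tuned
# BERT models; only the candidate-admission and aggregation logic is shown.

from typing import Callable, Dict, List, Tuple


def rerank(
    query: str,
    candidates: List[str],                              # e.g. top-k0 passages from a first-stage retriever
    mono_bert_score: Callable[[str, str], float],       # pointwise: P(relevant | query, doc)
    duo_bert_score: Callable[[str, str, str], float],   # pairwise: P(doc_i more relevant than doc_j)
    k1: int = 50,                                       # candidates admitted to the pairwise stage
) -> List[Tuple[str, float]]:
    # Stage 1 (monoBERT): score each candidate independently and sort.
    pointwise = sorted(
        ((doc, mono_bert_score(query, doc)) for doc in candidates),
        key=lambda x: x[1],
        reverse=True,
    )

    # Admission control: only the top-k1 candidates reach the pairwise stage,
    # trading quality against latency (the pairwise stage is quadratic in k1).
    admitted = [doc for doc, _ in pointwise[:k1]]

    # Stage 2 (duoBERT): pairwise scores aggregated per document
    # (sum aggregation assumed here: s_i = sum over j != i of p_{i,j}).
    aggregated: Dict[str, float] = {
        di: sum(duo_bert_score(query, di, dj) for dj in admitted if dj is not di)
        for di in admitted
    }
    return sorted(aggregated.items(), key=lambda x: x[1], reverse=True)
```

Varying `k1` (and the first-stage cutoff) is what produces the quality/latency operating points discussed in the abstract.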
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Passage Ranking | MS MARCO (dev) | MRR@10 | 39 | 73 |
| Nugget Coverage Reranking | CRUX-MDS DUC 2004 (test) | nDCG | 83.9 | 18 |
| Nugget Coverage Reranking | NeuCLIR ReportGen 2024 (test) | nDCG | 90.7 | 18 |
| Text Ranking | MS MARCO In-domain suite (TREC DL19, TREC DL20) v1 (dev test) | NDCG@10 (Sparse, BM25, MS MARCO) | 0.44 | 13 |
| Document Reranking | TREC DL | NDCG@10 (DL19) | 70.5 | 13 |
| Document Reranking | BEIR | NDCG@10 (Covid) | 73.45 | 13 |
| Text Ranking | BEIR out-of-domain | Arguana Score | 51.5 | 9 |
| Document Retrieval | MS 300K (test) | MRR@20 | 46.83 | 3 |
| Information Retrieval | Gov 500K (test) | nDCG@5 | 69.53 | 3 |
| Information Retrieval | MS 500K (test) | MRR@20 | 58.62 | 3 |