
Document Ranking with a Pretrained Sequence-to-Sequence Model

About

This work proposes a novel adaptation of a pretrained sequence-to-sequence model to the task of document ranking. Our approach is fundamentally different from a commonly-adopted classification-based formulation of ranking, based on encoder-only pretrained transformer architectures such as BERT. We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words", and how the underlying logits of these target words can be interpreted as relevance probabilities for ranking. On the popular MS MARCO passage ranking task, experimental results show that our approach is at least on par with previous classification-based models and can surpass them with larger, more-recent models. On the test collection from the TREC 2004 Robust Track, we demonstrate a zero-shot transfer-based approach that outperforms previous state-of-the-art models requiring in-dataset cross-validation. Furthermore, we find that our approach significantly outperforms an encoder-only model in a data-poor regime (i.e., with few training examples). We investigate this observation further by varying target words to probe the model's use of latent knowledge.
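The core idea above — generating a relevance label as a "target word" and reading its logit as a relevance probability — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes you already have, for each query–document pair, the decoder logits the model assigns to the target words "true" and "false" at the first generated position; the helper names are illustrative.

```python
import math

def relevance_score(true_logit: float, false_logit: float) -> float:
    """Softmax over the two target-word logits; P("true") is the relevance score."""
    m = max(true_logit, false_logit)  # subtract max for numerical stability
    e_true = math.exp(true_logit - m)
    e_false = math.exp(false_logit - m)
    return e_true / (e_true + e_false)

def rank(candidates):
    """Sort (doc_id, true_logit, false_logit) triples by descending relevance."""
    return sorted(
        ((doc_id, relevance_score(t, f)) for doc_id, t, f in candidates),
        key=lambda pair: pair[1],
        reverse=True,
    )
```

For example, `rank([("d1", -1.0, 1.0), ("d2", 3.0, 0.5)])` would place `d2` first, since its "true" logit dominates its "false" logit. In practice the logits would come from running the sequence-to-sequence model on an input such as a query–document pair followed by a relevance prompt.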

Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-hop Question Answering | HotpotQA (test) | - | - | 198 |
| Passage Ranking | MS MARCO (dev) | MRR@10 | 38.3 | 73 |
| Information Retrieval | Scientific QA (base setting) | HitRate@1 | 53.33 | 38 |
| Question Answering | Scientific QA (base setting) | F1 Score | 43.88 | 38 |
| Passage Ranking | NQ | MRR | 50.47 | 29 |
| Passage Ranking | WebQuestions (WQ) | R@10 | 63.97 | 28 |
| Passage Ranking | TREC DL 2019 | R@10 | 100 | 28 |
| Passage Ranking | TREC DL 2020 | R@10 | 100 | 28 |
| Passage Retrieval | Natural Questions (NQ) | Top-10 Accuracy | 66.59 | 28 |
| Attribution | Verifiability-Granular (test) | Attribution Accuracy | 67.43 | 28 |

Showing 10 of 45 rows.

Other info

Code
