
Rethinking the Role of Token Retrieval in Multi-Vector Retrieval

About

Multi-vector retrieval models such as ColBERT [Khattab and Zaharia, 2020] allow token-level interactions between queries and documents, and hence achieve state-of-the-art results on many information retrieval benchmarks. However, their non-linear scoring function cannot be scaled to millions of documents, necessitating a three-stage inference process: retrieving initial candidates via token retrieval, gathering all token vectors of each candidate, and scoring the candidate documents. Because the non-linear scoring function is applied over all token vectors of every candidate document, the inference process is complicated and slow. In this paper, we aim to simplify multi-vector retrieval by rethinking the role of token retrieval. We present XTR, ConteXtualized Token Retriever, which introduces a simple yet novel objective function that encourages the model to retrieve the most important document tokens first. The improved token retrieval allows XTR to rank candidates using only the retrieved tokens rather than all tokens in the document, enabling a newly designed scoring stage that is two to three orders of magnitude cheaper than that of ColBERT. On the popular BEIR benchmark, XTR advances the state of the art by 2.8 nDCG@10 without any distillation. Detailed analysis confirms our decision to revisit the token retrieval stage, as XTR demonstrates much better token retrieval recall than ColBERT.
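The contrast between the two scoring stages described above can be sketched in a few lines of NumPy. The snippet below is a minimal illustration, not the paper's implementation: `colbert_score` is the standard sum-of-max (MaxSim) scoring over all document token vectors, while `xtr_style_score` is an assumed simplification of XTR's idea, where only document tokens retrieved in the first stage contribute and a query token whose retrieved set misses the document contributes an imputed value (here simply 0).

```python
import numpy as np

def colbert_score(Q, D):
    """ColBERT-style sum-of-max (MaxSim) scoring.

    Q: (n_query_tokens, dim) and D: (n_doc_tokens, dim), both L2-normalized.
    Each query token is matched to its most similar document token,
    and the per-query-token maxima are summed.
    """
    sim = Q @ D.T                 # (n_q, n_d) token-level similarities
    return sim.max(axis=1).sum()  # max over doc tokens, sum over query tokens

def xtr_style_score(Q, D, retrieved_mask):
    """Sketch of XTR-style scoring (an assumption, not the exact recipe).

    retrieved_mask: boolean (n_doc_tokens,) marking document tokens that
    were actually returned by the first-stage token retrieval. Only those
    tokens contribute, so no gathering of all document vectors is needed.
    """
    sim = Q @ D.T
    # Mask out document tokens that were not retrieved in stage one.
    sim = np.where(retrieved_mask[None, :], sim, -np.inf)
    per_query = sim.max(axis=1)
    # If a query token retrieved no token from this document, impute its
    # missing similarity with 0 (a stand-in for XTR's imputation step).
    per_query = np.where(np.isfinite(per_query), per_query, 0.0)
    return per_query.sum()
```

When every document token happens to be retrieved, the two scores coincide; the savings come from the typical case where only a small subset of tokens per candidate is retrieved.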

Jinhyuk Lee, Zhuyun Dai, Sai Meher Karthik Duddu, Tao Lei, Iftekhar Naim, Ming-Wei Chang, Vincent Y. Zhao · 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Passage retrieval | TriviaQA (test) | Top-100 Accuracy | 87.1 | 67 |
| Passage retrieval | Natural Questions (NQ) (test) | Top-20 Accuracy | 84.9 | 45 |
| Zero-shot Information Retrieval | BEIR | TREC-COVID NDCG@10 (zero-shot) | 78.9 | 27 |
| Passage retrieval | SQuAD (test) | Top-100 Accuracy | 87.6 | 22 |
| Retrieval | Entity Questions (test) | Top-100 Retrieval Accuracy | 85.9 | 20 |
| Information Retrieval | MS MARCO (in-domain) | NDCG@10 | 0.466 | 18 |
| Multilingual Document Retrieval | MIRACL (evaluation set) | -- | -- | 14 |
| Information Retrieval | BEIR v1.0 (test) | ARCD Score | 95.6 | 10 |
| Information Retrieval | LoTTE Search (zero-shot) | Writing Score | 83.3 | 2 |
| Information Retrieval | LoTTE Forum (zero-shot) | Writing Score | 83.4 | 2 |
