A Two-Stage Adaptation of Large Language Models for Text Ranking
About
Text ranking is a critical task in information retrieval. Recent advances in pre-trained language models (PLMs), especially large language models (LLMs), open new opportunities for text ranking. While supervised fine-tuning (SFT) with ranking data has been widely explored to better align PLMs with text-ranking objectives, previous studies have focused primarily on encoder-only and encoder-decoder PLMs; research on leveraging decoder-only LLMs for text ranking remains scarce. A notable exception is RankLLaMA, which uses direct SFT to explore LLaMA's potential for text ranking. In this work, we propose a two-stage progressive paradigm to better adapt LLMs to text ranking. First, we conduct continual pre-training (CPT) of LLMs on a large weakly-supervised corpus. Second, we perform SFT with an improved optimization strategy that builds on RankLLaMA. Experimental results on multiple benchmarks show that our approach outperforms previous methods in both in-domain and out-of-domain scenarios.
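The two stages lend themselves to a compact sketch. Below is a minimal, illustrative PyTorch/Hugging Face version, not the authors' released code: the base checkpoint (`meta-llama/Llama-2-7b-hf`), the query/document serialization, and the loss details are assumptions, with stage two loosely following the RankLLaMA recipe (a linear head over the last token's final hidden state, trained contrastively against hard negatives).

```python
# Illustrative sketch of the two-stage adaptation; hyperparameters, corpus
# construction, and the paper's "improved optimization strategy" are omitted.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# ---- Stage 1: continual pre-training (CPT) on a weakly-supervised corpus ----
def cpt_step(batch_texts, optimizer):
    """One CPT step: standard next-token (causal LM) loss on raw text."""
    enc = tokenizer(batch_texts, padding=True, truncation=True,
                    max_length=512, return_tensors="pt")
    labels = enc["input_ids"].masked_fill(enc["attention_mask"] == 0, -100)
    loss = model(**enc, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# ---- Stage 2: SFT as a pointwise reranker (RankLLaMA-style) ----
score_head = torch.nn.Linear(model.config.hidden_size, 1)

def score(query, passage):
    """Relevance score: linear head over the last token's final hidden state."""
    text = f"query: {query} document: {passage}"  # serialization is an assumption
    enc = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    hidden = model(**enc, output_hidden_states=True).hidden_states[-1]
    return score_head(hidden[:, -1, :]).squeeze(-1)

def sft_step(query, positive, negatives, optimizer):
    """One SFT step: contrastive loss over one positive and hard negatives."""
    scores = torch.cat([score(query, p) for p in [positive, *negatives]])
    loss = F.cross_entropy(scores.unsqueeze(0), torch.tensor([0]))  # pos = index 0
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```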
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Text Ranking | MS MARCO v1 in-domain suite (TREC DL19, TREC DL20), dev set | NDCG@10 (Sparse, BM25, MS MARCO) | 0.48 | 13 |
| Text Ranking | BEIR ArguAna (out-of-domain) | Score | 55.6 | 9 |
| Text Ranking | BEIR ArguAna (out-of-domain) | Score | 56.8 | 5 |
| Text Ranking | BEIR NFCorpus (out-of-domain) | nDCG@10 | 40.2 | 4 |
| Text Ranking | BEIR DBPedia (out-of-domain) | nDCG@10 | 49.2 | 4 |
| Text Ranking | BEIR SciFact (out-of-domain) | nDCG@10 | 78.3 | 4 |
| Text Ranking | BEIR COVID (out-of-domain) | nDCG@10 | 84.0 | 4 |
| Text Ranking | BEIR Touche (out-of-domain) | nDCG@10 | 32.0 | 4 |
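Most BEIR rows above report nDCG@10. For reference, here is a self-contained sketch of one common formulation of the metric (exponential gain; evaluation toolkits such as trec_eval differ in small details), run on made-up relevance grades:

```python
import math

def ndcg_at_k(relevances, k=10):
    """nDCG@k for one query: `relevances` lists the graded relevance of the
    returned documents in ranked order (exponential-gain formulation)."""
    def dcg(rels):
        return sum((2**r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))  # best possible ordering
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Toy example: relevance grades of the top-ranked documents for one query.
print(round(ndcg_at_k([3, 2, 3, 0, 1, 2], k=10), 4))
```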