
ProRank: Prompt Warmup via Reinforcement Learning for Small Language Models Reranking

About

Reranking is fundamental to information retrieval and retrieval-augmented generation, with recent Large Language Models (LLMs) significantly advancing reranking quality. Most current works rely on large-scale LLMs (>7B parameters), incurring high computational costs. Small Language Models (SLMs) offer a promising alternative because of their computational efficiency. However, our preliminary quantitative analysis reveals key limitations of SLMs: their representation space is narrow, leading to reduced expressiveness, and they struggle to understand task prompts without fine-tuning. To address these issues, we introduce a novel two-stage training approach, ProRank, for SLM-based document reranking. We propose using reinforcement learning to improve the understanding of task prompts. Additionally, we introduce fine-grained score learning to enhance representation expressiveness and further improve document reranking quality. Extensive experiments suggest that ProRank consistently outperforms both the most advanced open-source and proprietary reranking models. Notably, our 0.5B ProRank even surpasses powerful LLM reranking models on the BEIR benchmark, establishing that properly trained SLMs can achieve superior document reranking performance while maintaining computational efficiency.
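To make the setup concrete, here is a minimal sketch of pointwise LLM-style reranking, where relevance is read off as the probability of a "yes" continuation given a (query, document) prompt. This is a generic illustration of the paradigm, not ProRank's actual training procedure; `toy_logits` is a hypothetical stand-in (using word overlap) for a real SLM forward pass.

```python
import math

def toy_logits(query: str, document: str) -> tuple[float, float]:
    """Hypothetical stand-in for a model call: returns (yes_logit, no_logit).
    A real reranker would obtain these logits from an SLM/LLM; here we use
    simple word overlap as a proxy signal."""
    q, d = set(query.lower().split()), set(document.lower().split())
    overlap = len(q & d)
    return float(overlap), float(len(q) - overlap)

def relevance_score(query: str, document: str) -> float:
    """Softmax probability of the 'yes' token, a common fine-grained score."""
    yes, no = toy_logits(query, document)
    return math.exp(yes) / (math.exp(yes) + math.exp(no))

def rerank(query: str, documents: list[str]) -> list[str]:
    """Sort candidate documents by descending relevance score."""
    return sorted(documents, key=lambda d: relevance_score(query, d), reverse=True)

docs = [
    "The capital of France is Paris.",
    "Reranking improves retrieval quality.",
    "Paris is known for the Eiffel Tower.",
]
print(rerank("capital of France", docs)[0])
# → "The capital of France is Paris."
```

Scoring each document with a continuous probability rather than a binary label is what makes fine-grained score learning possible: the model's output space can then be shaped to separate near-relevant from clearly relevant documents.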

Xianming Li, Aamir Shakir, Rui Huang, Tsz-fung Andrew Lee, Julius Lipp, Benjamin Clavié, Jing Li • 2025

Related benchmarks

Task                 Dataset       Metric    Result   Rank
Document Reranking   BEIR          SF        80.87    13
Code Retrieval       COSQA Code    NDCG@10   32.05    8
Document Retrieval   news EN       NDCG@10   49.06    8
Document Retrieval   signal EN     NDCG@10   34.12    8
Document Retrieval   Covid CN      NDCG@10   89.78    8
Document Retrieval   DuReader CN   NDCG@10   78.54    8
Document Retrieval   Robust04 EN   NDCG@10   54.32    8
Document Reranking   SciFact       NDCG@10   80.15    4
