
REARANK: Reasoning Re-ranking Agent via Reinforcement Learning

About

We present REARANK, a large language model (LLM)-based listwise reasoning reranking agent. REARANK explicitly reasons before reranking, significantly improving both performance and interpretability. Leveraging reinforcement learning and data augmentation, REARANK achieves substantial improvements over baseline models across popular information retrieval benchmarks, while requiring only 179 annotated samples. Built on top of Qwen2.5-7B, our REARANK-7B demonstrates performance comparable to GPT-4 on both in-domain and out-of-domain benchmarks, and even surpasses GPT-4 on the reasoning-intensive BRIGHT benchmark. These results underscore the effectiveness of our approach and highlight how reinforcement learning can enhance LLM reasoning capabilities in reranking.
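To make the listwise setup concrete, below is a minimal sketch of the two pieces any listwise LLM reranker needs: a prompt that presents the candidate passages as a numbered list and asks the model to reason before emitting a ranking, and a parser that recovers a permutation from the model's free-form answer. The prompt wording and the `[2] > [1] > [3]` output convention are illustrative assumptions, not the actual REARANK template.

```python
import re

def build_listwise_prompt(query: str, passages: list[str]) -> str:
    """Format a query and candidate passages as a listwise reranking prompt.

    Hypothetical template -- the real REARANK prompt is not reproduced here.
    """
    lines = [
        "Rank the following passages by relevance to the query.",
        f"Query: {query}",
        "Reason step by step first, then output a ranking like [2] > [1] > [3].",
        "",
    ]
    for i, passage in enumerate(passages, 1):
        lines.append(f"[{i}] {passage}")
    return "\n".join(lines)

def parse_ranking(model_output: str, n: int) -> list[int]:
    """Extract a permutation of 1..n from an answer such as '[2] > [3] > [1]'.

    Duplicates and out-of-range ids are dropped; passages the model omitted
    are appended at the end in their original order, so the result is always
    a full permutation.
    """
    ids = [int(m) for m in re.findall(r"\[(\d+)\]", model_output)]
    seen: set[int] = set()
    order: list[int] = []
    for i in ids:
        if 1 <= i <= n and i not in seen:
            seen.add(i)
            order.append(i)
    order += [i for i in range(1, n + 1) if i not in seen]
    return order
```

Keeping the parser permissive matters in practice: an RL-trained model can still emit malformed rankings early in training, and the reward signal is easier to compute if every rollout yields a valid permutation.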

Le Zhang, Bo Wang, Xipeng Qiu, Siva Reddy, Aishwarya Agrawal • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Question Answering | 2Wiki | F1 | 29.4 | 152
Multi-hop Question Answering | 2Wiki | Exact Match | 16.6 | 152
Question Answering | HotpotQA | F1 | 43.2 | 128
Question Answering | MuSiQue | Exact Match | 5.6 | 84
Multi-hop Question Answering | HotpotQA | F1 | 42.0 | 79
Information Retrieval | BRIGHT 1.0 (test) | nDCG@10 (Avg) | 24.6 | 35
Long-context Memory Retrieval and Reasoning | PersonaMem 128K | F1 | 22.95 | 20
Long-context Memory Retrieval and Reasoning | WebDancer 128K | F1 | 37.23 | 20
Long-context Memory Retrieval and Reasoning | ZH4O 128K | F1 | 49.02 | 20
Long-context Memory Retrieval and Reasoning | LoCoMo 32K | F1 | 39.19 | 20

(Showing 10 of 26 rows.)
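The BRIGHT row above is scored with nDCG@10, the standard ranking-quality metric for retrieval benchmarks. As a reference for how that number is computed, here is a simplified sketch that evaluates nDCG over the retrieved list only (a full evaluation would normalize against all relevant documents in the corpus, not just those retrieved):

```python
import math

def dcg_at_k(relevances: list[float], k: int = 10) -> float:
    """Discounted cumulative gain: graded relevance discounted by log2 of rank.

    `relevances` holds the relevance grades of the retrieved documents in
    ranked order (position 0 = top-ranked document).
    """
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[float], k: int = 10) -> float:
    """Normalize DCG by the DCG of the ideal (relevance-sorted) ordering."""
    idcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / idcg if idcg > 0 else 0.0
```

A ranking that already sorts documents by relevance scores 1.0; any misordering of relevant documents is penalized more heavily the closer it is to the top of the list, which is why reranking the first few positions matters so much for this metric.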
