LimRank: Less is More for Reasoning-Intensive Information Reranking
About
Existing approaches typically rely on large-scale fine-tuning to adapt LLMs for information reranking tasks, which is computationally expensive. In this work, we demonstrate that modern LLMs can be effectively adapted using only minimal, high-quality supervision. To enable this, we design LIMRANK-SYNTHESIZER, a reusable and open-source pipeline for generating diverse, challenging, and realistic reranking examples. Using this synthetic data, we fine-tune our reranker model, LIMRANK. We evaluate LIMRANK on two challenging benchmarks, i.e., BRIGHT for reasoning-intensive retrieval and FollowIR for instruction-following retrieval. Our experiments demonstrate that LIMRANK achieves competitive performance, while being trained on less than 5% of the data typically used in prior work. Further ablation studies demonstrate the effectiveness of LIMRANK-SYNTHESIZER and the strong generalization capabilities of LIMRANK across downstream tasks, including scientific literature search and retrieval-augmented generation for knowledge-intensive problem solving.
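To make the reranking setup concrete, here is a minimal sketch of pointwise reranking. The `score` function below is a toy lexical-overlap stand-in (a hypothetical placeholder, not part of this work): the actual LIMRANK model would instead score each (query, passage) pair with the fine-tuned LLM.

```python
def score(query: str, passage: str) -> float:
    # Toy stand-in scorer: fraction of query tokens that appear in the
    # passage. A fine-tuned LLM reranker would replace this function.
    q_tokens = set(query.lower().split())
    p_tokens = set(passage.lower().split())
    return len(q_tokens & p_tokens) / max(len(q_tokens), 1)

def rerank(query: str, passages: list[str]) -> list[str]:
    # Sort candidate passages by descending relevance score.
    return sorted(passages, key=lambda p: score(query, p), reverse=True)

query = "why does ice float on water"
passages = [
    "A history of polar expeditions.",
    "Ice floats on water because solid ice is less dense than liquid water.",
]
print(rerank(query, passages)[0])
```

In the full pipeline, a first-stage retriever supplies the candidate passages and the reranker reorders them before evaluation or downstream generation.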
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Information Retrieval | Scientific QA Base setting | Hit Rate@1 | 52.05 | 38 |
| Question Answering | Scientific QA Base setting | F1 Score | 43.12 | 38 |
| Reranking | SciRAG-SSLI hard 1.0 (test) | Hit Rate@1 | 35.14 | 19 |
| Reranking | SciRAG-SSLI easy 1.0 (test) | Hit Rate@1 | 14.86 | 19 |
| Scientific Question Answering | SciRAG-SSLI easy 1.0 (test) | F1 Score | 33.19 | 19 |
| Scientific Question Answering | SciRAG-SSLI hard 1.0 (test) | F1 Score | 37.44 | 19 |
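For reference, the two metrics in the table can be sketched under their usual definitions (this is a generic illustration, not the benchmark's official scorer): Hit Rate@1 is the fraction of queries whose top-ranked document is relevant, and token-level F1 compares a predicted answer with a gold answer.

```python
from collections import Counter

def hit_rate_at_1(rankings: list[list[str]], gold: list[set[str]]) -> float:
    # Fraction of queries whose top-ranked document id is in the gold set.
    hits = sum(1 for ranked, rel in zip(rankings, gold) if ranked[0] in rel)
    return hits / len(rankings)

def token_f1(prediction: str, reference: str) -> float:
    # Harmonic mean of token-level precision and recall.
    pred, ref = prediction.lower().split(), reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Scores in the table are reported on a 0-100 scale, i.e., these fractions multiplied by 100.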