
Dynamic Ranked List Truncation for Reranking Pipelines via LLM-generated Reference-Documents

About

Large Language Models (LLMs) are widely used for reranking, but their computational overhead and large context lengths remain challenging. Efficient reranking usually involves selecting a subset of the first-stage ranked list, a step known as ranked list truncation (RLT); the truncated list is then processed by a reranker. For LLM rerankers, the ranked list is often partitioned and processed sequentially in batches to reduce the context length. Both of these steps rely on hyperparameters and topic-agnostic heuristics. Recently, LLMs have been shown to be effective for relevance judgment. Building on this, we propose that LLMs can generate reference documents that act as a pivot between relevant and non-relevant documents in a ranked list. We propose methods that use these generated reference documents both for RLT and for efficient listwise reranking. During reranking, we process the ranked list either in parallel batches of non-overlapping windows or in overlapping windows with adaptive strides, improving on the existing fixed-stride setup. The generated reference documents are also shown to improve existing efficient listwise reranking frameworks. Experiments on TREC Deep Learning benchmarks show that our approach outperforms existing RLT-based approaches. In-domain and out-of-domain benchmarks demonstrate that our proposed methods accelerate LLM-based listwise reranking by up to 66% compared to existing approaches. This work not only establishes a practical paradigm for efficient LLM-based reranking but also provides insight into the capability of LLMs to generate semantically controlled documents from relevance signals.
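The pivot idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: it assumes each retrieved document carries a first-stage relevance score, that the LLM-generated reference document has been scored the same way, and that truncation keeps only documents ranked at or above the reference. The non-overlapping-window partitioning for parallel reranking is sketched alongside it; the function names are invented for this example.

```python
def truncate_at_reference(ranked, reference_score):
    # Hypothetical RLT: cut the list at the first document that
    # scores below the pivot (the LLM-generated reference document).
    for i, (_doc_id, score) in enumerate(ranked):
        if score < reference_score:
            return ranked[:i]
    return ranked  # no document fell below the pivot


def parallel_windows(ranked, window_size):
    # Partition the truncated list into non-overlapping windows,
    # each of which can be sent to the listwise reranker in parallel.
    return [ranked[i:i + window_size] for i in range(0, len(ranked), window_size)]


hits = [("d1", 0.91), ("d2", 0.74), ("d3", 0.42), ("d4", 0.18)]
kept = truncate_at_reference(hits, reference_score=0.50)
batches = parallel_windows(kept, window_size=2)
```

With the toy scores above, `kept` holds only `d1` and `d2`, and `batches` contains a single window of those two documents; the adaptive-stride overlapping variant described in the abstract would instead vary the step between consecutive windows.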

Nilanjan Sinhababu, Soumedhik Bharati, Debasis Ganguly, Pabitra Mitra • 2026

Related benchmarks

Task                   Dataset                      Metric    Result   Rank
Document Ranking       TREC DL Track 2019 (test)    nDCG@10   71.1     133
Document Ranking       TREC DL Track 2020 (test)    nDCG@10   0.685    63
Information Retrieval  SciFact BEIR (test)          nDCG@10   72       31
Information Retrieval  BEIR COVID v1 (test)         nDCG@10   63.5     26
Information Retrieval  DBPedia BEIR (test)          nDCG@10   42.3     21
Information Retrieval  Touche BEIR 2020 (test)      nDCG@10   30.9     15
Reranking              COVID                        nDCG      69       12
Reranking              SciFact                      nDCG      76.4     12
Reranking              Touché                       nDCG      30.1     12
Reranking              DBpedia                      nDCG      48       12

(Showing 10 of 12 rows.)
