Dynamic Ranked List Truncation for Reranking Pipelines via LLM-generated Reference-Documents
About
Large Language Models (LLMs) have been widely used for reranking, but computational overhead and long context lengths remain challenging for LLM rerankers. Efficient reranking usually involves selecting a subset of the first-stage ranked list, known as ranked list truncation (RLT); the truncated list is then processed by a reranker. For LLM rerankers, the ranked list is often partitioned and processed sequentially in batches to reduce the context length. Both of these steps involve hyperparameters and topic-agnostic heuristics. Recently, LLMs have been shown to be effective for relevance judgment. Analogously, we propose that LLMs can generate reference documents that act as a pivot between relevant and non-relevant documents in a ranked list. We propose methods that use these generated reference documents for RLT as well as for efficient listwise reranking. During reranking, we process the ranked list either in parallel batches of non-overlapping windows or in overlapping windows with adaptive strides, improving on the existing fixed-stride setup. The generated reference documents are also shown to improve existing efficient listwise reranking frameworks. Experiments on TREC Deep Learning benchmarks show that our approach outperforms existing RLT-based approaches. In-domain and out-of-domain benchmarks demonstrate that our proposed methods accelerate LLM-based listwise reranking by up to 66% compared to existing approaches. This work not only establishes a practical paradigm for efficient LLM-based reranking but also provides insight into the capability of LLMs to generate semantically controlled documents from relevance signals.
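The pivot idea can be sketched as follows. In this minimal illustration (all names are hypothetical, not the paper's actual implementation), the generated reference document is scored by the same first-stage retriever as the candidates, and the ranked list is cut at the first candidate that scores below it:

```python
# Hypothetical sketch of reference-document-based ranked list truncation (RLT).
# Assumption: `ranked_scores` are first-stage retrieval scores in descending
# order, and `reference_score` is the score of an LLM-generated reference
# document under the same retriever.

def truncate_at_reference(ranked_scores, reference_score):
    """Keep only candidates scoring at least as high as the reference
    document, which acts as a pivot between relevant and non-relevant docs."""
    for i, score in enumerate(ranked_scores):
        if score < reference_score:
            return ranked_scores[:i]
    return ranked_scores  # reference pivot falls below the whole list

# Example: six first-stage candidates; only the top three pass the pivot
# and would be forwarded to the LLM reranker.
scores = [0.92, 0.85, 0.71, 0.40, 0.33, 0.10]
truncated = truncate_at_reference(scores, reference_score=0.5)
```

Because the cut point is derived per query from the reference document's score rather than a fixed depth, the truncation is topic-aware rather than a topic-agnostic heuristic.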
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Document Ranking | TREC DL Track 2019 (test) | nDCG@10 | 71.1 | 133 |
| Document Ranking | TREC DL Track 2020 (test) | nDCG@10 | 0.685 | 63 |
| Information Retrieval | SciFact BEIR (test) | nDCG@10 | 72 | 31 |
| Information Retrieval | BEIR COVID v1 (test) | nDCG@10 | 63.5 | 26 |
| Information Retrieval | DBPedia BEIR (test) | nDCG@10 | 42.3 | 21 |
| Information Retrieval | Touche BEIR 2020 (test) | nDCG@10 | 30.9 | 15 |
| Reranking | COVID | nDCG | 69 | 12 |
| Reranking | SciFact | nDCG | 76.4 | 12 |
| Reranking | Touché | nDCG | 30.1 | 12 |
| Reranking | DBpedia | nDCG | 48 | 12 |