Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers
About
Information retrieval (IR) systems have played a vital role in modern digital life and have cemented their continued usefulness in the new era of generative AI via retrieval-augmented generation. With strong language processing capabilities and remarkable versatility, large language models (LLMs) have become popular choices for zero-shot re-ranking in IR systems. To date, LLM-based re-ranking methods have relied on strong generative capabilities, which restricts their use to either specialized or powerful proprietary models. Given these restrictions, we ask: is autoregressive generation necessary and optimal for LLMs to perform re-ranking? We hypothesize that there are abundant signals relevant to re-ranking within LLMs that may not be used to their full potential via generation. To leverage such signals more directly, we propose in-context re-ranking (ICR), a novel method that exploits the change in attention patterns caused by the search query to perform accurate and efficient re-ranking. To mitigate the intrinsic biases of LLMs, we propose a calibration method using a content-free query. Because no generation is involved, ICR requires only two ($O(1)$) forward passes to re-rank $N$ documents, making it substantially more efficient than generative re-ranking methods that require at least $O(N)$ forward passes. Our novel design also enables ICR to be applied to any LLM without specialized training while guaranteeing a well-formed ranking. Extensive experiments with two popular open-weight LLMs on standard single-hop and multi-hop information retrieval benchmarks show that ICR outperforms RankGPT while cutting latency by more than 60% in practice. Through detailed analyses, we show that ICR's performance is especially strong on tasks that require more complex re-ranking signals. Our findings call for further exploration of novel ways of utilizing open-weight LLMs beyond text generation.
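The two-pass scheme described above can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a Hugging Face causal LM, aggregates attention by simply summing over all layers and heads (the paper's aggregation scheme may differ), and uses `"N/A"` as an example content-free query. The model name, prompt template, and function names are all placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder: any open-weight LLM
tok = AutoTokenizer.from_pretrained(MODEL)
# "eager" attention so that per-head attention weights are actually returned.
model = AutoModelForCausalLM.from_pretrained(MODEL, attn_implementation="eager")
model.eval()

def attention_scores(docs: list[str], query: str) -> torch.Tensor:
    """One forward pass: score each document by the total attention mass
    that the query tokens place on that document's tokens."""
    ids, spans = [], []
    for doc in docs:
        doc_ids = tok(doc + "\n", add_special_tokens=False).input_ids
        spans.append((len(ids), len(ids) + len(doc_ids)))  # token span of this doc
        ids += doc_ids
    q_start = len(ids)  # query tokens are appended after all documents
    ids += tok("Query: " + query, add_special_tokens=False).input_ids

    with torch.no_grad():
        out = model(torch.tensor([ids]), output_attentions=True)
    # out.attentions: one (1, n_heads, seq, seq) tensor per layer;
    # sum over layers and heads -> a single (seq, seq) attention map.
    attn = torch.stack(out.attentions).sum(dim=(0, 2))[0]
    # Attention flowing from all query positions into each document's span.
    return torch.tensor([attn[q_start:, s:e].sum().item() for s, e in spans])

def icr_rerank(docs: list[str], query: str, content_free: str = "N/A") -> list[str]:
    """Two forward passes total: calibrate the real-query scores by
    subtracting scores from a content-free query to offset intrinsic bias."""
    scores = attention_scores(docs, query) - attention_scores(docs, content_free)
    return [docs[i] for i in scores.argsort(descending=True).tolist()]
```

Because the ranking comes from sorting a score vector rather than parsing generated text, the output is a well-formed permutation of the input documents by construction, and the cost stays at two forward passes regardless of how many documents are in the context.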
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Abstract Generation | LongLaMP | ROUGE-1 | 41.9 | 32 |
| Citation Recommendation | LaMP Citation | Accuracy | 71.6 | 24 |
| Movie Recommendation | LaMP Movie | Accuracy | 56.8 | 24 |
| Product Rating Prediction | LaMP Rating | MAE | 0.238 | 24 |
| Scholarly Abstract Generation | LaMP Scholar | ROUGE-1 | 44.3 | 24 |
| News Headline Generation | LaMP News | ROUGE-1 | 18.3 | 24 |
| Tweet Paraphrasing/Generation | LaMP Tweet | ROUGE-1 | 38.6 | 24 |
| Re-ranking | BEIR (test) | NQ | 54 | 23 |
| End-to-End Performance | LongMemEval | Top-5 Recall | 59.3 | 20 |
| End-to-End Performance | Clipper | Top-3 Recall | 43.8 | 20 |