Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers

About

Information retrieval (IR) systems have played a vital role in modern digital life and have cemented their continued usefulness in this new era of generative AI via retrieval-augmented generation. With strong language processing capabilities and remarkable versatility, large language models (LLMs) have become popular choices for zero-shot re-ranking in IR systems. To date, LLM-based re-ranking methods have relied on strong generative capabilities, which restricts their use to either specialized or powerful proprietary models. Given these restrictions, we ask: is autoregressive generation necessary and optimal for LLMs to perform re-ranking? We hypothesize that there are abundant signals relevant to re-ranking within LLMs that might not be used to their full potential via generation. To more directly leverage such signals, we propose in-context re-ranking (ICR), a novel method that leverages the change in attention pattern caused by the search query for accurate and efficient re-ranking. To mitigate the intrinsic biases in LLMs, we propose a calibration method using a content-free query. Due to the absence of generation, ICR only requires two ($O(1)$) forward passes to re-rank $N$ documents, making it substantially more efficient than generative re-ranking methods that require at least $O(N)$ forward passes. Our novel design also enables ICR to be applied to any LLM without specialized training while guaranteeing a well-formed ranking. Extensive experiments with two popular open-weight LLMs on standard single-hop and multi-hop information retrieval benchmarks show that ICR outperforms RankGPT while cutting the latency by more than 60% in practice. Through detailed analyses, we show that ICR's performance is especially strong on tasks that require more complex re-ranking signals. Our findings call for further exploration of novel ways of utilizing open-weight LLMs beyond text generation.

Shijie Chen, Bernal Jiménez Gutiérrez, Yu Su • 2024
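
The abstract is enough to sketch the core mechanism. Below is a minimal Python sketch (ours, not the authors' released code) of the two-pass idea using Hugging Face transformers: one forward pass scores each document by the attention its tokens receive from the real query's tokens, and a second pass with a content-free query is subtracted out for calibration. The model checkpoint, prompt format, attention aggregation (mean over layers and heads), per-document length normalization, and the "N/A" content-free query are all assumptions, not the paper's exact choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any open-weight decoder-only LLM should work; this checkpoint is an assumption.
# "eager" attention is needed so output_attentions returns real weights.
MODEL = "mistralai/Mistral-7B-Instruct-v0.2"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, attn_implementation="eager")
model.eval()

def build_inputs(query: str, docs: list[str]):
    """Concatenate documents followed by the query, tracking each
    document's token span so attention can be attributed to it."""
    ids, spans = [tok.bos_token_id], []
    for doc in docs:
        piece = tok(doc + "\n", add_special_tokens=False)["input_ids"]
        spans.append((len(ids), len(ids) + len(piece)))
        ids += piece
    ids += tok(f"Query: {query}", add_special_tokens=False)["input_ids"]
    return torch.tensor([ids]), spans

def attention_scores(query: str, docs: list[str]) -> torch.Tensor:
    """One forward pass: aggregate attention flowing from the query's
    tokens back to each document's tokens (mean over layers and heads,
    length-normalized per document -- this aggregation is an assumption)."""
    input_ids, spans = build_inputs(query, docs)
    with torch.no_grad():
        out = model(input_ids=input_ids, output_attentions=True)
    # out.attentions: one (1, heads, seq, seq) tensor per layer
    att = torch.stack(out.attentions).mean(dim=(0, 2))[0]  # (seq, seq)
    q_rows = att[spans[-1][1]:]  # rows for the query tokens only
    return torch.tensor([q_rows[:, s:e].sum() / (e - s) for s, e in spans])

def icr_rank(query: str, docs: list[str], content_free: str = "N/A") -> list[int]:
    """Two forward passes in total, independent of the number of documents:
    scores under the real query minus scores under a content-free query,
    which calibrates away biases such as position and document length."""
    scores = attention_scores(query, docs) - attention_scores(content_free, docs)
    return scores.argsort(descending=True).tolist()

# Usage: icr_rank("who wrote Hamlet?", passages) returns passage indices
# ordered from most to least relevant, with no autoregressive decoding.
```

Because no tokens are generated, the ranking is well-formed by construction: every document receives a score, so there are no parsing failures or omissions of the kind generative list-wise re-rankers can produce.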

Related benchmarks

Task                          | Dataset       | Result         | Rank
------------------------------|---------------|----------------|-----
Citation Control              | CITECONTROL   | Re Score: 100  | 54
Citation Attributability      | Transfer      | QA Score: 71   | 54
Multi-hop QA Retrieval        | MuSiQue       | R@2: 45.7      | 36
Abstract Generation           | LongLaMP      | ROUGE-1: 41.9  | 32
Citation Recommendation       | LaMP Citation | Accuracy: 71.6 | 24
Movie Recommendation          | LaMP Movie    | Accuracy: 56.8 | 24
Product Rating Prediction     | LaMP Rating   | MAE: 0.238     | 24
Scholarly Abstract Generation | LaMP Scholar  | ROUGE-1: 44.3  | 24
News Headline Generation      | LaMP News     | ROUGE-1: 18.3  | 24
Tweet Paraphrasing/Generation | LaMP Tweet    | ROUGE-1: 38.6  | 24

(Showing 10 of 19 rows.)
