
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models

About

We propose a novel zero-shot document ranking approach based on Large Language Models (LLMs): the Setwise prompting approach. Our approach complements existing prompting approaches for LLM-based zero-shot ranking: Pointwise, Pairwise, and Listwise. Through a first-of-its-kind comparative evaluation within a consistent experimental framework, considering factors such as model size, token consumption, and latency, we show that existing approaches are inherently characterised by trade-offs between effectiveness and efficiency. Pointwise approaches score high on efficiency but suffer from poor effectiveness; conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. Our Setwise approach instead reduces both the number of LLM inferences and the amount of prompt tokens consumed during ranking, compared to previous methods. This significantly improves the efficiency of LLM-based zero-shot ranking while retaining high ranking effectiveness. We make our code and results publicly available at https://github.com/ielab/llm-rankers.
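The core idea above is that a single Setwise prompt asks the LLM to pick the most relevant passage out of a *set* of candidates, so each sorting step needs one inference instead of one inference per pair. The sketch below illustrates this with a generic `llm` callable; the prompt wording, the letter-label scheme, and the selection-style loop are illustrative assumptions, not the authors' exact implementation (see the repository above for that).

```python
# Illustrative sketch of Setwise zero-shot ranking.
# Assumptions (not from the paper): the prompt wording, the A/B/C label
# scheme, the `llm(prompt) -> label` callable, and the selection-style loop.

def setwise_prompt(query, docs):
    """Build one prompt asking the LLM to pick the most relevant passage."""
    lines = [f"Passage {chr(65 + i)}: {d}" for i, d in enumerate(docs)]
    return (
        f"Query: {query}\n"
        + "\n".join(lines)
        + "\nWhich passage is most relevant to the query? "
          "Answer with the passage label only."
    )

def setwise_rank(query, docs, llm, set_size=3):
    """Rank `docs` for `query` by repeatedly selecting the best candidate.

    Each comparison step puts up to `set_size` passages into ONE prompt,
    versus one LLM call per pair in Pairwise prompting, so fewer
    inferences are needed overall.
    """
    remaining = list(docs)
    ranked = []
    while remaining:
        best = remaining[0]
        # Compare the current best against the other candidates,
        # (set_size - 1) new candidates per LLM call.
        for i in range(1, len(remaining), set_size - 1):
            group = [best] + remaining[i:i + set_size - 1]
            answer = llm(setwise_prompt(query, group))
            best = group[ord(answer.strip()[0]) - 65]  # 'A' -> 0, 'B' -> 1, ...
        ranked.append(best)
        remaining.remove(best)
    return ranked
```

With `set_size=2` this degenerates to Pairwise comparisons; larger sets trade prompt length for fewer LLM calls, which is the efficiency lever the abstract describes.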

Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon • 2023

Related benchmarks

Task                      | Dataset                            | Result           | Rank
Ranking                   | BEIR selected subset v1.0.0 (test) | TREC-COVID: 81.6 | 38
Document Ranking          | TREC DL Track 2020 (test)          | nDCG@10: 0.6531  | 26
Nugget Coverage Reranking | NeuCLIR ReportGen 2024 (test)      | nDCG: 91         | 18
Nugget Coverage Reranking | CRUX-MDS DUC 2004 (test)           | nDCG: 75.3       | 18
