
Explain then Rank: Scale Calibration of Neural Rankers Using Natural Language Explanations from LLMs

About

In search settings, calibrating the scores produced during ranking to quantities such as click-through rates or relevance levels enhances a system's usefulness and trustworthiness for downstream users. While previous research has improved this notion of calibration for low-complexity learning-to-rank models, the larger data demands and parameter counts of modern neural text rankers create unique obstacles that hamper the efficacy of methods designed for the learning-to-rank setting. This paper proposes exploiting large language models (LLMs) to provide relevance and uncertainty signals for these neural text rankers, producing scale-calibrated scores through Monte Carlo sampling of natural language explanations (NLEs). Our approach transforms the neural ranking task from ranking textual query-document pairs to ranking the corresponding synthesized NLEs. Comprehensive experiments on two popular document ranking datasets show that the NLE-based calibration approach consistently outperforms past calibration methods and LLM-based methods on ranking, calibration, and query performance prediction tasks.
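The core idea above (Monte Carlo sampling of NLEs to obtain both a calibrated relevance score and an uncertainty signal) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function `sample_nle_scores` is a hypothetical stand-in for an LLM that generates an explanation for a query-document pair and a ranker that scores it, stubbed here with a seeded random draw.

```python
import random
import statistics

def sample_nle_scores(query, document, n_samples=8, seed=0):
    """Hypothetical stand-in: each sample plays the role of one LLM-generated
    natural language explanation (NLE) scored by a neural ranker.  The real
    system would call an LLM with sampling temperature > 0; here we stub the
    per-sample scores with a seeded Gaussian draw clipped to [0, 1]."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, rng.gauss(0.7, 0.1))) for _ in range(n_samples)]

def calibrated_score(query, document, n_samples=8, seed=0):
    """Monte Carlo estimate over sampled NLEs: the mean serves as the
    scale-calibrated relevance score, and the standard deviation across
    samples serves as an uncertainty signal."""
    scores = sample_nle_scores(query, document, n_samples=n_samples, seed=seed)
    return statistics.mean(scores), statistics.stdev(scores)

mean, uncertainty = calibrated_score("what is score calibration?", "example document text")
print(round(mean, 3), round(uncertainty, 3))
```

In a real pipeline the mean would replace the ranker's raw logit as the score used for both ordering and calibration, while the per-pair uncertainty can feed query performance prediction.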

Puxuan Yu, Daniel Cohen, Hemank Lamba, Joel Tetreault, Alex Jaimes • 2024

Related benchmarks

Task          Dataset   Metric    Result   Rank
Calibration   TREC      CB-ECE    0.0086   8
Ranking       TREC      nDCG      0.822    8
Ranking       NTCIR     nDCG      74.2     8
Calibration   NTCIR     CB-ECE    1.405    8
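The ranking rows above report nDCG, the standard graded-relevance ranking metric. As a reference point for how that number is computed (a generic textbook definition, not the exact evaluation script behind the table):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: graded relevance discounted by log2 of
    the (1-indexed) rank position."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalized DCG: DCG of the given ranking divided by the DCG of the
    ideal (descending) ordering of the same relevance grades."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Relevance grades of documents in ranked order (illustrative values only).
print(round(ndcg([3, 2, 3, 0, 1]), 3))
```

CB-ECE, in contrast, measures calibration error (lower is better), which is why the best calibration results in the table are small values near zero.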
