
LLM as Explainable Re-Ranker for Recommendation System

About

The application of large language models (LLMs) in recommendation systems has recently gained traction. Traditional recommendation systems often lack explainability and suffer from issues such as popularity bias. Previous research has also indicated that LLMs, when used as standalone predictors, fail to achieve accuracy comparable to traditional models. To address these challenges, we propose using an LLM as an explainable re-ranker: a hybrid approach that combines traditional recommendation models with LLMs to enhance both accuracy and interpretability. We constructed a dataset to train the re-ranker LLM and evaluated the alignment between the generated dataset and human expectations. Leveraging a two-stage training process, our model significantly improved NDCG, a key ranking metric. Moreover, the re-ranker outperformed a zero-shot baseline in both ranking accuracy and interpretability. These results highlight the potential of integrating traditional recommendation models with LLMs to address limitations in existing systems and pave the way for more explainable and fair recommendation frameworks.
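For reference, the two evaluation metrics used on this page can be computed as follows. This is a minimal illustrative sketch (the function names and list-based inputs are our own, not from the paper): NDCG discounts the relevance of each item by the log of its rank position and normalizes by the ideal ordering, while Hit Ratio@k simply checks whether the held-out target item appears in the top k of the re-ranked list.

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: relevance at rank i is discounted by log2(i + 2),
    # so items placed higher in the ranking contribute more.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # yielding a score in [0, 1].
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

def hit_ratio_at_k(ranked_items, target, k):
    # Hit Ratio@k: 1.0 if the target item appears among the top-k results, else 0.0.
    return 1.0 if target in ranked_items[:k] else 0.0
```

For example, a ranking that places the single relevant item first scores NDCG@3 = 1.0, while placing it third scores 0.5 (1 / log2(4)).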

Yaqi Wang, Haojia Sun, Shuting Zhang • 2025

Related benchmarks

Task        Dataset                        Result                 Rank
Re-ranking  Randomly Retrieved Candidates  Hit Ratio@3 = 0.9167   3
Re-ranking  Item-based CF (test)           Hit Ratio@3 = 48.4     3
Re-ranking  Knowledge Graph candidates     Hit Ratio@3 = 48.4     3
