
Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Reinforcement Learning

About

We propose Rec-R1, a general reinforcement learning framework that bridges large language models (LLMs) with recommendation systems through closed-loop optimization. Unlike prompting and supervised fine-tuning (SFT), Rec-R1 directly optimizes LLM generation using feedback from a fixed black-box recommendation model, without relying on synthetic SFT data from proprietary models such as GPT-4o. This avoids the substantial cost and effort required for data distillation. To verify the effectiveness of Rec-R1, we evaluate it on two representative tasks: product search and sequential recommendation. Experimental results demonstrate that Rec-R1 not only consistently outperforms prompting- and SFT-based methods, but also achieves significant gains over strong discriminative baselines, even when used with simple retrievers such as BM25. Moreover, Rec-R1 preserves the general-purpose capabilities of the LLM, unlike SFT, which often impairs instruction-following and reasoning. These findings suggest Rec-R1 as a promising foundation for continual task-specific adaptation without catastrophic forgetting.
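The core idea of the closed loop can be illustrated with a toy sketch (not the paper's actual implementation): a fixed black-box recommender is only queried, never updated, and its ranking quality (e.g., NDCG@k) serves as the reward signal for the LLM's generated queries. The recommender, items, candidates, and helper names below are all hypothetical stand-ins; in Rec-R1 the sampled generations come from an LLM and the rewards would drive RL updates of its policy.

```python
import math

# Toy "black-box" recommender: ranks items by token overlap with the query.
# A stand-in for a retriever like BM25; it is queried but never updated.
ITEMS = {
    "i1": "red running shoes lightweight",
    "i2": "blue denim jacket",
    "i3": "trail running shoes waterproof",
}

def recommend(query, k=3):
    q = set(query.split())
    return sorted(ITEMS, key=lambda i: -len(q & set(ITEMS[i].split())))[:k]

def ndcg_at_k(ranked, relevant, k=3):
    # Standard NDCG with binary relevance.
    dcg = sum(1 / math.log2(r + 2) for r, i in enumerate(ranked[:k]) if i in relevant)
    idcg = sum(1 / math.log2(r + 2) for r in range(min(len(relevant), k)))
    return dcg / idcg if idcg else 0.0

def reward(query, relevant):
    # Closed-loop signal: recommendation quality of the generated query.
    return ndcg_at_k(recommend(query), relevant)

# Stand-ins for LLM generations: candidate query rewrites for one user request.
candidates = ["shoes", "running shoes", "jacket"]
relevant = {"i1", "i3"}  # hypothetical ground-truth interactions for this user

# Score each sampled generation by its reward; in Rec-R1 these rewards
# would feed a policy-gradient update of the LLM rather than a hard argmax.
best = max(candidates, key=lambda c: reward(c, relevant))
print(best, round(reward(best, relevant), 3))
```

Because the reward comes only from the recommender's output, no synthetic supervision (e.g., GPT-4o-distilled data) is needed, which is the point of the closed-loop design.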

Jiacheng Lin, Tian Wang, Kun Qian • 2025

Related benchmarks

Task                | Dataset                                  | Result          | Rank
Maximizing Interest | KuaiRec dense                            | N@5 57.2        | 9
Ranking             | KuaiRec Explore New Topics (test)        | N@5 73          | 8
Ranking             | MovieLens 1M                             | NDCG@5 0.554    | 8
Ranking             | MovieLens 1M Trend Promotion (test)      | Hit Rate@5 60.7 | 8
Ranking             | KuaiRec                                  | NDCG@5 39.1     | 8
Ranking             | KuaiRec Trend Promotion (test)           | N@5 49.8        | 8
Ranking             | MovieLens-1M Explore New Topics (test)   | N@5 72.6        | 8
Product Search      | Amazon Product Search ESCI               | NDCG@5 50.4     | 7
