Generative Pseudo-Labeling for Pre-Ranking with LLMs

About

Pre-ranking is a critical stage in industrial recommendation systems, tasked with efficiently scoring thousands of recalled items for downstream ranking. A key challenge is the train-serving discrepancy: pre-ranking models are trained only on exposed interactions, yet must score all recalled candidates -- including unexposed items -- during online serving. This mismatch not only induces severe sample selection bias but also degrades generalization, especially for long-tail content. Existing debiasing approaches typically rely on heuristics (e.g., negative sampling) or on distillation from biased rankers, which either mislabel plausible unexposed items as negatives or propagate exposure bias into the pseudo-labels. In this work, we propose Generative Pseudo-Labeling (GPL), a framework that leverages large language models (LLMs) to generate unbiased, content-aware pseudo-labels for unexposed items, explicitly aligning the training distribution with the online serving space. By generating user-specific interest anchors offline and matching them against candidates in a frozen semantic space, GPL provides high-quality supervision without adding online latency. Deployed in a large-scale production system, GPL improves click-through rate by 3.07% while significantly enhancing recommendation diversity and long-tail item discovery.
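The paper itself is not accompanied by code here, but the anchor-matching step described in the abstract can be sketched. Below is a minimal illustration, not the authors' implementation: `generate_interest_anchors` stands in for the offline LLM call that summarizes a user's history into textual interest anchors, and `embed` stands in for the frozen semantic encoder; both are hypothetical placeholders stubbed with toy logic. Pseudo-labels are assigned to unexposed candidates by their best cosine similarity to any anchor.

```python
import numpy as np

# --- Hypothetical placeholders (not from the paper) ---

def generate_interest_anchors(user_history, k=5):
    """Stands in for the offline LLM call that distills a user's
    interaction history into k short textual interest anchors.
    Stubbed with a trivial heuristic for illustration."""
    return user_history[:k]

def embed(texts):
    """Stands in for the frozen semantic encoder mapping texts to
    unit-norm vectors. Stubbed with deterministic random vectors so
    each text always maps to the same embedding."""
    vecs = []
    for t in texts:
        rng = np.random.default_rng(sum(map(ord, t)))
        v = rng.normal(size=64)
        vecs.append(v / np.linalg.norm(v))
    return np.array(vecs)

# --- Pseudo-labeling of unexposed candidates ---

def pseudo_label(user_history, unexposed_items, threshold=0.3):
    """Score each unexposed item by its maximum cosine similarity to
    any of the user's interest anchors in the shared semantic space;
    items above `threshold` receive a positive pseudo-label."""
    anchors = generate_interest_anchors(user_history)
    A = embed(anchors)           # (k, d) anchor embeddings
    C = embed(unexposed_items)   # (n, d) candidate embeddings
    scores = (C @ A.T).max(axis=1)  # best-matching anchor per candidate
    return [(item, float(s), bool(s >= threshold))
            for item, s in zip(unexposed_items, scores)]

labels = pseudo_label(
    user_history=["running shoes", "trail backpack", "fitness tracker"],
    unexposed_items=["hiking boots", "kitchen blender", "sports watch"],
)
for item, score, positive in labels:
    print(f"{item}: score={score:.2f}, positive={positive}")
```

In a real system the stubs would be an actual LLM prompt and the production item encoder; since anchor generation and matching both run offline, the online serving path is untouched, consistent with the abstract's no-added-latency claim.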

Junyu Bi, Xinting Niu, Daixuan Cheng, Kun Yuan, Tao Wang, Binbin Cao, Jian Wu, Yuning Jiang • 2026

Related benchmarks

Task                 Dataset                                                      Metric   Result   Rank
Recommendation       Industrial Dataset Taobao (test)                             HR@3     0.5254   9
Recommendation       Taobao-MM (test)                                             HR@3     49.12    9
Recommender System   Taobao 'Guess What You Like' A/B Test (Online Deployment)    IPV      3.53     1
