Drift-Aware Continual Tokenization for Generative Recommendation

About

Generative recommendation commonly adopts a two-stage pipeline in which a learnable tokenizer maps items to discrete token sequences (i.e., identifiers) and an autoregressive generative recommender model (GRM) performs prediction over these identifiers. Recent tokenizers further incorporate collaborative signals so that items with similar user-behavior patterns receive similar codes, substantially improving recommendation quality. However, real-world environments evolve continuously: new items cause identifier collisions and shifts, while new interactions induce collaborative drift in existing items (e.g., changing co-occurrence patterns and popularity). Fully retraining both the tokenizer and the GRM is often prohibitively expensive, yet naively fine-tuning the tokenizer can alter the token sequences of the majority of existing items, undermining the GRM's learned token-embedding alignment. To balance plasticity and stability for collaborative tokenizers, we propose DACT, a Drift-Aware Continual Tokenization framework with two stages: (i) tokenizer fine-tuning, augmented with a jointly trained Collaborative Drift Identification Module (CDIM) that outputs item-level drift confidence and enables differentiated optimization for drifting and stationary items; and (ii) hierarchical code reassignment using a relaxed-to-strict strategy to update token sequences while limiting unnecessary changes. Experiments on three real-world datasets with two representative GRMs show that DACT consistently outperforms baselines, demonstrating effective adaptation to collaborative evolution with reduced disruption to prior knowledge. Our implementation is publicly available at https://github.com/HomesAmaranta/DACT for reproducibility.
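To make the relaxed-to-strict idea concrete, here is a minimal, hypothetical sketch of drift-gated hierarchical code reassignment. The function name, the level-dependent thresholds, and the gating rule are illustrative assumptions, not the paper's actual algorithm: the coarse levels of an item's token sequence use a relaxed (low) threshold on drift confidence, while finer levels require strictly higher confidence, so stationary items keep their identifiers untouched.

```python
# Hypothetical sketch of relaxed-to-strict hierarchical code reassignment.
# All names and threshold values are illustrative assumptions, not DACT's
# published procedure.

def reassign_codes(old_codes, new_codes, drift_conf,
                   base_threshold=0.5, step=0.15):
    """Return an updated token sequence for one item.

    old_codes:  the item's current multi-level codes (coarse -> fine)
    new_codes:  codes proposed by the fine-tuned tokenizer
    drift_conf: item-level drift confidence in [0, 1] (e.g., from a CDIM)

    A level's code is replaced only if drift_conf clears that level's
    threshold; thresholds grow with depth ("relaxed-to-strict"), so
    deeper, finer codes are changed only for strongly drifting items.
    """
    updated = list(old_codes)
    for level, new_c in enumerate(new_codes):
        threshold = base_threshold + step * level  # stricter at finer levels
        if drift_conf >= threshold:
            updated[level] = new_c
        else:
            # Thresholds only increase with depth, so all deeper levels
            # would fail too; keep the remaining suffix unchanged.
            break
    return updated
```

Under this sketch, a stationary item (low confidence) keeps its full identifier, a strongly drifting item is fully re-coded, and intermediate items change only their coarse prefix, which limits disruption to the GRM's learned token embeddings.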

Yuebo Feng, Jiahao Liu, Mingzhe Han, Dongsheng Li, Hansu Gu, Peng Zhang, Tun Lu, Ning Gu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Generative Recommendation | Beauty (Period 1) | Hit Rate@5: 0.0262 | 8 |
| Generative Recommendation | Beauty (Period 2) | Hit Rate@5: 2.39 | 8 |
| Generative Recommendation | Beauty (Period 3) | Hit Rate@5: 2.7 | 8 |
| Generative Recommendation | Tools (Period 1) | Hit Rate@5: 2.26 | 8 |
| Generative Recommendation | Tools (Period 2) | Hit Rate@5: 1.81 | 8 |
| Generative Recommendation | Tools (Period 4) | Hit Rate@5: 2.46 | 8 |
| Generative Recommendation | Beauty (Period 4) | Hit Rate@5: 3 | 8 |
| Generative Recommendation | Tools (Period 3) | Hit Rate@5: 2.09 | 8 |
| Recommendation | Beauty TIGER Backbone (Period 1) | Hit Rate@5: 2.99 | 7 |
| Recommendation | Beauty TIGER Backbone (Period 2) | Hit Rate@5: 2.67 | 7 |

Showing 10 of 20 rows.
