Learnable Item Tokenization for Generative Recommendation

About

Utilizing powerful Large Language Models (LLMs) for generative recommendation has attracted much attention. A crucial challenge, however, is transforming recommendation data into the language space of LLMs through effective item tokenization. Current approaches, such as ID-based, textual, and codebook-based identifiers, exhibit shortcomings in encoding semantic information, incorporating collaborative signals, or handling code assignment bias. To address these limitations, we propose LETTER (a LEarnable Tokenizer for generaTivE Recommendation), which integrates hierarchical semantics, collaborative signals, and code assignment diversity to satisfy the essential requirements of identifiers. LETTER incorporates a residual quantized VAE (RQ-VAE) for semantic regularization, a contrastive alignment loss for collaborative regularization, and a diversity loss to mitigate code assignment bias. We instantiate LETTER on two models and propose a ranking-guided generation loss that theoretically enhances their ranking ability. Experiments on three datasets validate the superiority of LETTER, advancing the state-of-the-art in LLM-based generative recommendation.

Wenjie Wang, Honghui Bao, Xinyu Lin, Jizhi Zhang, Yongqi Li, Fuli Feng, See-Kiong Ng, Tat-Seng Chua• 2024
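The abstract describes tokenizing each item into a short hierarchical code via residual quantization: at every codebook level, the nearest code vector to the current residual is selected and subtracted before moving to the next level. The following is a minimal, illustrative sketch of that assignment step only (codebook sizes, dimensions, and all names are hypothetical; the paper's actual tokenizer also trains the codebooks end-to-end with the semantic, collaborative, and diversity losses, which are omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 codebook levels, 16 codes per level, 8-dim embeddings.
NUM_LEVELS, CODEBOOK_SIZE, DIM = 3, 16, 8
codebooks = rng.normal(size=(NUM_LEVELS, CODEBOOK_SIZE, DIM))

def residual_quantize(item_emb):
    """Assign an item a hierarchical code: at each level, pick the codebook
    vector nearest to the current residual, then subtract it."""
    residual = item_emb.copy()
    code = []
    for level in range(NUM_LEVELS):
        dists = np.linalg.norm(codebooks[level] - residual, axis=1)
        idx = int(np.argmin(dists))
        code.append(idx)
        residual = residual - codebooks[level][idx]
    return code, residual

item_emb = rng.normal(size=DIM)
code, residual = residual_quantize(item_emb)
print(code)  # a 3-token identifier, one code index per level
```

The resulting code sequence serves as the item's identifier in the LLM's vocabulary; earlier levels capture coarse semantics and later levels refine the residual, which is what gives the identifier its hierarchical structure.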

Related benchmarks

Task                        Dataset                    Metric     Result   Rank
Sequential Recommendation   Amazon Beauty (test)       NDCG@10    3.4      117
Sequential Recommendation   Sports                     Recall@10  3.91     62
Recommendation              Beauty                     NDCG@5     2.86     48
Sequential Recommendation   Sports                     Recall@5   0.0141   43
Sequential Recommendation   Beauty                     Recall@10  6.16     42
Sequential Recommendation   MovieLens 1M (test)        Hit@10     26.44    42
Sequential Recommendation   Amazon Instruments (test)  NDCG@10    5.83     35
Recommendation              Yelp                       NDCG@10    1.47     35
Sequential Recommendation   Amazon Office (test)       NDCG@10    11.39    31
Sequential Recommendation   MicroLens (test)           Recall@5   0.53     31

Showing 10 of 47 rows
