TokenRec: Learning to Tokenize ID for LLM-based Generative Recommendation

About

There is growing interest in utilizing large language models (LLMs) to advance next-generation Recommender Systems (RecSys), driven by their outstanding language understanding and in-context learning capabilities. In this scenario, tokenizing (i.e., indexing) users and items becomes essential for ensuring a seamless alignment of LLMs with recommendations. While several studies have made progress in representing users and items through textual content or latent representations, challenges remain in efficiently capturing high-order collaborative knowledge in discrete tokens that are compatible with LLMs. Additionally, most existing tokenization approaches struggle to generalize effectively to new/unseen users or items that were not in the training corpus. To address these challenges, we propose a novel framework called TokenRec, which introduces not only an effective ID tokenization strategy but also an efficient retrieval paradigm for LLM-based recommendations. Specifically, our tokenization strategy, the Masked Vector-Quantized (MQ) Tokenizer, quantizes the masked user/item representations learned from collaborative filtering into discrete tokens, thus achieving a smooth incorporation of high-order collaborative knowledge and a generalizable tokenization of users and items for LLM-based RecSys. Meanwhile, our generative retrieval paradigm efficiently recommends top-K items for users, eliminating the time-consuming auto-regressive decoding and beam search used by LLMs and thus significantly reducing inference time. Comprehensive experiments validate the effectiveness of the proposed methods, demonstrating that TokenRec outperforms competitive benchmarks, including both traditional recommender systems and emerging LLM-based recommender systems.

Haohao Qu, Wenqi Fan, Zihuai Zhao, Qing Li • 2024
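The two components described in the abstract can be sketched in plain NumPy. This is an illustrative toy, not the paper's implementation: the embedding dimensions, codebook sizes, the masking scheme, and the residual-style multi-level quantization are all assumptions made here for clarity, and `mq_tokenize` / `retrieve_top_k` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume item embeddings learned by a collaborative-filtering model
# (100 items, dimension 16 -- both numbers are illustrative).
item_embeddings = rng.normal(size=(100, 16))

# Two quantization levels with 8 codewords each (an assumption;
# the actual MQ-Tokenizer configuration may differ).
codebooks = [rng.normal(size=(8, 16)) for _ in range(2)]

def mq_tokenize(embedding, codebooks, mask_ratio=0.25, rng=rng):
    """Quantize a randomly masked embedding into discrete ID tokens.

    At each level, pick the nearest codeword and subtract it (a
    residual-quantization-style scheme, assumed here), emitting the
    codeword index as one discrete token an LLM can consume.
    """
    masked = embedding.copy()
    mask = rng.random(masked.shape) < mask_ratio
    masked[mask] = 0.0  # masking part of the vector aids generalization
    tokens = []
    residual = masked
    for codebook in codebooks:
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))
        tokens.append(idx)
        residual = residual - codebook[idx]
    return tokens

def retrieve_top_k(query_vec, item_embeddings, k=5):
    """Generative-retrieval step: score every item by inner product
    with a single model-produced query vector and take the top-K,
    avoiding token-by-token auto-regressive decoding and beam search."""
    scores = item_embeddings @ query_vec
    return np.argsort(-scores)[:k]

tokens = mq_tokenize(item_embeddings[0], codebooks)
top_items = retrieve_top_k(item_embeddings[0], item_embeddings, k=5)
print(tokens, top_items)
```

The key design point the sketch mirrors is that recommendation becomes a single similarity lookup over the item pool rather than a generative decoding loop, which is where the claimed inference-time savings come from.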

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Sequential Recommendation | ML1M | HR@20 | 15.06 | 15 |
| Sequential Recommendation | Games | HR@10 | 0.1018 | 11 |
| Sequential Recommendation | LastFM | HR@10 | 4.89 | 11 |
| Sequential Recommendation | Beauty | HR@10 | 4.38 | 11 |
| Generative Recommendation | Clothing | Recall@10 | 1.7 | 7 |
| Generative Recommendation | Amazon Review Data Toys (test) | Recall@10 | 5.38 | 7 |
| Generative Recommendation | LastFM | Recall@10 | 5.3 | 7 |
| Generative Recommendation | ML1M | Recall@10 | 10.1 | 7 |
| Generative Recommendation | Amazon Beauty Review Data (test) | Recall@10 | 5.63 | 7 |
| Discriminative Recommendation | Industrial dataset offline | UAUC | 0.6553 | 5 |
