
Causality-Enhanced Behavior Sequence Modeling in LLMs for Personalized Recommendation

About

Recent advancements in recommender systems have focused on leveraging Large Language Models (LLMs) to improve user preference modeling, yielding promising outcomes. However, current LLM-based approaches struggle to fully leverage user behavior sequences, resulting in suboptimal preference modeling for personalized recommendations. In this study, we propose a novel Counterfactual Fine-Tuning (CFT) method to address this issue by explicitly emphasizing the role of behavior sequences when generating recommendations. Specifically, we employ counterfactual reasoning to identify the causal effects of behavior sequences on model output and introduce a task that directly fits the ground-truth labels based on these effects, achieving the goal of explicit emphasis. Additionally, we develop a token-level weighting mechanism to adjust the emphasis strength for different item tokens, reflecting the diminishing influence of behavior sequences from earlier to later tokens when predicting an item. Extensive experiments on real-world datasets demonstrate that CFT effectively improves behavior sequence modeling. Our code is available at https://github.com/itsmeyjt/CFT.
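The core idea in the abstract can be sketched in a few lines: estimate the causal effect of the behavior sequence as the difference between logits from the full prompt and logits from a counterfactual prompt without the sequence, fit the ground-truth item tokens on that effect, and down-weight later item tokens. The following is a minimal, hypothetical sketch (not the authors' implementation; function names, the geometric `decay` weighting, and the NumPy setting are assumptions for illustration):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def counterfactual_ft_loss(logits_full, logits_no_seq, labels, decay=0.9):
    """Hypothetical CFT-style loss sketch.

    logits_full:   (T, V) logits given the full prompt (with behavior sequence)
    logits_no_seq: (T, V) logits for a counterfactual prompt without the sequence
    labels:        (T,)   ground-truth item token ids
    decay:         assumed per-token decay modeling the diminishing influence
                   of the behavior sequence on later item tokens
    """
    # Causal effect of the behavior sequence on the model output.
    effect = logits_full - logits_no_seq
    # Fit the ground-truth labels directly on the effect (cross-entropy).
    probs = softmax(effect)
    per_token = -np.log(probs[np.arange(len(labels)), labels])
    # Token-level weights: earlier item tokens get stronger emphasis.
    weights = decay ** np.arange(len(labels))
    return float((weights * per_token).mean())
```

In a real fine-tuning setup this term would be computed on framework tensors (e.g. PyTorch) and combined with the standard language-modeling loss; the sketch only shows the shape of the computation.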

Yang Zhang, Juntao You, Yimeng Bai, Jizhi Zhang, Keqin Bao, Wenjie Wang, Tat-Seng Chua• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Sequential Recommendation | MovieLens 1M (test) | Hit@10 | 26.87 | 22 |
| Sequential Recommendation | Yelp (test) | Hit@10 | 3.9 | 19 |
| Sequential Recommendation | Amazon Toy | NDCG@5 | 1.38 | 15 |
| Sequential Recommendation | Amazon Office | NDCG@5 | 1.83 | 15 |
| Sequential Recommendation | Amazon Clothing | NDCG@5 | 0.0045 | 15 |
| Sequential Recommendation | Amazon-Book | NDCG@5 | 0.51 | 15 |
| Sequential Recommendation | Amazon Musical Instruments (test) | Hit@5 | 0.0321 | 8 |
| Sequential Recommendation | Amazon Industrial and Scientific (test) | Hit@5 | 0.0248 | 8 |
