
LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation

About

Sequential recommender systems (SRS) aim to predict users' subsequent choices based on their historical interactions and have found applications in diverse fields such as e-commerce and social media. However, in real-world systems, most users interact with only a handful of items, while the majority of items are seldom consumed. These two issues, known as the long-tail user and long-tail item challenges, often pose difficulties for existing SRS. They can adversely affect user experience and seller benefits, making them crucial to address. Although a few works have tackled these challenges, they still struggle with seesaw or noise issues due to the intrinsic scarcity of interactions. The advancements in large language models (LLMs) present a promising solution to these problems from a semantic perspective. As one of the pioneers in this field, we propose the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR). This framework utilizes semantic embeddings derived from LLMs to enhance SRS without adding extra inference load from LLMs. To address the long-tail item challenge, we design a dual-view modeling framework that combines semantics from LLMs with collaborative signals from conventional SRS. For the long-tail user challenge, we propose a retrieval-augmented self-distillation method that enhances user preference representations using more informative interactions from similar users. To verify the effectiveness and versatility of our proposed enhancement framework, we conduct extensive experiments on three real-world datasets using three popular SRS models. The results show that our method consistently surpasses existing baselines and especially benefits long-tail users and items. The implementation code is available at https://github.com/Applied-Machine-Learning-Lab/LLM-ESR.
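The two ideas in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes item semantic embeddings have been pre-computed and cached from an LLM (so no LLM call happens at inference), projects them through a small adapter, and concatenates them with the usual learnable collaborative embeddings (the "dual view"). A cosine-similarity retrieval over user representations hints at how similar users could be found for the self-distillation step. All names, dimensions, and values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d_sem, d_col = 100, 16, 8

# Frozen semantic item embeddings, assumed to be cached offline from an
# LLM encoder (random placeholders here), L2-normalized.
sem_emb = rng.normal(size=(n_items, d_sem))
sem_emb /= np.linalg.norm(sem_emb, axis=1, keepdims=True)

# Learnable collaborative item embeddings, as in a conventional SRS backbone.
col_emb = rng.normal(scale=0.1, size=(n_items, d_col))

# A simple linear adapter mapping the semantic view into the collaborative space.
W_adapter = rng.normal(scale=0.1, size=(d_sem, d_col))

def dual_view_item_embedding(item_ids):
    """Concatenate the adapted semantic view with the collaborative view."""
    sem_view = sem_emb[item_ids] @ W_adapter   # (batch, d_col)
    col_view = col_emb[item_ids]               # (batch, d_col)
    return np.concatenate([sem_view, col_view], axis=-1)  # (batch, 2*d_col)

def retrieve_similar_users(query_vec, user_vecs, k=3):
    """Top-k users by cosine similarity, e.g. as self-distillation teachers."""
    sims = user_vecs @ query_vec / (
        np.linalg.norm(user_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8)
    return np.argsort(-sims)[:k]

emb = dual_view_item_embedding(np.array([3, 7]))
print(emb.shape)  # (2, 16)
```

Because the semantic embeddings are frozen and precomputed, the only extra cost at serving time is the adapter projection, which matches the paper's claim of no added LLM inference load.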

Qidong Liu, Xian Wu, Yejing Wang, Zijian Zhang, Feng Tian, Yefeng Zheng, Xiangyu Zhao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Sequential Recommendation | Amazon Beauty (test) | NDCG@10 | 5.181 | 107 |
| Sequential Recommendation | Amazon Toy (test) | NDCG@10 | 0.0491 | 42 |
| Sequential Recommendation | Yelp (Overall) | Hit Rate@10 | 0.6573 | 36 |
| Sequential Recommendation | Beauty | HR@10 | 55.44 | 30 |
| Sequential Recommendation | Instrument | Recall@10 | 58.81 | 20 |
| Sequential Recommendation | Steam Standard (test) | NDCG@10 | 15.5 | 15 |
| Sequential Recommendation | Beauty Tail Item | Hit Rate@10 | 21.98 | 14 |
| Recommendation | Douban Movie | HR@5 | 2.96 | 13 |
| Recommendation | Amazon Baby | HR@5 | 1.82 | 13 |
| Recommendation | Amazon Beauty | HR@5 | 1.91 | 13 |

Showing 10 of 21 rows.
