
Leveraging Large Language Models for Sequential Recommendation

About

Sequential recommendation problems have received increasing attention in research during the past few years, leading to the inception of a large variety of algorithmic approaches. In this work, we explore how large language models (LLMs), which are nowadays introducing disruptive effects in many AI-based applications, can be used to build or improve sequential recommendation approaches. Specifically, we devise and evaluate three approaches to leverage the power of LLMs in different ways. Our results from experiments on two datasets show that initializing the state-of-the-art sequential recommendation model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20% compared to the vanilla BERT4Rec model. Furthermore, we find that a simple approach that leverages LLM embeddings for producing recommendations can provide competitive performance by highlighting semantically related items. We publicly share the code and data of our experiments to ensure reproducibility.
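The "simple approach" mentioned in the abstract — producing recommendations directly from LLM embeddings by surfacing semantically related items — can be sketched as a nearest-neighbor ranking over item embeddings. The sketch below is illustrative, not the authors' implementation: the `recommend` function, the mean-pooled session profile, and the random stand-in embeddings are all assumptions; in the paper the embeddings would come from an LLM.

```python
import numpy as np

def recommend(session_items, item_embeddings, k=10):
    """Rank catalog items by cosine similarity to the mean embedding
    of the items in the current session (seen items excluded)."""
    # L2-normalize so that dot products equal cosine similarities.
    emb = item_embeddings / np.linalg.norm(item_embeddings, axis=1, keepdims=True)
    # Represent the session as the average of its items' embeddings.
    profile = emb[session_items].mean(axis=0)
    profile /= np.linalg.norm(profile)
    scores = emb @ profile
    # Do not re-recommend items already in the session.
    scores[session_items] = -np.inf
    return np.argsort(-scores)[:k].tolist()

# Toy catalog of 100 items with 16-dim stand-in embeddings
# (in practice these would be LLM-derived item embeddings).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 16))
top5 = recommend([3, 17, 42], embeddings, k=5)
```

Because the ranking depends only on embedding similarity, this baseline needs no training at all, which is what makes its competitive performance notable.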

Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, Marios Fragkoulis• 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Sequential Recommendation | Yelp (Overall) | Hit Rate@10 | 0.0415 | 63 |
| Sequential Recommendation | Yelp (Tail) | Hit Rate@10 | 0.57 | 39 |
| Sequential Recommendation | Beauty Overall | H@10 | 3.83 | 27 |
| Sequential Recommendation | Grocery Tail | Hit Rate@10 | 0.91 | 27 |
| Sequential Recommendation | Grocery Overall | Hit Rate@10 | 4.51 | 27 |
| Sequential Recommendation | Beauty Tail | Hit Rate@10 | 0.0103 | 27 |
| Recommendation | CDs sparse | NDCG@1 | 12 | 20 |
| Recommendation | Douban Movie | HR@5 | 1.43 | 13 |
| Recommendation | Amazon Baby | HR@5 | 1.37 | 13 |
| Recommendation | Amazon Beauty | HR@5 | 1.26 | 13 |

(10 of 13 rows shown)
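The table reports Hit Rate@k (HR@k) and NDCG@k, the standard metrics for this benchmark. As a reminder of how they are computed, here is a minimal sketch: HR@k is the fraction of test cases whose true next item appears in the top-k recommendations, and NDCG@k additionally discounts hits by their rank. The function names and list-based interface are illustrative assumptions, not the benchmark's evaluation code.

```python
import math

def hit_rate_at_k(ranked_lists, targets, k=10):
    """Fraction of test cases whose target item appears in the top-k list."""
    hits = sum(t in r[:k] for r, t in zip(ranked_lists, targets))
    return hits / len(targets)

def ndcg_at_k(ranked_lists, targets, k=10):
    """Average discounted gain: a hit at rank i (0-based) scores 1/log2(i+2),
    so a hit at the top scores 1.0 and lower hits score progressively less."""
    total = 0.0
    for r, t in zip(ranked_lists, targets):
        if t in r[:k]:
            total += 1.0 / math.log2(r.index(t) + 2)
    return total / len(targets)

# Two test cases: target 2 is at rank 1 of the first list (a hit at k=2),
# target 6 is at rank 2 of the second list (a miss at k=2).
hr = hit_rate_at_k([[1, 2, 3], [4, 5, 6]], [2, 6], k=2)   # 0.5
```

Note that the single-target setting above makes NDCG@k and MRR-style metrics coincide up to the log discount; leaderboards sometimes report these values scaled by 100, which may explain results above 1 in the table.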
