Enhancing High-order Interaction Awareness in LLM-based Recommender Model
About
Large language models (LLMs) have demonstrated prominent reasoning capabilities in recommendation tasks by transforming them into text-generation tasks. However, existing approaches either disregard or ineffectively model high-order user-item interactions. To this end, this paper presents an enhanced LLM-based recommender (ELMRec). We enhance whole-word embeddings so that LLMs can substantially better interpret graph-constructed interactions for recommendations, without requiring graph pre-training. This finding may inspire efforts to incorporate rich knowledge graphs into LLM-based recommenders via whole-word embeddings. We also found that LLMs often recommend items based on users' earlier interactions rather than recent ones, and we present a reranking solution. Our ELMRec outperforms state-of-the-art (SOTA) methods in both direct and sequential recommendations.
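The core idea of injecting graph-constructed interaction signals without graph pre-training can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's exact method: it builds a user-item interaction graph, applies LightGCN-style symmetric normalization, and accumulates multi-hop propagation matrices whose rows could serve as additive whole-word embedding signals for user/item tokens. The function name and the `order` parameter are hypothetical.

```python
import numpy as np

def highorder_whole_word_signals(interactions, n_users, n_items, order=2):
    """Hedged sketch: high-order interaction signals from graph propagation.

    interactions: list of (user_idx, item_idx) pairs.
    Returns an (n_users+n_items, n_users+n_items) matrix whose row k is a
    multi-hop interaction signal for node k (users first, then items).
    """
    n = n_users + n_items
    A = np.zeros((n, n))
    for u, i in interactions:
        A[u, n_users + i] = 1.0       # user -> item edge
        A[n_users + i, u] = 1.0       # item -> user edge (undirected graph)
    # Symmetric normalization D^{-1/2} A D^{-1/2}, as in LightGCN
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Accumulate propagation up to `order` hops to capture high-order links
    out = np.zeros_like(A_norm)
    P = np.eye(n)
    for _ in range(order):
        P = P @ A_norm
        out += P
    return out
```

With `order=2`, two users who share an item receive a positive mutual signal even though they are never directly connected, which is the high-order structure the abstract argues plain LLM inputs miss.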
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Sequential Recommendation | Beauty | HR@10 | 7.5 | 58 |
| Recommendation | Beauty | NDCG@5 | 48.52 | 48 |
| Sequential Recommendation | Toys | Recall@5 | 0.0713 | 42 |
| Recommendation | Sports | nDCG@10 | 0.4852 | 28 |
| Sequential Recommendation | Sports | Hit Rate@5 | 5.38 | 22 |
| Direct Recommendation | Toys | Hit Rate@5 | 51.78 | 9 |