E-CARE: An Efficient LLM-based Commonsense-Augmented Framework for E-Commerce
About
Finding relevant products given a user query is pivotal to an e-commerce platform, as it can drive shopping behavior and generate revenue. The challenge lies in accurately predicting the correlation between queries and products. Recently, mining commonsense knowledge between queries and products using Large Language Models (LLMs) has shown promising results in boosting recommendation performance. However, such methods incur high costs: intensive real-time LLM decoding during inference, as well as human annotation and potential Supervised Fine-Tuning (SFT) during training. To boost efficiency while leveraging LLMs' commonsense reasoning for various e-commerce tasks, we propose the Efficient Commonsense-Augmented Recommendation Enhancer (E-CARE), which requires neither SFT nor human annotation. Recommendation models augmented with E-CARE can access commonsense reasoning through a reasoning factor graph that encodes most of the reasoning schema from powerful LLMs, without requiring real-time LLM decoding. Experiments on two downstream tasks show improvements of up to 12.1% in Precision@5.
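To illustrate the core efficiency idea — mine commonsense correlations from an LLM once, offline, then serve them via cheap lookups at inference time — here is a minimal sketch. All names (`ReasoningFactorGraph`, the scoring and blending logic, the example factors) are hypothetical simplifications, not the actual E-CARE implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningFactorGraph:
    """Toy stand-in for a commonsense factor graph.

    Keys are (query term, product attribute) pairs; values are
    correlation scores mined offline from an LLM, so online
    inference needs no real-time LLM decoding."""
    factors: dict = field(default_factory=dict)

    def add_factor(self, query_term: str, product_attr: str, score: float):
        self.factors[(query_term, product_attr)] = score

    def score(self, query_terms, product_attrs) -> float:
        # Aggregate stored factors connecting this query to this product.
        hits = [self.factors.get((q, a), 0.0)
                for q in query_terms for a in product_attrs]
        return sum(hits) / len(hits) if hits else 0.0

# Offline: populate the graph once from LLM outputs (mocked here).
graph = ReasoningFactorGraph()
graph.add_factor("running", "sneaker", 0.9)
graph.add_factor("running", "sandal", 0.1)

# Online: cheap lookups augment any existing base recommender.
base_score = 0.5                                       # from the base model
commonsense = graph.score(["running"], ["sneaker"])    # O(1) lookups
final_score = 0.7 * base_score + 0.3 * commonsense     # illustrative blend
print(round(final_score, 3))  # → 0.62
```

The point of the sketch is the cost split: the expensive LLM reasoning happens once at graph-construction time, while serving reduces to dictionary lookups and a weighted blend.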
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Search Relevance | ESCI | Macro F1 | 61.03 | 14 |
| Search Relevance | WANDS | Macro F1 | 91.39 | 12 |
| App Recall | Private Dataset | Recall@5 | 62.4 | 2 |