RecGPT: Generative Pre-training for Text-based Recommendation
About
We present the first domain-adapted and fully-trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Public Hugging Face links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT
Hoang Ngo, Dat Quoc Nguyen • 2024
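
Since the models are released on Hugging Face, they can be loaded with the standard `transformers` API. Below is a minimal sketch for the instruction-following variant; the model id `vinai/RecGPT-7B-Instruct` and the prompt wording are assumptions here and should be confirmed against the examples in the GitHub repository linked above.

```python
# Minimal sketch: loading RecGPT-7B-Instruct via Hugging Face transformers.
# The model id below is assumed from the release description; check the
# VinAIResearch/RecGPT repository for the official links and prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vinai/RecGPT-7B-Instruct"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Hypothetical text-based recommendation prompt; the exact instruction
# template should follow the repository's fine-tuning examples.
prompt = (
    "A user has positively rated the following movies: Toy Story, The Matrix, "
    "Inception. Which movie should be recommended next?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```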
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Sequential Recommendation | ML-1M | NDCG@10 | 0.0986 | 49 |
| Negative Constraint Recommendation | ML-1M | Recall@10 | 0.1277 | 22 |
| Positive Constraint Recommendation | ML-1M | Recall@10 | 14.65 | 8 |