Embarrassingly Shallow Autoencoders for Sparse Data
About
Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available data-sets used in our experiments.
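The closed-form solution mentioned above can be sketched in a few lines: given a binary user-item interaction matrix X, the item-item weight matrix B is obtained from the inverse of the regularized Gram matrix, with the diagonal of B constrained to zero so that an item cannot recommend itself. The snippet below is a minimal sketch of this idea in NumPy; the toy matrix X, the function name `ease_fit`, and the regularization value are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ease_fit(X, lam=500.0):
    """Closed-form fit of an item-item weight matrix B with zero diagonal.

    X   : (num_users, num_items) binary implicit-feedback matrix.
    lam : L2 regularization strength (a hyperparameter to tune).
    """
    G = X.T @ X + lam * np.eye(X.shape[1])  # regularized Gram matrix
    P = np.linalg.inv(G)
    B = -P / np.diag(P)       # B[i, j] = -P[i, j] / P[j, j], column-wise
    np.fill_diagonal(B, 0.0)  # enforce zero self-similarity
    return B

# Toy implicit-feedback data (hypothetical, for illustration only).
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 0]], dtype=float)

B = ease_fit(X, lam=1.0)
scores = X @ B  # per-user item scores; rank unseen items by these
```

Because the solution is a single matrix inversion rather than an iterative optimization, training cost is dominated by inverting the (num_items x num_items) Gram matrix.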
Harald Steck • 2019
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Recommendation | Gowalla (test) | Recall@20: 0.1765 | 126 |
| Recommendation | Amazon-Book (test) | Recall@20: 0.071 | 101 |
| Recommendation | Yelp 2018 (test) | Recall@20: 6.57 | 90 |
| Recommendation | Netflix (test) | NDCG@100: 39.4 | 30 |
| Next-item recommendation | Men Amazon (test) | HR@10: 19.3 | 29 |
| Next-item recommendation | Fashion Amazon (test) | HR@10: 0.213 | 29 |
| Next-item recommendation | Games Amazon (test) | HR@10: 0.623 | 27 |
| Recommendation | MovieLens 20M (test) | -- | 24 |
| Recommendation | Goodbooks-10k (test) | nDCG@100: 0.482 | 22 |
| Top-N Recommendation | Netflix Prize Dataset | NDCG@100: 0.393 | 22 |
(Showing 10 of 34 benchmark rows.)