
It's Enough: Relaxing Diagonal Constraints in Linear Autoencoders for Recommendation

About

Linear autoencoder models learn an item-to-item weight matrix via convex optimization with L2 regularization and zero-diagonal constraints. Despite their simplicity, they have shown remarkable performance compared to sophisticated non-linear models. This paper aims to theoretically understand the properties of two terms in linear autoencoders. Through the lens of singular value decomposition (SVD) and principal component analysis (PCA), it is revealed that L2 regularization enhances the impact of high-ranked PCs. Meanwhile, zero-diagonal constraints reduce the impact of low-ranked PCs, leading to performance degradation for unpopular items. Inspired by this analysis, we propose simple-yet-effective linear autoencoder models using diagonal inequality constraints, called Relaxed Linear AutoEncoder (RLAE) and Relaxed Denoising Linear AutoEncoder (RDLAE). We prove that they generalize linear autoencoders by adjusting the degree of diagonal constraints. Experimental results demonstrate that our models are comparable or superior to state-of-the-art linear and non-linear models on six benchmark datasets; they significantly improve the accuracy of long-tail items. These results also support our theoretical insights on regularization and diagonal constraints in linear autoencoders.
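To make the objective concrete, below is a minimal NumPy sketch of the two model families the abstract contrasts: the standard linear autoencoder with an exact zero-diagonal equality constraint (the well-known EASE closed form), and a relaxed variant where the diagonal is only bounded above by a threshold, solved via KKT multipliers. The relaxed solver is an illustrative derivation under the stated inequality constraint, not necessarily the paper's exact RLAE formulation; the names `rlae` and the threshold parameter `xi` are assumptions for this sketch.

```python
import numpy as np

def ease(X, lam=500.0):
    """Linear autoencoder with L2 regularization and an exact
    zero-diagonal equality constraint (the EASE closed form).
    X is the (users x items) binary interaction matrix."""
    P = np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1]))
    B = -P / np.diag(P)        # rescale each column by its Lagrange multiplier
    np.fill_diagonal(B, 0.0)   # enforce diag(B) = 0 exactly
    return B

def rlae(X, lam=500.0, xi=0.1):
    """Illustrative relaxed variant: replace diag(B) = 0 with the
    inequality diag(B) <= xi. By complementary slackness the per-item
    multiplier mu_j is nonzero only where the unconstrained ridge
    solution (diag = 1 - lam * P_jj) would exceed xi."""
    n = X.shape[1]
    P = np.linalg.inv(X.T @ X + lam * np.eye(n))
    mu = np.maximum(0.0, (1.0 - xi) / np.diag(P) - lam)
    return np.eye(n) - P * (lam + mu)   # B = I - P @ diagMat(lam + mu)
```

With `xi = 0`, the relaxed model keeps diagonal entries that are already negative instead of forcing them to zero, which matches the abstract's claim that the inequality form generalizes the equality-constrained autoencoder by adjusting the degree of the diagonal constraint.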

Jaewan Moon, Hye-young Kim, Jongwuk Lee • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Collaborative Filtering | MSD (strong generalization) | AOA Recall@20: 0.3341 | 14 |
| Collaborative Filtering | ML-20M (strong generalization) | AOA Recall@20: 0.3932 | 14 |
| Collaborative Filtering | Netflix (strong generalization) | AOA Recall@20: 36.61 | 14 |
