
Secure Federated Matrix Factorization

About

To protect user privacy and comply with legal regulations, federated (machine) learning has attracted considerable interest in recent years. The key principle of federated learning is to train a machine learning model without the server needing access to any user's raw private data. In this paper, we propose FedMF, a secure matrix factorization framework under the federated learning setting. We first design a user-level distributed matrix factorization framework in which the model is learned while each user uploads only gradient information (instead of raw preference data) to the server. Although gradient information appears secure, we prove that it can still leak users' raw data. We therefore enhance the distributed matrix factorization framework with homomorphic encryption. We implement a prototype of FedMF and evaluate it on a real movie-rating dataset. The results verify the feasibility of FedMF. We also discuss the challenges of applying FedMF in practice as directions for future research.
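The user-level distributed matrix factorization described above (without the homomorphic-encryption layer) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the update rule, learning rate, and toy ratings are assumptions, and in FedMF the uploaded gradients would additionally be encrypted before reaching the server.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, k = 4, 6, 3   # toy problem sizes (assumed)
lr, reg = 0.05, 0.01            # learning rate / regularization (assumed)

# Each user's raw ratings stay on their own device (dict: item -> rating).
user_ratings = [
    {0: 5.0, 2: 3.0},
    {1: 4.0, 3: 2.0},
    {0: 4.0, 4: 5.0},
    {2: 1.0, 5: 4.0},
]

# The server holds only the item-factor matrix Q; each row of P
# (the user profile) lives on the corresponding user's device.
Q = rng.normal(scale=0.1, size=(n_items, k))
P = rng.normal(scale=0.1, size=(n_users, k))

def local_update(u, Q):
    """Runs on user u's device: update the local profile in place and
    return gradients on the item factors -- never the raw ratings."""
    grads = {}
    for i, r in user_ratings[u].items():
        err = r - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])   # local profile step
        grads[i] = -(err * P[u]) + reg * Q[i]    # gradient w.r.t. Q[i]
    return grads

for _ in range(200):
    for u in range(n_users):
        for i, g in local_update(u, Q).items():
            Q[i] -= lr * g                       # server-side apply

rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2
                        for u in range(n_users)
                        for i, r in user_ratings[u].items()]))
print(f"training RMSE: {rmse:.3f}")
```

The paper's key observation is that even these gradients can leak the underlying ratings, which is why FedMF encrypts them homomorphically so the server can aggregate updates without ever seeing plaintext gradients.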

Di Chai, Leye Wang, Kai Chen, Qiang Yang• 2019

Related benchmarks

Task             Dataset                   Metric      Result   Rank
Recommendation   MovieLens-100K (test)     RMSE        0.948    55
Recommendation   MovieLens 1M (test)       --          --       34
Recommendation   Yahoo (test)              RMSE        22.2     10
Recommendation   Douban (test)             RMSE        0.817    10
Recommendation   ML-10M (test)             RMSE        0.841    10
Recommendation   Flixster (test)           --          --       10
Recommendation   Amazon Toys (test)        Recall@10   0.0164   8
Recommendation   MovieLens-100K (test)     Recall@10   16.39    8
Recommendation   Steam (test)              Recall@10   0.0485   8
Recommendation   Book Amazon (test)        Recall@10   1.4      8
Showing 10 of 15 rows
