
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning

About

In federated learning, a strong global model is collaboratively learned by aggregating clients' locally trained models. Although this precludes the need to access clients' data directly, the global model's convergence often suffers from data heterogeneity. This study starts from an analogy to continual learning and suggests that forgetting could be the bottleneck of federated learning. We observe that the global model forgets knowledge from previous rounds, and that local training induces forgetting of knowledge outside the local distribution. Based on these findings, we hypothesize that tackling forgetting will relieve the data heterogeneity problem. To this end, we propose a novel and effective algorithm, Federated Not-True Distillation (FedNTD), which preserves the global perspective on locally available data only for the not-true classes. In experiments, FedNTD shows state-of-the-art performance on various setups without compromising data privacy or incurring additional communication costs.
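The key mechanism named in the abstract — distilling the global model's predictions only over the not-true classes during local training — can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' reference implementation; the function name, the temperature parameter `tau`, and the masking strategy are assumptions made for clarity.

```python
import torch
import torch.nn.functional as F

def not_true_distillation_loss(local_logits, global_logits, targets, tau=1.0):
    """Sketch of not-true distillation: drop each sample's true-class logit,
    then distill the global model's distribution over the remaining classes.

    local_logits, global_logits: (batch, num_classes) raw logits
    targets: (batch,) integer class labels
    tau: softmax temperature (illustrative default)
    """
    num_classes = local_logits.size(1)
    # Boolean mask marking the true class of each sample.
    true_mask = F.one_hot(targets, num_classes).bool()
    # Keep only the not-true logits; each row loses exactly one entry.
    nt_local = local_logits[~true_mask].view(-1, num_classes - 1)
    nt_global = global_logits[~true_mask].view(-1, num_classes - 1)
    # KL divergence between the softened not-true distributions.
    log_p = F.log_softmax(nt_local / tau, dim=1)
    q = F.softmax(nt_global / tau, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * (tau ** 2)
```

In local training this term would be added to the usual cross-entropy loss, so the client fits its own labels while the global model's view of the classes it does not see locally is preserved.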

Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, Se-Young Yun • 2021

Related benchmarks

Task                  | Dataset              | Result                      | Rank
----------------------|----------------------|-----------------------------|-----
Image Classification  | MNIST (test)         | Accuracy: 96.97             | 882
Image Classification  | CIFAR-100            | Top-1 Accuracy: 56.6        | 622
Image Classification  | CIFAR-10             | --                          | 507
Image Classification  | MNIST                | --                          | 395
Image Classification  | CIFAR-100            | --                          | 302
Image Classification  | Tiny-ImageNet        | Top-1 Accuracy: 46.17       | 143
Image Classification  | CIFAR-100            | Nominal Accuracy: 32.37     | 116
Image Classification  | ImageNet-100 (test)  | Clean Accuracy: 44.08       | 109
Image Classification  | CINIC-10             | Accuracy: 48.07             | 59
Image Classification  | Tiny-ImageNet        | Validation Accuracy: 0.4617 | 57
Showing 10 of 37 rows

Other info

Code
