Adaptive Federated Optimization

About

Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyze their convergence in the presence of heterogeneous data for general non-convex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.

Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, H. Brendan McMahan • 2020
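Below is a minimal sketch, in NumPy, of the server-side pattern the abstract describes: clients run a few steps of local SGD, the server averages their model deltas into a pseudo-gradient, and applies an Adam-style adaptive update (FedAdam-style). The toy quadratic client objective, function names, and hyperparameter values are illustrative assumptions, not the paper's setup; swapping the second-moment update gives Adagrad- or Yogi-style variants.

```python
import numpy as np

def local_sgd(x, target, lr=0.01, steps=5):
    """Client update: a few SGD steps on a toy quadratic loss
    0.5 * ||x - target||^2, standing in for real local training."""
    x = x.copy()
    for _ in range(steps):
        x -= lr * (x - target)  # gradient of the quadratic is (x - target)
    return x

def fed_adam(clients, x0, rounds=100, server_lr=0.1,
             beta1=0.9, beta2=0.99, tau=1e-3):
    """Server loop: average client deltas into a pseudo-gradient and
    apply an Adam-style server update. Changing the v update to an
    additive or sign-based rule yields Adagrad-/Yogi-style servers."""
    x = x0.copy()
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for _ in range(rounds):
        # Each client starts from the current server model and reports
        # the change it made locally.
        deltas = [local_sgd(x, c) - x for c in clients]
        delta = np.mean(deltas, axis=0)           # pseudo-gradient
        m = beta1 * m + (1 - beta1) * delta       # first moment
        v = beta2 * v + (1 - beta2) * delta ** 2  # Adam-style second moment
        x = x + server_lr * m / (np.sqrt(v) + tau)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Heterogeneous clients: each has a different local optimum.
    clients = [rng.normal(loc=i, scale=0.5, size=3) for i in range(4)]
    print("server model:", fed_adam(clients, x0=np.zeros(3)))
```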

Related benchmarks

Task | Dataset | Result | Rank
Object Hallucination Evaluation | POPE | Accuracy 72.5 | 1455
Multimodal Evaluation | MME | -- | 658
Multimodal Understanding | MMBench | Accuracy 34.6 | 637
Image Classification | CIFAR10 (test) | Accuracy 70 | 585
Multimodal Reasoning | MM-Vet | MM-Vet Score 24.6 | 431
Image Classification | CIFAR-10 (test) | Accuracy 86.44 | 410
Image Classification | Tiny ImageNet (test) | Accuracy 44.81 | 362
Multimodal Understanding | SEED | Accuracy 25.3 | 183
Multimodal Perception and Cognition | MME | Overall Score 1000 | 182
Image Classification | CIFAR-100 (test) | -- | 175

(Showing 10 of 94 rows)
