
Fair Resource Allocation in Federated Learning

About

Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair (specifically, a more uniform) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a communication-efficient method, q-FedAvg, that is suited to federated networks. We validate both the effectiveness of q-FFL and the efficiency of q-FedAvg on a suite of federated datasets with both convex and non-convex models, and show that q-FFL (along with q-FedAvg) outperforms existing baselines in terms of the resulting fairness, flexibility, and efficiency.
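As a rough illustration of the idea described above, the sketch below implements the q-FFL objective (reweighting each device's loss as F_k^(q+1)/(q+1)) and a q-FedSGD-style aggregation step, following the update form reported in the paper. Function names, the toy inputs, and the NumPy-based structure are this sketch's own choices, not the authors' reference code; treat it as a minimal, hedged reconstruction.

```python
import numpy as np

def q_ffl_objective(losses, weights, q):
    """q-FFL objective: sum_k p_k * F_k(w)^(q+1) / (q+1).

    q = 0 recovers the standard weighted-average (FedAvg) loss;
    larger q penalizes devices with high loss more heavily,
    pushing toward a more uniform accuracy distribution.
    """
    losses = np.asarray(losses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * losses ** (q + 1) / (q + 1)))

def q_fedsgd_step(w, client_grads, client_losses, q, L):
    """One q-FedSGD-style aggregation step (sketch).

    For each participating device k with loss F_k and gradient g_k:
        Delta_k = F_k^q * g_k
        h_k     = q * F_k^(q-1) * ||g_k||^2 + L * F_k^q
    and the server updates  w <- w - sum_k Delta_k / sum_k h_k,
    where L is a (assumed known) Lipschitz-type constant.
    """
    deltas, hs = [], []
    for g, F in zip(client_grads, client_losses):
        g = np.asarray(g, dtype=float)
        deltas.append(F ** q * g)
        hs.append(q * F ** (q - 1) * np.dot(g, g) + L * F ** q)
    return np.asarray(w, dtype=float) - sum(deltas) / sum(hs)
```

For q = 0 the objective reduces to the usual weighted average loss and the step reduces to a plain averaged-gradient update scaled by 1/L, which is how the method interpolates between FedAvg and more uniform (fairness-oriented) solutions as q grows.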

Tian Li, Maziar Sanjabi, Ahmad Beirami, Virginia Smith • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-10 (test) | Accuracy | 70.16 | 3381
Image Classification | MNIST i.i.d. (test) | Test Accuracy | 91.873 | 54
MRI Prostate Segmentation | Prostate MRI (test) | Client 1 Score | 90.94 | 34
Federated Learning Fairness | Fashion MNIST (test) | Accuracy Variance | 1.151 | 28
Image Classification | EMNIST Dir(0.1) (test) | Test Accuracy | 69.2 | 28
Fundus Segmentation | Fundus (test) | Client 1 Score | 86.24 | 17
Federated Learning Image Classification | DirtyMNIST | Max r_k(θ) | -0.062 | 12
Federated Learning Classification | EMNIST dir(alpha=0.1) (test) | Max r_k(θ) | 0.097 | 10
Image Classification | EMNIST alpha = 0.1 | Max r_k(θ) | 0.118 | 10
Medical Image Segmentation | RIF (test) | Site 1 Score | 0.7783 | 9

Showing 10 of 19 rows.
