Fair Resource Allocation in Federated Learning
About
Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair (specifically, a more uniform) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a communication-efficient method, q-FedAvg, that is suited to federated networks. We validate both the effectiveness of q-FFL and the efficiency of q-FedAvg on a suite of federated datasets with both convex and non-convex models, and show that q-FFL (along with q-FedAvg) outperforms existing baselines in terms of the resulting fairness, flexibility, and efficiency.
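The q-FFL objective reweights each device's loss by raising it to the power q+1, so larger q places more emphasis on devices with higher loss and pushes the accuracy distribution toward uniformity. A minimal sketch of the objective is below; `qffl_objective` is an illustrative name, not part of any released library, and the per-device losses and weights are hypothetical inputs.

```python
import numpy as np

def qffl_objective(losses, weights, q):
    """q-FFL objective: sum_k p_k / (q+1) * F_k(w)^(q+1).

    losses  -- per-device empirical losses F_k(w)
    weights -- device weights p_k (e.g., proportional to sample counts)
    q       -- fairness parameter; q = 0 recovers the standard
               weighted-average FedAvg objective, while larger q
               up-weights devices with higher loss.
    """
    losses = np.asarray(losses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights / (q + 1.0) * losses ** (q + 1.0))

# Example: two devices with unequal losses.
losses, weights = [0.2, 1.0], [0.5, 0.5]
print(qffl_objective(losses, weights, q=0.0))  # plain weighted average: 0.6
print(qffl_objective(losses, weights, q=2.0))  # emphasizes the high-loss device
```

With q = 0 the objective is the usual weighted average of losses; increasing q makes the high-loss device dominate the sum, which is the mechanism behind the more uniform accuracy distribution described above.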
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-10 (test) | Accuracy: 70.16 | 3381 |
| Image Classification | MNIST i.i.d. (test) | Test Accuracy: 91.873 | 54 |
| MRI Prostate Segmentation | Prostate MRI (test) | Client 1 Score: 90.94 | 34 |
| Federated Learning Fairness | Fashion MNIST (test) | Accuracy Variance: 1.151 | 28 |
| Image Classification | EMNIST Dir(0.1) (test) | Test Accuracy: 69.2 | 28 |
| Fundus Segmentation | Fundus (test) | Client 1 Score: 86.24 | 17 |
| Federated Learning Image Classification | DirtyMNIST | Max r_k(θ): -0.062 | 12 |
| Federated Learning Classification | EMNIST dir(alpha=0.1) (test) | Max r_k(theta): 0.097 | 10 |
| Image Classification | EMNIST alpha = 0.1 | Max r_k(theta): 0.118 | 10 |
| Medical Image Segmentation | RIF (test) | Site 1 Score: 0.7783 | 9 |