One-Shot Federated Learning
About
We present one-shot federated learning, in which a central server learns a global model over a network of federated devices in a single round of communication. Our approach, which draws on ensemble learning and knowledge aggregation, achieves an average relative gain of 51.5% in AUC over local baselines and comes within 90.1% of the (unattainable) global ideal. We discuss these methods and identify several promising directions for future work.
Neel Guha, Ameet Talwalkar, Virginia Smith • 2019
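The one-shot protocol sketched in the abstract, each device training a model locally, uploading it once, and the server combining the local models into an ensemble, can be illustrated with a toy simulation. This is a hedged sketch under assumed details (five simulated clients, a simple gradient-descent logistic model, probability averaging as the aggregation rule); the paper's actual models and aggregation strategies may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_local_logistic(X, y, lr=0.1, steps=200):
    """Plain gradient-descent logistic regression on one client's local data."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    return 1.0 / (1.0 + np.exp(-X @ w))

# Simulate 5 clients, each holding a small local dataset drawn from the
# same underlying linear task (an illustrative assumption, not the paper's setup).
true_w = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(5):
    X = rng.normal(size=(40, 3))
    y = (X @ true_w + 0.3 * rng.normal(size=40) > 0).astype(float)
    clients.append((X, y))

# Single communication round: each client sends its trained model to the server once.
local_models = [train_local_logistic(X, y) for X, y in clients]

# Server-side ensembling: average the per-client predicted probabilities.
X_test = rng.normal(size=(200, 3))
y_test = (X_test @ true_w > 0).astype(float)
ensemble_p = np.mean([predict_proba(w, X_test) for w in local_models], axis=0)
accuracy = np.mean((ensemble_p > 0.5) == y_test)
print(f"ensemble accuracy: {accuracy:.3f}")
```

The key property is that communication happens exactly once: no iterative averaging rounds are needed, only a single upload of each local model followed by server-side aggregation.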
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Accuracy | 32.25 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy | 79.91 | 3381 |
| Image Classification | SVHN (test) | Accuracy | 85.7 | 199 |
| Classification | fMNIST (test) | Accuracy | 66.19 | 149 |
| Graph Classification | ENZYMES 1.0 (test) | AUC | 49 | 25 |
| Graph Classification | IMDB-BINARY 1.0 (test) | AUC | 0.54 | 25 |
| Graph Classification | IMDB-MULTI (IMDB-M) 1.0 (test) | AUC | 52 | 25 |
| Graph Classification | PROTEINS 1.0 (test) | AUC | 0.76 | 17 |
| Graph Classification | MUTAG 1.0 (test) | AUC | 0.72 | 17 |
| Image Classification | Tiny-ImageNet | Accuracy (alpha=0.1) | 30.85 | 9 |
_Showing 10 of 15 benchmark rows._