Fed-Focal Loss for imbalanced data classification in Federated Learning
About
The Federated Learning setting has a central server coordinating the training of a model across a network of devices. One of the challenges is unstable training performance when the dataset has a class imbalance. In this paper, we address this by introducing a new loss function called Fed-Focal Loss. We propose to address the class imbalance by reshaping the cross-entropy loss so that it down-weights the loss assigned to well-classified examples, along the lines of focal loss. Additionally, by leveraging a tunable sampling framework, we take into account selective client model contributions on the central server to further focus the detector during training and hence improve its robustness. Using a detailed experimental analysis with the VIRTUAL (Variational Federated Multi-Task Learning) approach, we demonstrate consistently superior performance in both the balanced and unbalanced scenarios on the MNIST, FEMNIST, VSN and HAR benchmarks. We obtain a more than 9% (absolute percentage-point) improvement on the unbalanced MNIST benchmark. We further show that our technique can be adopted across multiple Federated Learning algorithms to obtain improvements.
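The core idea of reshaping cross-entropy to down-weight well-classified examples can be sketched as a standard focal loss. This is a minimal illustrative implementation, not the paper's code; the default `gamma` and `alpha` values are the common focal-loss defaults and the Fed-Focal settings may differ.

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    As gamma grows, the (1 - p_t)^gamma modulating factor shrinks the loss
    on well-classified examples (p_t near 1), focusing training on hard ones.
    With gamma=0 and alpha=1 this reduces to plain cross-entropy.
    Defaults are illustrative assumptions, not the paper's settings.
    """
    probs = np.clip(probs, 1e-7, 1.0 - 1e-7)
    # p_t: probability assigned to the true class
    p_t = np.where(labels == 1, probs, 1.0 - probs)
    alpha_t = np.where(labels == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

For example, a confidently correct prediction (`p_t = 0.9`) incurs a much smaller loss than a hard example (`p_t = 0.3`), which is exactly the down-weighting the abstract describes.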
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ISIC 2019 (test) | -- | -- | 43 |
| Medical Image Classification | SC (Skin Cancer) (test) | Accuracy | 54.23 | 33 |
| Medical Image Classification | BT (Brain Tumor) (test) | Accuracy | 67.26 | 31 |
| Medical Image Classification | Real (last five rounds average) | BACC | 44 | 11 |
| Medical Image Classification | ICH (test) | Balanced Accuracy | 63.04 | 11 |
| Medical Image Classification | real (test) | Accuracy | 59.97 | 7 |
| Medical Image Classification | BT2 (whole dataset) | Global Accuracy | 0.7999 | 7 |