
FedRD: Reducing Divergences for Generalized Federated Learning via Heterogeneity-aware Parameter Guidance

About

Heterogeneous federated learning (HFL) aims to enable effective and privacy-preserving collaboration among different entities. Because newly joined clients require significant adjustment and additional training to align with the existing system, generalizing federated learning models to unseen clients under heterogeneous data has become increasingly crucial. We therefore highlight two unsolved challenges in federated domain generalization: Optimization Divergence and Performance Divergence. To tackle these challenges, we propose FedRD, a novel heterogeneity-aware federated learning algorithm that combines parameter-guided global generalization aggregation with local debiased classification to reduce both divergences, aiming to obtain a global model that performs well for participating and unseen clients alike. Extensive experiments on public multi-domain datasets demonstrate that our approach outperforms competing baselines on this problem by a substantial margin.
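The abstract does not specify FedRD's exact aggregation rule, but the idea of heterogeneity-aware, parameter-guided aggregation can be illustrated with a minimal sketch: the server weights each client's parameters by how closely they align with the current global model, so strongly divergent clients contribute less. The weighting heuristic below (inverse parameter divergence) is an assumption for illustration only, not the paper's method.

```python
import numpy as np

def heterogeneity_aware_aggregate(global_params, client_params, eps=1e-8):
    """Aggregate client parameter vectors into a new global model.

    Illustrative heuristic (not the exact FedRD rule): each client is
    weighted by the inverse of its L2 divergence from the current global
    parameters, so updates that diverge strongly are down-weighted.
    """
    divergences = np.array(
        [np.linalg.norm(cp - global_params) for cp in client_params]
    )
    # Inverse-divergence weights, normalized to sum to 1.
    weights = 1.0 / (divergences + eps)
    weights = weights / weights.sum()
    # Weighted average of client parameter vectors.
    return sum(w * cp for w, cp in zip(weights, client_params))
```

With this rule, a single outlier client (e.g., one holding data from a very different domain) pulls the global model far less than it would under plain FedAvg-style uniform averaging.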

Kaile Wang, Jiannong Cao, Yu Yang, Xiaoyin Li, Mingjin Zhang • 2026

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | PACS | Overall Average Accuracy: 71.52 | 230
Image Classification | Office-Home (test) | Mean Accuracy: 52.01 | 199
Classification | VLCS | Average Accuracy: 66.38 | 15
Image Classification | DomainNet mini (test) | Accuracy (Clipart): 55.74 | 8
