Towards Explainable Privacy Preservation in Federated Learning via Shapley Value-Guided Noise Injection
About
This paper proposes FedSVA, an explainable differential privacy (DP) mechanism for federated learning (FL) that dynamically calibrates noise injection based on each attribute's privacy contribution, quantified via Shapley values. Unlike heuristic DP methods, FedSVA measures each attribute's influence on model training and adjusts the noise scale accordingly, providing rigorous privacy guarantees while minimizing utility loss. Theoretical analysis confirms convergence and DP properties. Experiments on CIFAR-10 and FEMNIST show state-of-the-art privacy-utility trade-offs and robust defense against reconstruction attacks.
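The abstract describes two steps: estimating per-attribute Shapley values, then scaling the injected Gaussian noise by those values. The paper's exact algorithm is not given here; the following is a minimal sketch under stated assumptions (a Monte Carlo permutation estimator for Shapley values, and noise scaled proportionally to each attribute's normalized contribution; the function names `marginal_shapley` and `calibrated_noise` are hypothetical, not from the paper).

```python
import numpy as np

def marginal_shapley(contrib_fn, n_attrs, n_samples=200, seed=0):
    """Monte Carlo estimate of per-attribute Shapley values.

    contrib_fn(subset) -> a scalar score for training with that attribute
    subset; a stand-in for the paper's privacy-contribution measure.
    """
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_attrs)
    for _ in range(n_samples):
        perm = rng.permutation(n_attrs)
        subset = []
        prev = contrib_fn(subset)
        for a in perm:
            subset.append(a)
            cur = contrib_fn(subset)
            phi[a] += cur - prev  # marginal contribution of attribute a
            prev = cur
    return phi / n_samples

def calibrated_noise(update, phi, base_sigma=1.0, seed=1):
    """Gaussian noise per attribute, scaled by its normalized Shapley score.

    Assumption: attributes with larger contributions receive more noise.
    """
    w = np.abs(phi) / (np.abs(phi).sum() + 1e-12)
    sigma = base_sigma * (1.0 + w)
    return update + np.random.default_rng(seed).normal(0.0, sigma, size=update.shape)

# Toy demo: an additive score, so Shapley values recover the per-attribute weights.
weights = np.array([0.5, 0.2, 0.3])
phi = marginal_shapley(lambda s: float(sum(weights[list(s)])), 3)
noisy = calibrated_noise(np.zeros(3), phi, base_sigma=0.1)
```

With an additive score function, each attribute's marginal contribution is constant, so the estimator returns the weights exactly; the interesting (and paper-specific) part is defining `contrib_fn` from model behavior.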
Yunbo Li, Jiaping Gui, Yue Wu • 2025
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | FEMNIST (test) | Accuracy: 76.06 | 104 |
| Image Classification | CIFAR-10 (test) | Best Accuracy: 82.89 | 21 |