
Bad-PFL: Exploring Backdoor Attacks against Personalized Federated Learning

About

Data heterogeneity and backdoor attacks rank among the most significant challenges facing federated learning (FL). To address data heterogeneity, personalized federated learning (PFL) lets each client maintain a private personalized model that captures client-specific knowledge. Meanwhile, vanilla FL has proven vulnerable to backdoor attacks. However, recent work in the PFL community has demonstrated a potential immunity against such attacks. This paper explores this intersection further, revealing that existing federated backdoor attacks fail in PFL because backdoors based on manually designed triggers struggle to survive in personalized models. To tackle this, we design Bad-PFL, which employs features from natural data as its trigger. As long as the model is trained on natural data, it inevitably embeds the backdoor associated with our trigger, ensuring its longevity in personalized models. Moreover, the trigger undergoes mutual reinforcement training with the model, further solidifying the backdoor's durability and enhancing attack effectiveness. Large-scale experiments across three benchmark datasets demonstrate the superior performance of our attack against various PFL methods, even when they are equipped with state-of-the-art defense mechanisms.
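To make the "mutual reinforcement" idea concrete, here is a minimal toy sketch (not the authors' code; all names, shapes, and hyperparameters are illustrative assumptions). It alternates between (1) training a linear model on clean data plus triggered inputs relabeled to the target class, and (2) updating an additive trigger so it moves inputs toward the model's target-class direction, i.e., toward natural features of the target class:

```python
# Hypothetical sketch of the mutual-reinforcement loop described in the
# abstract. A 2-D logistic-regression "model" and an additive trigger are
# optimized alternately: the model is trained on clean + triggered data,
# and the trigger is pushed toward natural target-class features.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian classes; class 1 is the target.
X0 = rng.normal(loc=-2.0, scale=0.5, size=(50, 2))  # class 0
X1 = rng.normal(loc=+2.0, scale=0.5, size=(50, 2))  # class 1 (target)
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)        # linear model: predict class 1 if x @ w > 0
trigger = np.zeros(2)  # additive trigger, learned jointly with w

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr_model, lr_trig = 0.1, 0.2
for _ in range(300):
    # (1) Model step: logistic regression on clean data plus triggered
    #     class-0 inputs relabeled to the target class 1, so ordinary
    #     training on this mixture embeds the backdoor.
    Xt = np.vstack([X, X0 + trigger])
    yt = np.concatenate([y, np.ones(50)])
    p = sigmoid(Xt @ w)
    w -= lr_model * Xt.T @ (p - yt) / len(yt)

    # (2) Trigger step: gradient of the target-class loss w.r.t. the
    #     trigger moves it along w, toward class-1 (natural) features,
    #     so trigger and model reinforce each other.
    p0 = sigmoid((X0 + trigger) @ w)
    trigger -= lr_trig * np.mean(p0 - 1.0) * w

# Attack success rate: triggered class-0 inputs classified as class 1.
asr = np.mean(sigmoid((X0 + trigger) @ w) > 0.5)
clean_acc = np.mean((sigmoid(X @ w) > 0.5) == y)
print(f"clean acc: {clean_acc:.2f}, attack success: {asr:.2f}")
```

Because the trigger update self-limits once triggered inputs are confidently classified as the target class, the loop settles into a trigger that survives continued training on natural data, which is the intuition behind the attack's durability in personalized models.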

Mingyuan Fan, Zhanyi Hu, Fuyi Wang, Cen Chen • 2025

Related benchmarks

Task             Dataset            Result                     Rank
Backdoor Attack  CIFAR-10 (test)    Backdoor Accuracy: 79.62   30
Backdoor Attack  GTSRB (test)       Accuracy: 96.44            4
Backdoor Attack  CIFAR-100 (test)   Accuracy: 51.18            4
Backdoor Attack  SVHN (test)        Accuracy: 0.9228           4
