Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses

About

Federated graph learning (FedGL) is an emerging federated learning (FL) framework that extends FL to graph data from diverse sources. FL for non-graph data has been shown to be vulnerable to backdoor attacks, which inject a shared backdoor trigger into the training data so that the trained backdoored FL model predicts testing data containing the trigger as the attacker desires. However, backdoor attacks against FedGL are largely unexplored, and no effective defense exists. In this paper, we aim to address this significant deficiency. First, we propose an effective, stealthy, and persistent backdoor attack on FedGL. Our attack uses a subgraph as the trigger and designs an adaptive trigger generator that derives an effective trigger location and shape for each graph. We show that empirical defenses struggle to detect or remove our generated triggers. To mitigate the attack, we further develop a certified defense for any backdoored FedGL model against a trigger of any shape at any location. Our defense carefully divides a testing graph into multiple subgraphs and builds a majority-vote-based ensemble classifier over these subgraphs. We then derive deterministic certified robustness guarantees for the ensemble classifier and prove their tightness. We extensively evaluate our attack and defense on six graph datasets. Our attack achieves >90% backdoor accuracy on almost all datasets. Our defense results show that, in certain cases, the certified accuracy on clean testing graphs against an arbitrary trigger of size 20 is close to the normal accuracy under no attack, while a moderate gap remains in other cases. Moreover, the certified backdoor accuracy is always 0 for backdoored testing graphs generated by our attack, implying that our defense can fully mitigate the attack. Source code is available at: https://github.com/Yuxin104/Opt-GDBA.
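The certified defense described above (divide the testing graph into subgraphs, take a majority vote over per-subgraph predictions) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the hash-based node-division scheme, the function names, and the simplified certification condition (a trigger touching at most s subgraphs cannot flip the vote when the top-vs-runner-up margin exceeds 2s; tie-breaking is ignored) are all assumptions for the sake of the sketch.

```python
import hashlib
from collections import Counter

def divide_graph(nodes, edges, T):
    # Hypothetical division scheme: hash each node id into one of T groups,
    # and keep only edges whose endpoints land in the same group.
    group = {v: int(hashlib.md5(str(v).encode()).hexdigest(), 16) % T
             for v in nodes}
    subgraphs = [{"nodes": [], "edges": []} for _ in range(T)]
    for v in nodes:
        subgraphs[group[v]]["nodes"].append(v)
    for u, v in edges:
        if group[u] == group[v]:
            subgraphs[group[u]]["edges"].append((u, v))
    return subgraphs

def ensemble_predict(subgraphs, base_classifier):
    # Majority vote over per-subgraph predictions.
    votes = Counter(base_classifier(sg) for sg in subgraphs)
    ranked = votes.most_common()
    label, top = ranked[0]
    runner = ranked[1][1] if len(ranked) > 1 else 0
    # Simplified margin condition: a trigger that affects at most s
    # subgraphs cannot change the winning label if top - runner > 2*s.
    certified_size = max((top - runner - 1) // 2, 0)
    return label, certified_size
```

For example, if all five subgraphs of a graph vote for the same label, the margin is 5, so any trigger altering at most 2 subgraphs is provably harmless under this condition.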

Yuxin Yang, Qiang Li, Jinyuan Jia, Yuan Hong, Binghui Wang (2) • 2024
(1) College of Computer Science and Technology, Jilin University; (2) Illinois Institute of Technology; (3) The Pennsylvania State University; (4) University of Connecticut

Related benchmarks

Task | Dataset | Metric | Result | Rank
Graph Backdoor Attack | Gossipcop | ASR | 94 | 25
Graph Backdoor Attack | Eth-Phish&Hack | ASR | 44 | 20
Backdoor Attack | Mutagenicity | ASR | 54 | 15
Backdoor Attack | FRANKENSTEIN | ASR | 40 | 15
Backdoor Attack | NCI109 | ASR | 47 | 15
Backdoor Attack | DD | ASR | 18 | 15
Adversarial Attack | NCI109 | AAS | 56 | 5
Adversarial Attack | Mutagenicity | AAS | 46 | 5
Backdoor Attack on Federated Graph Learning | DD | AAS | 5 | 5
Graph Backdoor Attack | DD | AAS | 8 | 5

(ASR = Attack Success Rate; AAS = Average Attack Success. Showing 10 of 21 rows.)
