
VFLIP: A Backdoor Defense for Vertical Federated Learning via Identification and Purification

About

Vertical Federated Learning (VFL) handles vertically partitioned data across FL participants. Recent studies have discovered a significant vulnerability of VFL to backdoor attacks that specifically target its distinct characteristics. Such attacks can therefore neutralize existing defense mechanisms designed primarily for Horizontal Federated Learning (HFL) and deep neural networks. In this paper, we present the first backdoor defense specialized for VFL, called VFLIP. VFLIP employs identification and purification techniques that operate at the inference stage, substantially improving robustness against backdoor attacks. VFLIP first identifies backdoor-triggered embeddings using a participant-wise anomaly detection approach. It then performs purification, which removes the embeddings identified as malicious and reconstructs all embeddings from the remaining ones. We conduct extensive experiments on CIFAR10, CINIC10, Imagenette, NUS-WIDE, and BankMarketing to demonstrate that VFLIP effectively mitigates backdoor attacks in VFL. https://github.com/blingcho/VFLIP-esorics24
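The two-stage inference pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the linear per-party "reconstructor" `W`, the function names, and the fixed threshold are all assumptions standing in for VFLIP's trained reconstruction model and its actual anomaly-scoring rule.

```python
import numpy as np

# Hypothetical stand-in for VFLIP's trained reconstruction model:
# a random linear map that predicts each party's embedding from the others.
rng = np.random.default_rng(0)
N_PARTIES, DIM = 4, 8
W = rng.normal(size=(N_PARTIES, N_PARTIES * DIM, DIM)) * 0.1  # illustrative weights

def reconstruct(embeddings, target, flagged=None):
    """Predict party `target`'s embedding from the other parties' embeddings."""
    x = embeddings.copy()
    x[target] = 0.0                 # mask out the target party itself
    if flagged is not None:
        x[flagged] = 0.0            # also drop parties identified as malicious
    return x.reshape(-1) @ W[target]

def vflip_inference(embeddings, threshold):
    # Stage 1 (identification): participant-wise anomaly score,
    # here taken as the reconstruction error of each party's embedding.
    errors = np.array([
        np.linalg.norm(embeddings[p] - reconstruct(embeddings, p))
        for p in range(N_PARTIES)
    ])
    flagged = errors > threshold
    # Stage 2 (purification): rebuild every embedding from the
    # non-flagged embeddings before passing them to the top model.
    purified = np.stack([
        reconstruct(embeddings, p, flagged=flagged)
        for p in range(N_PARTIES)
    ])
    return purified, flagged
```

Because both stages run at inference time on the received embeddings, this defense requires no change to the participants' local training procedures.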

Yungi Cho, Woorim Han, Miseon Yu, Younghan Lee, Ho Bae, Yunheung Paek• 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | MNIST | Clean Accuracy: 94 | 71 |
| Image Classification | CINIC-10 | Accuracy: 69 | 59 |
| Backdoor Defense | CIFAR-10 (test) | Clean Accuracy: 72 | 58 |
| Backdoor Defense | SVHN (test) | Model Accuracy (MA): 76 | 18 |
| Backdoor Defense | Bank Marketing (test) | Misclassification Accuracy: 73 | 18 |
| Poisoning Defense in U-shape Split Learning | Imagenette | Accuracy: 53 | 10 |
| Poisoning Defense in U-shape Split Learning | CIFAR-10 | Accuracy: 56 | 10 |
| Poisoning Defense in U-shape Split Learning | MNIST | Accuracy: 86 | 10 |
| Poisoning Defense in U-shape Split Learning | CINIC-10 | Accuracy: 52 | 10 |
