ProtoGuard-SL: Prototype Consistency Based Backdoor Defense for Vertical Split Learning
About
Vertical split learning (SL) enables collaborative model training across parties holding complementary features without sharing raw data, but recent work has shown that it is highly vulnerable to poisoning-based backdoor attacks operating on intermediate embeddings. Through compromised clients, adversaries can inject stealthy triggers that manipulate the server-side model while remaining difficult to detect, and existing defenses provide limited robustness against adaptive attacks. In this paper, we propose ProtoGuard-SL, a server-side defense that improves the robustness of split learning by exploiting class-conditional representation consistency in the embedding space. Our approach is motivated by the observation that benign embeddings within the same class exhibit stable semantic alignment, whereas poisoned embeddings inevitably disrupt this structure. ProtoGuard-SL adopts a two-stage framework: it first constructs robust class prototypes and transforms embeddings into a prototype-consistency representation, then applies a class-conditional, distribution-free conformal filtering strategy to identify and remove anomalous embeddings. Extensive experiments on three datasets (CIFAR-10, SVHN, and Bank Marketing) under three different attack settings demonstrate that our method achieves state-of-the-art performance.
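The two-stage idea can be illustrated with a minimal sketch. This is not the paper's implementation: the prototype estimator (coordinate-wise median), the consistency score (cosine distance to the class prototype), the calibration-set split, and the `alpha` miscoverage level are all illustrative assumptions; only the overall structure — robust prototypes, prototype-consistency scores, then a class-conditional conformal threshold — follows the description above.

```python
import numpy as np


def build_prototypes(embeddings, labels):
    # Stage 1 (assumed estimator): robust class prototypes via the
    # per-class coordinate-wise median of embeddings.
    return {c: np.median(embeddings[labels == c], axis=0)
            for c in np.unique(labels)}


def consistency_scores(embeddings, labels, prototypes):
    # Prototype-consistency representation (assumed form): nonconformity
    # of each embedding = cosine distance to its own class prototype.
    scores = np.empty(len(embeddings))
    for i, (e, c) in enumerate(zip(embeddings, labels)):
        p = prototypes[c]
        scores[i] = 1.0 - e @ p / (np.linalg.norm(e) * np.linalg.norm(p) + 1e-12)
    return scores


def conformal_filter(embeddings, labels, calib_emb, calib_labels, alpha=0.05):
    # Stage 2: class-conditional, distribution-free conformal filter.
    # For each class, keep embeddings whose nonconformity lies below the
    # (1 - alpha) conformal quantile computed on a clean calibration set.
    protos = build_prototypes(calib_emb, calib_labels)
    keep = np.zeros(len(embeddings), dtype=bool)
    for c in np.unique(labels):
        cal_mask = calib_labels == c
        cal = consistency_scores(calib_emb[cal_mask],
                                 np.full(cal_mask.sum(), c), protos)
        n = len(cal)
        # Finite-sample conformal quantile: ceil((n+1)(1-alpha)) / n.
        q = np.quantile(cal, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
        mask = labels == c
        keep[mask] = consistency_scores(embeddings[mask],
                                        np.full(mask.sum(), c), protos) <= q
    return keep
```

On synthetic two-class embeddings, a poisoned sample whose embedding sits near the wrong class's prototype gets a large nonconformity score and is filtered out, while clean samples pass the per-class threshold.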
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Backdoor Defense | CIFAR-10 (test) | Clean Accuracy: 85 | 58 |
| Backdoor Defense | SVHN (test) | Model Accuracy (MA): 87 | 18 |
| Backdoor Defense | Bank Marketing (test) | Misclassification Accuracy: 86 | 18 |