
Defending the Edge: Representative-Attention Defense against Backdoor Attacks in Federated Learning

About

Federated learning (FL) remains highly vulnerable to adaptive backdoor attacks that preserve stealth by closely imitating benign update statistics. Existing defenses predominantly rely on anomaly detection in parameter or gradient space, overlooking behavioral constraints that backdoor attacks must satisfy to ensure reliable trigger activation. These anomaly-centric methods fail against adaptive attacks that normalize update magnitudes and mimic benign statistical patterns while preserving backdoor functionality, creating a fundamental detection gap. To address this limitation, this paper introduces FeRA (Federated Representative Attention) -- a novel attention-driven defense that shifts the detection paradigm from anomaly-centric to consistency-centric analysis. FeRA exploits the intrinsic need for backdoor persistence across training rounds, identifying malicious clients through suppressed representation-space variance, a property orthogonal to traditional magnitude-based statistics. The framework conducts multi-dimensional behavioral analysis combining spectral and spatial attention, directional alignment, mutual similarity, and norm inflation across two complementary detection mechanisms: consistency analysis and norm-inflation detection. Through this mechanism, FeRA isolates malicious clients that exhibit low-variance consistency or magnitude amplification. Extensive evaluation across six datasets, nine attacks, and three model architectures under both Independent and Identically Distributed (IID) and non-IID settings confirms that FeRA achieves superior backdoor mitigation. Across different non-IID settings, FeRA achieved the lowest average Backdoor Accuracy (BA) of about 1.67%, while maintaining high clean accuracy compared to other state-of-the-art defenses. The code is available at https://github.com/Peatech/FeRA_defense.git.
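The abstract's two detection signals (suppressed representation-space variance across rounds, and norm inflation relative to the cohort) can be illustrated with a minimal sketch. This is not the authors' implementation; all function names, thresholds (`var_quantile`, `norm_ratio`), and the use of a simple per-dimension variance are illustrative assumptions.

```python
import numpy as np

def representation_variance(client_reprs: np.ndarray) -> float:
    # Mean per-dimension variance of one client's representation
    # vectors across recent rounds (shape: rounds x dim). A backdoored
    # client that must keep its trigger active tends to show
    # suppressed variance here.
    return float(np.var(client_reprs, axis=0).mean())

def norm_inflation_ratio(update: np.ndarray, all_updates: list) -> float:
    # A client's update norm relative to the round's median norm;
    # ratios well above 1 suggest magnitude amplification.
    median_norm = float(np.median([np.linalg.norm(u) for u in all_updates]))
    return float(np.linalg.norm(update) / median_norm)

def flag_clients(reprs_by_client, updates, var_quantile=0.2, norm_ratio=2.0):
    # Flag clients whose cross-round variance falls in the lowest
    # quantile OR whose update norm is inflated past the threshold.
    variances = np.array([representation_variance(r) for r in reprs_by_client])
    var_cut = np.quantile(variances, var_quantile)
    flagged = set()
    for i, (v, u) in enumerate(zip(variances, updates)):
        if v <= var_cut or norm_inflation_ratio(u, updates) >= norm_ratio:
            flagged.add(i)
    return flagged
```

For example, a client that submits near-identical representations every round and a 10x-norm update would be flagged by both signals, whereas benign clients with naturally varying representations and median-scale norms would pass.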

Chibueze Peace Obioma, Youcheng Sun, Mustafa A. Mustafa • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | Accuracy 61.84 | 3518 |
| Image Classification | Tiny ImageNet (test) | -- | 265 |
| Backdoor Defense | CIFAR-10 (test) | Clean Accuracy 84.26 | 40 |
| Image Classification | CIFAR-10 IID | Average BA 0.0508 | 37 |
| Backdoor Defense | CIFAR-10 alpha=0.5, Neurotoxin attack, Round 2001, 100 rounds (test) | MA 87.16 | 7 |
| Backdoor Defense | CIFAR-10 alpha=0.7, Neurotoxin attack, Round 2001, 100 rounds (test) | Mean Accuracy 86.94 | 7 |
| Backdoor Defense | CIFAR-10 alpha=0.5 (Non-IID) | MA 85.74 | 7 |
| Backdoor Defense | CIFAR-10 (Aggregate, IID & Non-IID) | Average Model Accuracy 87 | 7 |
| Backdoor Defense | CIFAR-10, average alpha, Neurotoxin attack (test) | MA 86.05 | 7 |
| Backdoor Defense | CIFAR-10 alpha=0.2, Neurotoxin attack, Round 2001, 100 rounds (test) | Mean Accuracy 84.05 | 7 |

Showing 10 of 13 rows
