
Beyond Denial-of-Service: The Puppeteer's Attack for Fine-Grained Control in Ranking-Based Federated Learning

About

Federated Rank Learning (FRL) is a promising Federated Learning (FL) paradigm designed to be resilient against model poisoning attacks due to its discrete, ranking-based update mechanism. Unlike traditional FL methods that exchange continuous model updates, FRL uses discrete rankings as the communication medium between clients and the server. This approach significantly reduces communication costs and limits an adversary's ability to scale or optimize malicious updates in the continuous space, thereby enhancing robustness. This makes FRL particularly appealing for applications where system security and data privacy are crucial, such as web-based auction and bidding platforms. While FRL substantially reduces the attack surface, we demonstrate that it remains vulnerable to a new class of local model poisoning attacks: fine-grained control attacks. We introduce the Edge Control Attack (ECA), the first fine-grained control attack tailored to ranking-based FL frameworks. Unlike conventional denial-of-service (DoS) attacks that cause conspicuous disruptions, ECA enables an adversary to precisely degrade a competitor's accuracy to any target level while maintaining a normal-looking convergence trajectory, thereby avoiding detection. ECA operates in two stages: (i) identifying and manipulating Ascending and Descending Edges to align the global model with the target model, and (ii) widening the selection-boundary gap to stabilize the global model at the target accuracy. Extensive experiments across seven benchmark datasets and nine Byzantine-robust aggregation rules (AGRs) show that ECA achieves fine-grained accuracy control with an average error of only 0.224%, outperforming the baseline by up to 17x. Our findings highlight the need for stronger defenses against advanced poisoning attacks. Our code is available at: https://github.com/Chenzh0205/ECA
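To make the ranking-based mechanism concrete, below is a minimal, hypothetical sketch (not the paper's actual FRL or ECA implementation) of server-side rank aggregation: each client submits a ranking of edges by importance, the server sums per-edge rank positions, and keeps the top-k edges. The final lines illustrate the intuition behind edge-level manipulation: a single attacker whose ranking targets edges near the selection boundary can flip which edge enters the global model, without an obvious outlier update. All names and values here are illustrative assumptions.

```python
import numpy as np

def aggregate_rankings(client_rankings, k):
    """Simplified rank aggregation (sketch, not the paper's algorithm).

    Each ranking lists the n edge indices from most to least important.
    The server sums each edge's position across clients and selects the
    k edges with the lowest (best) aggregate score.
    """
    n = len(client_rankings[0])
    scores = np.zeros(n)
    for ranking in client_rankings:
        for position, edge in enumerate(ranking):
            scores[edge] += position  # lower total = deemed more important
    return set(int(e) for e in np.argsort(scores)[:k])

# Three honest clients broadly agree on the importance of 6 edges.
honest = [
    [0, 1, 2, 3, 4, 5],
    [1, 0, 2, 3, 5, 4],
    [0, 2, 1, 3, 4, 5],
]
k = 3
print(aggregate_rankings(honest, k))  # honest selection: {0, 1, 2}

# One attacker demotes edge 2 (near the selection boundary) and
# promotes edge 3, flipping the boundary edge in the global model
# while every submitted ranking remains a valid permutation.
attacker = [3, 0, 1, 4, 5, 2]
print(aggregate_rankings(honest + [attacker], k))  # becomes {0, 1, 3}
```

Because every message is a permutation, the attacker cannot scale its update in continuous space; instead, fine-grained control must come from choosing *which* boundary edges to promote or demote, which is the surface ECA exploits.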

Zhihao Chen, Zirui Gong, Jianting Ning, Yanjun Zhang, Leo Yu Zhang • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Poisoning Attack Control | CIFAR10 Conv8 (test) | Control Error | 0.06 | 18
Targeted Adversarial Attack | CIFAR-10 (test) | Control Error (ξ) | 4 | 12
Fine-grained accuracy control attack | FashionMNIST | Control Error Rate | 0.59 | 10
Federated Learning Model Control | CIFAR10 | FRL Accuracy | 0.18 | 7
Model Poisoning Attack Control | CIFAR100 (test) | Average Control Error | 3.00e-4 | 2
Model Poisoning Attack Control | Location30 (test) | Avg Control Error | 7 | 2
Model Poisoning Attack Control | Purchase100 (test) | Avg Control Error (ξ) | 15 | 2
Model Poisoning Attack Control | Texas100 (test) | Avg Control Error | 0.06 | 2
Poisoning Attack Control | EMNIST LeNet (test) | Control Error | 0.47 | 2
Poisoning Attack Control | FashionMNIST (test) | Control Error | 0.4 | 2
