HO-SFL: Hybrid-Order Split Federated Learning with Backprop-Free Clients and Dimension-Free Aggregation

About

Fine-tuning large models on edge devices is severely hindered by the memory-intensive backpropagation (BP) in standard frameworks like federated learning and split learning. While substituting BP with zeroth-order optimization can significantly reduce memory footprints, it typically suffers from prohibitively degraded convergence speed. To resolve this dilemma, we propose Hybrid-Order Split Federated Learning (HO-SFL). By reformulating the split learning process within a Lagrangian framework, HO-SFL decouples the optimization landscape: The server performs precise first-order updates (i.e., BP), whereas clients conduct memory-efficient zeroth-order optimization. This hybrid design not only eliminates the need for client-side BP but also enables dimension-free model aggregation, drastically lowering communication costs. Crucially, we provide a theoretical convergence analysis, demonstrating that HO-SFL mitigates the dimension-dependent convergence slowdown of zeroth-order optimization, achieving a convergence rate comparable to first-order methods. Extensive experiments on tasks across vision and language modalities validate that HO-SFL achieves convergence speeds comparable to first-order baselines while significantly reducing communication costs and client memory footprints.
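The core memory saving on the client side comes from replacing backpropagation with a zeroth-order gradient estimate, which needs only forward passes. As a minimal sketch (not the paper's implementation; the quadratic toy objective, step size, and perturbation scale `mu` are illustrative assumptions), a two-point estimator looks like:

```python
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, seed=0):
    """Two-point zeroth-order gradient estimate of loss_fn at theta.

    Only forward evaluations are required, so no backprop graph
    (and no activation memory) is ever built on the client.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(theta.shape)  # random perturbation direction
    # Central finite difference along z approximates the directional derivative.
    g = (loss_fn(theta + mu * z) - loss_fn(theta - mu * z)) / (2 * mu)
    return g * z  # scale the direction by the estimated slope

# Toy client objective: quadratic loss around a target vector.
target = np.array([1.0, -2.0, 0.5])
loss = lambda w: float(np.sum((w - target) ** 2))

theta = np.zeros(3)
for step in range(500):
    # Fresh perturbation direction each step (seeded for reproducibility).
    theta -= 0.05 * zo_gradient(loss, theta, seed=step)
```

Because each update is determined by a scalar finite difference and a shared random seed, a client can in principle communicate its update with a few scalars rather than a full gradient vector, which is the intuition behind the dimension-free aggregation claimed in the abstract.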

Qiyuan Chen, Xian Wu, Yi Wang, Xianhao Chen • 2026

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | CIFAR-10 IID | Accuracy: 75 | 166
Image Classification | CIFAR-10 non-IID | Accuracy: 69.6 | 162
Image Classification | CIFAR-100 non-IID (test) | Test Accuracy (Avg Best): 42.5 | 113
Image Classification | CIFAR-100 IID | Accuracy: 44.2 | 42
Natural Language Understanding | GLUE | SST-2 Accuracy: 93.9 | 9
