AI & ML Efficiency Breakthrough

HO-SFL enables backprop-free fine-tuning on edge devices without the convergence penalty typical of zeroth-order methods.

March 17, 2026

Original Paper

HO-SFL: Hybrid-Order Split Federated Learning with Backprop-Free Clients and Dimension-Free Aggregation

Qiyuan Chen, Xian Wu, Yi Wang, Xianhao Chen

arXiv · 2603.14773

The Takeaway

By decoupling server-side first-order updates from client-side zeroth-order updates, the method removes the memory bottleneck that backpropagation imposes on edge hardware, making on-device federated fine-tuning of large models practical for the first time.
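
To make the decoupling concrete, here is a minimal sketch of one hybrid-order split step, assuming a MeZO-style two-point zeroth-order estimate on the client and ordinary backprop on the server. All names (hybrid_step, the split point, the hyperparameters) are illustrative, and the paper's Lagrangian reformulation and cross-client aggregation are omitted; this is not the authors' algorithm, only the first-order/zeroth-order split it builds on.

```python
# Sketch: client updates its layers with a two-point zeroth-order
# estimate (forward passes only), server updates its layers with
# standard backprop. Federated aggregation across clients is omitted.
import torch
import torch.nn as nn

torch.manual_seed(0)

client = nn.Sequential(nn.Linear(16, 32), nn.Tanh())                    # bottom layers (edge device)
server = nn.Sequential(nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))  # top layers
loss_fn = nn.MSELoss()
server_opt = torch.optim.SGD(server.parameters(), lr=1e-2)

def hybrid_step(x, y, mu=1e-3, lr_client=1e-3):
    # --- Client: zeroth-order update, no backprop, no stored activations ---
    with torch.no_grad():
        seed = torch.randint(0, 2**31 - 1, ()).item()

        def perturb(scale):
            # Regenerate the same random direction from the seed, so the
            # full perturbation vector is never stored (MeZO-style trick).
            gen = torch.Generator().manual_seed(seed)
            for p in client.parameters():
                z = torch.randn(p.shape, generator=gen)
                p.add_(scale * mu * z)

        perturb(+1); loss_plus  = loss_fn(server(client(x)), y).item()
        perturb(-2); loss_minus = loss_fn(server(client(x)), y).item()
        perturb(+1)  # restore original parameters

        g_scalar = (loss_plus - loss_minus) / (2 * mu)  # projected gradient
        gen = torch.Generator().manual_seed(seed)
        for p in client.parameters():
            z = torch.randn(p.shape, generator=gen)
            p.add_(-lr_client * g_scalar * z)

    # --- Server: ordinary first-order update on its own layers ---
    smashed = client(x).detach()  # activations sent up the split
    loss = loss_fn(server(smashed), y)
    server_opt.zero_grad(); loss.backward(); loss_val = loss.item(); server_opt.step()
    return loss_val

x, y = torch.randn(64, 16), torch.randn(64, 1)
for step in range(200):
    loss = hybrid_step(x, y)
print(f"final loss: {loss:.4f}")
```

The key memory property: the client only ever runs forward passes, so it stores no activations or optimizer state for backprop, while the server, which has no memory constraint, keeps the fast first-order updates.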

From the abstract

Fine-tuning large models on edge devices is severely hindered by the memory-intensive backpropagation (BP) in standard frameworks like federated learning and split learning. While substituting BP with zeroth-order optimization can significantly reduce memory footprints, it typically suffers from prohibitively degraded convergence speed. To resolve this dilemma, we propose Hybrid-Order Split Federated Learning (HO-SFL). By reformulating the split learning process within a Lagrangian framework, HO-SFL …
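
For context, the two-point zeroth-order estimator that typically replaces BP in this setting (the standard Gaussian-smoothing/SPSA form, not taken from the paper) perturbs the parameters along a random direction and differences the two resulting losses:

\[
\hat{\nabla} f(\theta) \;=\; \frac{f(\theta + \mu z) - f(\theta - \mu z)}{2\mu}\, z,
\qquad z \sim \mathcal{N}(0, I_d)
\]

This requires only forward passes, but its second moment grows roughly linearly with the parameter dimension d, which is where the "prohibitively degraded convergence speed" the abstract mentions comes from, and why a dimension-free guarantee, as the paper's title claims, is the notable part.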