Closed-form merging of parameter-efficient modules for Federated Continual Learning
About
Model merging has emerged as a crucial technique in Deep Learning, enabling the integration of multiple models into a unified system while preserving performance and scalability. In this respect, the compositional properties of low-rank adaptation techniques (e.g., LoRA) have proven beneficial, as simply averaging LoRA modules yields a single model that largely integrates the capabilities of all individual modules. Building on LoRA, we take a step further by requiring that the merged model match the responses of all learned modules. Solving this objective in closed form yields an indeterminate system with A and B as unknown variables, indicating the existence of infinitely many closed-form solutions. To address this challenge, we introduce LoRM, an alternating optimization strategy that trains one LoRA matrix at a time. This allows solving for each unknown variable individually, thus finding a unique solution. We apply our proposed methodology to Federated Class-Incremental Learning (FCIL), ensuring alignment of model responses both between clients and across tasks. Our method demonstrates state-of-the-art performance across a range of FCIL scenarios. The code to reproduce our experiments is available at github.com/aimagelab/fed-mammoth.
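The alternating idea above can be illustrated with a minimal NumPy sketch. This is not the paper's actual LoRM implementation: the dimensions, the initialization by simple averaging, and the use of a pseudo-inverse least-squares solve at each step are illustrative assumptions. It only shows how fixing one LoRA factor turns the matching objective into a solvable linear problem for the other.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, n_clients = 8, 6, 2, 3  # hypothetical layer dims and LoRA rank

# Per-client LoRA factors: client i's update is delta_W_i = B_i @ A_i
Bs = [rng.normal(size=(d, r)) for _ in range(n_clients)]
As = [rng.normal(size=(r, k)) for _ in range(n_clients)]

# Target response the merged module should match: here, the mean update
target = sum(B @ A for B, A in zip(Bs, As)) / n_clients

# Jointly solving B @ A = target is indeterminate (infinitely many
# factorizations), so alternate: fix one factor, solve the other in
# closed form as a linear least-squares problem.
B = sum(Bs) / n_clients              # init merged B by simple averaging
for _ in range(20):
    A = np.linalg.pinv(B) @ target   # closed-form solve for A given B
    B = target @ np.linalg.pinv(A)   # closed-form solve for B given A

# Relative residual of the rank-r merged update against the target
err = np.linalg.norm(B @ A - target) / np.linalg.norm(target)
```

Since the merged update is constrained to rank r, the residual cannot vanish in general; the alternation drives `B @ A` toward the best rank-r fit of the target responses.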
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continual Learning | Standard CL Benchmark | Avg Final Acc | 0.77 | 50 |
| Continual Learning | Large Number of Tasks | Average Performance | 70.2 | 50 |
| Continual Learning | SuperNI Benchmark | Average Score | 24.7 | 14 |
| Continual Learning | Large Number of Tasks (test) | Backward Transfer (BWT) | -4.1 | 13 |
| Continual Learning | SuperNI Standard CL Benchmark (test) | Average Performance | 79.7 | 13 |
| Continual Learning | SuperNI Large Number of Tasks (test) | Average Performance | 72.4 | 13 |