
Little By Little: Continual Learning via Incremental Mixture of Rank-1 Associative Memory Experts

About

Continual learning (CL) with large pre-trained models is challenged by task interference and catastrophic forgetting. Existing LoRA-based Mixture-of-Experts (MoE) methods mitigate forgetting by adding new task-specific adapters and freezing old ones, but they often suffer from redundancy, interference, and ambiguous routing caused by coarse-grained experts. Coarse-grained experts (i.e., full LoRA adapters with large rank) encode poorly specialized information: newly added experts often duplicate or conflict with existing ones, causing redundancy and interference, and their low specialization further confuses the router, accelerating routing degradation and forgetting as experts accumulate. In this work, we propose MoRAM (Mixture of Rank-1 Associative Memory). Grounded in the view that weight matrices function as linear associative memories, MoRAM realizes CL as the gradual accumulation of atomic rank-1 memory experts, where each rank-1 adapter acts as a fine-grained MoE expert, or equivalently, an associative memory unit. By viewing rank-1 adapters as key-value pairs, we eliminate the explicit routers of MoE-LoRA: a self-activation mechanism lets each memory atom evaluate its own relevance via its intrinsic key, turning adaptation into robust, content-addressable retrieval. Extensive experiments on CLIP and LLMs demonstrate that MoRAM significantly outperforms state-of-the-art baselines, achieving a superior plasticity-stability trade-off that improves generalization while mitigating forgetting.

Haodong Lu, Chongyang Zhao, Jason Xue, Lina Yao, Kristen Moore, Dong Gong • 2025
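
To make the self-activation idea concrete, the sketch below implements a frozen linear layer augmented with a pool of rank-1 key-value memory atoms in PyTorch. It is an illustrative reading of the abstract, not the authors' released code: the class name Rank1MemoryLayer, the sigmoid self-gating, and the initialization scales are assumptions made for this example.

```python
import torch
import torch.nn as nn


class Rank1MemoryLayer(nn.Module):
    """Illustrative sketch (not the authors' code): a frozen pre-trained linear
    layer augmented with rank-1 key-value memory atoms. Each atom gates itself
    by the similarity between its key and the input, so no explicit MoE router
    is needed. The sigmoid self-gating is an assumption for this example."""

    def __init__(self, base: nn.Linear, num_atoms: int):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pre-trained weights stay frozen

        d_in, d_out = base.in_features, base.out_features
        # Keys address the memory on the input side; values store the update
        # written to the output side. One (key, value) pair is a rank-1 adapter.
        self.keys = nn.Parameter(0.02 * torch.randn(num_atoms, d_in))
        self.values = nn.Parameter(torch.zeros(num_atoms, d_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in)
        scores = x @ self.keys.t()        # (batch, num_atoms): key-input similarity
        gates = torch.sigmoid(scores)     # self-activation: each atom rates its own relevance
        # Sum of gated rank-1 contributions, added to the frozen base output.
        return self.base(x) + (gates * scores) @ self.values


# Minimal usage: wrap one projection of a pre-trained model and train only the atoms.
if __name__ == "__main__":
    layer = Rank1MemoryLayer(nn.Linear(768, 768), num_atoms=8)
    y = layer(torch.randn(4, 768))        # (4, 768)
    print(y.shape)
```

In this reading, each key plays the role the router would otherwise play: an atom whose key aligns with the current input contributes its value strongly, while unrelated atoms stay near-silent, which is what makes the retrieval content-addressable rather than router-dependent.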

Related benchmarks

Task               | Dataset                | Metric              | Result | Rank
Continual Learning | Standard CL Benchmark  | Avg Final Acc       | 0.776  | 50
Continual Learning | Large Number of Tasks  | Average Performance | 69.7   | 50
