Rethinking Continual Learning with Progressive Neural Collapse

About

Continual Learning (CL) seeks to build an agent that can continuously learn a sequence of tasks, where a key challenge, namely Catastrophic Forgetting, persists due to potential knowledge interference among tasks. On the other hand, deep neural networks (DNNs) have been shown to converge to a terminal state termed Neural Collapse during training, where all class prototypes geometrically form a static simplex equiangular tight frame (ETF). These maximally and equally separated class prototypes make the ETF an ideal target for model learning in CL to mitigate knowledge interference. Inspired by this, several recent studies have leveraged a fixed global ETF in CL, an approach that, however, suffers from key drawbacks such as impracticality and limited performance. To address these challenges and fully unlock the potential of ETF in CL, we propose Progressive Neural Collapse (ProNC), a novel framework that completely removes the need for a fixed global ETF in CL. Specifically, ProNC progressively expands the ETF target in a principled way by adding new class prototypes as vertices for new tasks, ensuring maximal separability across all encountered classes with minimal shifts from the previous ETF. We then develop a new CL framework by plugging ProNC into commonly used CL algorithm designs, where distillation is further leveraged to balance target shifting for old classes against target alignment for new classes. Extensive experiments show that our approach significantly outperforms related baselines while maintaining superior flexibility, simplicity, and efficiency. Our code is available at https://github.com/Continue-Edge-AI-Lab/ProNC
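For intuition, here is a minimal NumPy sketch of the two ingredients the abstract describes: constructing a simplex ETF (unit-norm class prototypes with pairwise cosine similarity -1/(K-1)) and expanding it to hold new class vertices while keeping the old prototypes close to where they were. The function names (`simplex_etf`, `expand_etf`) and the orthogonal-Procrustes alignment used for the "minimal shift" step are our illustration under stated assumptions, not the authors' code; the paper's exact expansion rule may differ, so consult the linked repository for the reference implementation.

```python
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int, seed=None) -> np.ndarray:
    """Return a (feat_dim, num_classes) simplex ETF.

    Columns are unit-norm class prototypes whose pairwise cosine
    similarity is -1/(num_classes - 1), i.e., maximally and equally
    separated. Requires feat_dim >= num_classes.
    """
    K = num_classes
    assert K >= 2 and feat_dim >= K
    rng = np.random.default_rng(seed)
    # Partial orthogonal basis U with U.T @ U = I_K.
    U, _ = np.linalg.qr(rng.standard_normal((feat_dim, K)))
    return np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)

def expand_etf(old_etf: np.ndarray, num_new: int, seed=None) -> np.ndarray:
    """Grow an ETF from K to K + num_new vertices with small drift.

    Builds a fresh (K + num_new)-vertex ETF, then rotates it via
    orthogonal Procrustes so that its first K prototypes land as close
    as possible (in Frobenius norm) to the old prototypes. This is one
    plausible reading of "minimal shifts from the previous ETF".
    """
    feat_dim, K = old_etf.shape
    new_etf = simplex_etf(K + num_new, feat_dim, seed)
    # Orthogonal R minimizing ||R @ new_etf[:, :K] - old_etf||_F,
    # obtained from the SVD of old_etf @ new_etf[:, :K].T.
    U, _, Vt = np.linalg.svd(old_etf @ new_etf[:, :K].T)
    return (U @ Vt) @ new_etf

# Example: 10 classes in task 1, then 10 more arrive in task 2.
etf_t1 = simplex_etf(10, 512, seed=0)
etf_t2 = expand_etf(etf_t1, 10, seed=0)
```

In a CL pipeline, each column of the current ETF would serve as the feature target for its class; per the abstract, distillation then balances the small target shift for old classes against alignment of new-class features to their fresh vertices.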

Zheng Wang, Wanhao Yu, Li Yang, Sen Lin• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Image Classification | CIFAR-10 Seq | Final Average Accuracy | 96.95 | 52 |
| Image Classification | Seq-CIFAR-100 | Accuracy | 86.38 | 52 |
| Image Classification | Seq-Tiny-ImageNet | Final Average Accuracy | 69.77 | 44 |
| Task-Incremental Learning | Seq-CIFAR-10 | FAA | 96.95 | 28 |
| Task-Incremental Learning | CIFAR-100 Seq | FAA | 86.38 | 28 |
| Class-Incremental Learning | CIFAR-10 Seq | Final Average Accuracy (FAA) | 73.95 | 28 |
| Class-Incremental Learning | TinyImageNet Seq | FAA | 29.06 | 24 |
| Task-Incremental Learning | Tiny ImageNet Seq | FAA | 69.77 | 24 |
