
Make Continual Learning Stronger via C-Flat

About

Model generalization when incrementally acquiring dynamically updated knowledge from sequentially arriving tasks is crucial for tackling the sensitivity-stability dilemma in Continual Learning (CL). Weight-loss-landscape sharpness minimization, which seeks flat minima lying in neighborhoods with uniformly low loss or smooth gradients, has proven to be a strong training regime that improves model generalization compared with loss-minimization-based optimizers such as SGD. Yet only a few works have explored this training regime for CL, showing that a dedicated zeroth-order sharpness optimizer can improve CL performance. In this work, we propose Continual Flatness (C-Flat), a method featuring a flatter loss landscape tailored for CL. C-Flat can be invoked with only one line of code and is plug-and-play with any CL method. We present a general framework applying C-Flat to all CL categories and a thorough comparison with loss-minima optimizers and flat-minima-based CL approaches, showing that our method boosts CL performance in almost all cases. Code is available at https://github.com/WanNaa/C-Flat.
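The abstract states that C-Flat is invoked with a single line of code and plugs into any CL method; the linked repository has the actual API. As a hedged illustration of the sharpness-aware (flat-minima) update that such optimizers build on, here is a minimal NumPy sketch: first perturb the weights toward the locally worst-case direction, then descend with the gradient taken at the perturbed point. The function name and the `lr`/`rho` defaults are illustrative assumptions, not C-Flat's real interface.

```python
import numpy as np

def sharpness_aware_step(w, grad_fn, lr=0.1, rho=0.05):
    """One flat-minima-seeking update on parameter vector w.

    grad_fn(w) must return the loss gradient at w; rho is the radius of
    the adversarial weight perturbation (hypothetical defaults).
    """
    g = grad_fn(w)
    # Step 1: ascend to the (approximately) sharpest point in a rho-ball
    # around w, i.e. move a distance rho along the gradient direction.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2: descend using the gradient evaluated at the perturbed
    # weights, biasing the trajectory toward uniformly low-loss regions.
    return w - lr * grad_fn(w + eps)

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sharpness_aware_step(w, lambda v: 2.0 * v)
```

Note that because the perturbation radius `rho` is fixed, such an update settles near, rather than exactly at, the minimizer; in practice this trade-off is what steers training toward flatter basins.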

Ang Bian, Wei Li, Hangjie Yuan, Chengrong Yu, Mang Wang, Zixiang Zhao, Aojun Lu, Pengliang Ji, Tao Feng • 2024

Related benchmarks

Task                       | Dataset                   | Metric              | Result | Rank
Class-incremental learning | ImageNet-100 B=50, C=10   | Avg Incremental Acc | 86.64  | 42
Class-incremental learning | CIFAR-100 B0_Inc5         | Average Accuracy    | 71.11  | 36
Class-incremental learning | CIFAR-100 B0_Inc10        | Avg Accuracy        | 72.08  | 14
Class-incremental learning | CIFAR-100 B0_Inc20        | Accuracy            | 72.01  | 14
Class-incremental learning | ImageNet-100 B50 Inc25    | Avg Accuracy        | 87.96  | 14
Class-incremental learning | Tiny-ImageNet B0_Inc40    | Average Accuracy    | 60.14  | 14
