# Learning Equi-angular Representations for Online Continual Learning

## About
Online continual learning suffers from underfitting because each model update is trained only briefly on the incoming stream (e.g., for a single epoch). To address this challenge, we propose an efficient online continual learning method that exploits the neural collapse phenomenon. Specifically, we induce neural collapse so that the representation space forms a simplex equiangular tight frame (ETF) structure, allowing the continually updated model to better fit the streamed data within a single epoch; to this end, we propose preparatory data training and residual correction in the representation space. Through extensive empirical validation on CIFAR-10/100, TinyImageNet, ImageNet-200, and ImageNet-1K, we show that our method outperforms state-of-the-art methods by a noticeable margin in various online continual learning scenarios, including disjoint and Gaussian-scheduled continuous (i.e., boundary-free) data setups.
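For context, a simplex ETF for K classes is a set of K unit-norm prototype vectors whose pairwise cosine similarity is uniformly -1/(K-1), the maximally separated equiangular configuration. The sketch below (a standalone NumPy illustration using the standard ETF construction M = sqrt(K/(K-1)) · U (I − (1/K)·11ᵀ); it is not code from this repository, and `simplex_etf` is a hypothetical helper name) shows how such a fixed target structure can be built:

```python
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int, seed: int = 0) -> np.ndarray:
    """Build a d x K matrix whose columns form a simplex ETF.

    Standard construction: M = sqrt(K/(K-1)) * U @ (I_K - (1/K) * 1 1^T),
    where U (d x K) has orthonormal columns. Requires feat_dim >= num_classes.
    """
    assert feat_dim >= num_classes
    rng = np.random.default_rng(seed)
    # Random orthonormal basis via reduced QR of a Gaussian matrix.
    U, _ = np.linalg.qr(rng.standard_normal((feat_dim, num_classes)))
    K = num_classes
    centering = np.eye(K) - np.ones((K, K)) / K  # removes the common mean direction
    M = np.sqrt(K / (K - 1)) * U @ centering
    return M  # columns: unit-norm class prototypes, pairwise cosine -1/(K-1)

M = simplex_etf(num_classes=10, feat_dim=64)
G = M.T @ M  # Gram matrix: diagonal ~ 1, off-diagonal ~ -1/9
```

In the neural-collapse setting, such a frame is typically used as a fixed (non-learned) classifier target that features are pulled toward during training.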
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Online Continual Learning | CIFAR-10 | Average AUC (AAUC) | 78.31 | 20 |
| Online Continual Learning | CIFAR-100 | AAUC | 57.12 | 20 |
| Online Continual Learning | TinyImageNet | AAUC | 41.77 | 18 |
| Online Continual Learning | ImageNet-200 | AAUC | 44.88 | 18 |
| Online Continual Learning | ImageNet-1K (Disjoint) | AAUC | 34.33 | 9 |
| Online Continual Learning | ImageNet-1K (Gaussian-Scheduled) | AAUC | 30.53 | 9 |