Fly-CL: A Fly-Inspired Framework for Enhancing Efficient Decorrelation and Reduced Training Time in Pre-trained Model-based Continual Representation Learning
## About
Using a nearly-frozen pretrained model, the continual representation learning paradigm reframes parameter updates as a similarity-matching problem to mitigate catastrophic forgetting. However, directly leveraging pretrained features for downstream tasks often suffers from multicollinearity in the similarity-matching stage, and more advanced methods can be computationally prohibitive for real-time, low-latency applications. Inspired by the fly olfactory circuit, we propose Fly-CL, a bio-inspired framework compatible with a wide range of pretrained backbones. Fly-CL substantially reduces training time while achieving performance comparable to or exceeding that of current state-of-the-art methods. We theoretically show how Fly-CL progressively resolves multicollinearity, enabling more effective similarity matching with low time complexity. Extensive simulation experiments across diverse network architectures and data regimes validate Fly-CL's effectiveness in addressing this challenge through a biologically inspired design. Code is available at https://github.com/gfyddha/Fly-CL.
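To make the fly-circuit intuition concrete, here is a minimal, hypothetical sketch of the kind of transform the abstract alludes to: a sparse binary random projection into a high-dimensional code followed by winner-take-all sparsification, as in fly-olfaction-inspired hashing. This is one plausible reading of the mechanism, not the paper's actual Fly-CL implementation; the dimensions (`out_dim`, `fan_in`, `k`) and the function `fly_expand` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fly_expand(x, out_dim=2000, fan_in=6, k=50):
    """Hypothetical fly-style expansion (NOT the paper's exact method):
    sparse binary random projection + top-k winner-take-all."""
    d = x.shape[0]
    # Each output unit sums a small random subset of input features,
    # mimicking the sparse projection from projection neurons to Kenyon cells.
    proj = np.zeros((out_dim, d))
    for i in range(out_dim):
        idx = rng.choice(d, size=fan_in, replace=False)
        proj[i, idx] = 1.0
    h = proj @ x
    # Winner-take-all: keep only the k strongest activations, zero the rest.
    code = np.zeros(out_dim)
    top = np.argsort(h)[-k:]
    code[top] = h[top]
    return code

# e.g. a 512-d pretrained feature vector mapped to a sparse 2000-d code
x = rng.standard_normal(512)
code = fly_expand(x)
```

Such an expand-then-sparsify step is cheap (no gradient updates) and tends to decorrelate features, which is consistent with the abstract's claims of low time complexity and reduced multicollinearity before similarity matching.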
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continual Learning | CIFAR-100 | -- | -- | 56 |
| Image Classification | ImageNet-A | -- | -- | 50 |
| Continual Learning | CIFAR-100 | Training Time per Task | 8.31 | 21 |
| Continual Learning | CUB-200-2011 | Avg Training Time per Task | 2.76 | 21 |
| Continual Learning | VTAB | Average Training Time per Task | 1.7 | 21 |
| Continual Learning | CUB | Backward Transfer (BwT) | -3.8 | 17 |
| Continual Learning | VTAB | Average Task Accuracy (AT) | 94.61 | 9 |
| Continual Learning | CIFAR-100 | Memory Usage (GB) | 6.7 | 9 |
| Continual Learning | VTAB | Memory Usage (GB) | 4.3 | 9 |
| Image Classification | ImageNet-R | Tau (Training) | 7.55 | 9 |