REMIND Your Neural Network to Prevent Catastrophic Forgetting
About
People learn throughout life. However, incrementally updating conventional neural networks leads to catastrophic forgetting. A common remedy is replay, which is inspired by how the brain consolidates memory. Replay involves fine-tuning a network on a mixture of new and old instances. While there is neuroscientific evidence that the brain replays compressed memories, existing methods for convolutional networks replay raw images. Here, we propose REMIND, a brain-inspired approach that enables efficient replay with compressed representations. REMIND is trained in an online manner, meaning it learns one example at a time, which is closer to how humans learn. Under the same memory constraints, REMIND outperforms other methods for incremental class learning on the ImageNet ILSVRC-2012 dataset. We probe REMIND's robustness to data ordering schemes known to induce catastrophic forgetting, and we demonstrate its generality by pioneering online learning for Visual Question Answering (VQA).
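The core idea above — store past examples in compressed form and replay them alongside each new example — can be sketched as follows. This is an illustrative toy, not REMIND's actual implementation: REMIND quantizes mid-level CNN feature maps with product quantization, whereas this sketch uses simple per-dimension uint8 min-max quantization (a 4x saving over float32) on feature vectors, and the `CompressedReplayBuffer` class and its bounds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class CompressedReplayBuffer:
    """Toy replay buffer that stores features as uint8 codes, not raw floats.

    Hypothetical sketch: REMIND itself uses product quantization on CNN
    feature maps; here each dimension is min-max quantized to 8 bits.
    """
    def __init__(self, dim, lo=-3.0, hi=3.0):
        self.dim, self.lo, self.hi = dim, lo, hi
        self.codes, self.labels = [], []

    def encode(self, z):
        z = np.clip(z, self.lo, self.hi)
        return np.round((z - self.lo) / (self.hi - self.lo) * 255).astype(np.uint8)

    def decode(self, code):
        return code.astype(np.float32) / 255 * (self.hi - self.lo) + self.lo

    def add(self, z, y):
        self.codes.append(self.encode(z))
        self.labels.append(y)

    def sample(self, n):
        idx = rng.choice(len(self.codes), size=min(n, len(self.codes)), replace=False)
        zs = np.stack([self.decode(self.codes[i]) for i in idx])
        ys = np.array([self.labels[i] for i in idx])
        return zs, ys

# Online loop: one new example at a time, mixed with replayed reconstructions.
buf = CompressedReplayBuffer(dim=8)
for step in range(20):
    z_new = rng.normal(size=8).astype(np.float32)  # stand-in for a CNN feature
    y_new = step % 2
    if buf.codes:
        z_old, _ = buf.sample(4)
        batch = np.vstack([z_new[None], z_old])  # mixture of new + replayed
    else:
        batch = z_new[None]
    # (a real learner would take one gradient step on `batch` here)
    buf.add(z_new, y_new)
```

The replayed vectors are lossy reconstructions of the stored features; the premise, as in REMIND, is that approximate replay is enough to anchor old classes while the buffer holds many more examples than raw storage would allow.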
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Online Continual Learning | CIFAR-10 | Average AUC | 69.55 | 20 |
| Online Continual Learning | CIFAR-100 | Average AUC | 40.87 | 20 |
| Online Continual Learning | ImageNet-200 | Average AUC | 39.25 | 18 |
| Online Continual Learning | TinyImageNet | Average AUC | 28.37 | 18 |
| Online Continual Learning | ImageNet-1K (Disjoint) | Average AUC | 32.47 | 9 |
| Online Continual Learning | ImageNet-1K (Gaussian-Scheduled) | Average AUC | 17.42 | 9 |
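The Average AUC (AAUC) figures above summarize performance over the whole online stream rather than only at the end: test accuracy is measured at a series of checkpoints during training, and the area under that accuracy curve is reported. With evenly spaced checkpoints this reduces to the mean of the checkpoint accuracies, as the sketch below assumes; the exact checkpoint schedule varies by benchmark, and the function name is illustrative.

```python
import numpy as np

def average_auc(accuracies):
    """Area under the accuracy-vs-training-progress curve.

    Assumes `accuracies` are test accuracies at evenly spaced
    checkpoints, so the unit-spaced AUC is just their mean.
    """
    return float(np.mean(accuracies))

# A learner that improves steadily over five checkpoints:
print(average_auc([0.2, 0.35, 0.4, 0.5, 0.55]))  # → 0.4
```

A method that learns quickly early in the stream scores higher under this metric than one that only reaches the same final accuracy late, which is why it is favored for online continual learning.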