Continual Learning Through Synaptic Intelligence
About
While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, possibly by leveraging complex molecular machinery to solve many tasks simultaneously. In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task-relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency.
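The mechanism described above can be sketched in a few lines: during training, each parameter accumulates a running path integral of (negative gradient) x (parameter update), which estimates how much that parameter contributed to reducing the task loss; at a task boundary this is normalized by the total parameter drift and folded into a consolidated importance, which then weights a quadratic penalty anchoring important parameters near their previous values. The sketch below illustrates this on a toy two-parameter problem; the class and variable names (`SynapticIntelligence`, `c`, `xi`) are illustrative choices, not taken from the paper's released code, and the non-negativity clamp on the accumulated importances is a common implementation safeguard rather than part of the formal derivation.

```python
import numpy as np

class SynapticIntelligence:
    """Tracks per-parameter importances and a quadratic surrogate penalty."""
    def __init__(self, n_params, c=0.1, xi=1e-3):
        self.c = c                            # penalty strength (hyperparameter)
        self.xi = xi                          # damping term; avoids division by zero
        self.omega = np.zeros(n_params)       # running path integral (current task)
        self.Omega = np.zeros(n_params)       # consolidated per-parameter importance
        self.theta_ref = np.zeros(n_params)   # anchor: parameters after previous tasks

    def accumulate(self, grad, delta_theta):
        # omega_k += -g_k * dtheta_k: parameter k's contribution to the
        # decrease of the task loss along the training trajectory
        self.omega += -grad * delta_theta

    def consolidate(self, theta):
        # At a task boundary, normalize by squared total drift and fold into Omega.
        delta = theta - self.theta_ref
        self.Omega += np.maximum(self.omega, 0.0) / (delta**2 + self.xi)
        self.theta_ref = theta.copy()
        self.omega = np.zeros_like(self.omega)

    def penalty_grad(self, theta):
        # Gradient of the surrogate loss  c * sum_k Omega_k (theta_k - theta_ref_k)^2
        return 2.0 * self.c * self.Omega * (theta - self.theta_ref)

def train_task(theta, target, si, lr=0.1, steps=200):
    """SGD on a toy quadratic task loss 0.5*||theta - target||^2 plus the SI penalty."""
    for _ in range(steps):
        grad = theta - target                          # task-loss gradient
        update = -lr * (grad + si.penalty_grad(theta))
        si.accumulate(grad, update)                    # path-integral bookkeeping
        theta = theta + update
    si.consolidate(theta)                              # end of task: update importances
    return theta

theta = np.zeros(2)
si = SynapticIntelligence(2, c=5.0)
theta = train_task(theta, np.array([1.0, 0.0]), si)  # task A: only theta[0] matters
theta = train_task(theta, np.array([0.0, 1.0]), si)  # task B: pulls theta[0] back to 0
# The penalty keeps theta[0] near its task-A solution while theta[1] adapts to task B.
```

After task B, `theta[1]` converges close to its new target, while `theta[0]` settles at a compromise well above zero because its consolidated importance from task A resists the conflicting update. This is the whole point of the method: forgetting is resisted only along directions that mattered for earlier tasks.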
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Continual Learning | Sequential MNIST | Avg Acc | 96 | 149 |
| Continual Learning | CIFAR100 Split | Average Per-Task Accuracy | 50.4 | 85 |
| Class-incremental learning | CIFAR-100 10 (test) | -- | -- | 75 |
| Image Classification | permuted MNIST (pMNIST) (test) | Accuracy | 95.33 | 63 |
| Image Classification | CIFAR-100 Split | Accuracy | 74.84 | 61 |
| Class-incremental learning | CIFAR10 (test) | Average Accuracy | 27.43 | 59 |
| Continual Learning | Tiny ImageNet Split | Forgetting Rate | 49.7 | 57 |
| Continual Learning | ImageNet Split Tiny | Avg Accuracy | 22.2 | 57 |
| Task-Incremental Learning | CIFAR-10 Split (test) | Average Accuracy | 68.05 | 46 |
| Continual Learning | Permuted MNIST | Mean Test Accuracy | 97.1 | 44 |