Continual Learning with Deep Generative Replay
About
Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires a large memory and, even worse, is often infeasible in real-world applications where access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain, we propose Deep Generative Replay, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"). With only these two models, training data for previous tasks can easily be sampled and interleaved with data for a new task. We test our methods in several sequential learning settings involving image classification tasks.
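The training scheme can be summarized in a short sketch. The PyTorch code below is a minimal, hypothetical illustration rather than the authors' reference implementation: the `Generator` and `Solver` classes are toy MLPs standing in for the paper's generative model and classifier, and the replay ratio `r` is an assumed hyperparameter. When a new task arrives, inputs sampled from the previous generator are labeled by the previous solver and interleaved with the new task's real data.

```python
import copy
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy stand-in for the generative model ("generator")."""
    def __init__(self, z_dim=64, x_dim=784):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def sample(self, n):
        # Draw n fake inputs from random noise.
        return self.net(torch.randn(n, self.z_dim))

class Solver(nn.Module):
    """Toy classifier standing in for the task-solving model ("solver")."""
    def __init__(self, x_dim=784, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, x):
        return self.net(x)

def train_solver_on_task(solver, loader, old_generator=None, old_solver=None,
                         r=0.5, lr=1e-3):
    """Train the solver on one task with generative replay.

    Replayed pairs (x', y') come from the previous generator and the
    previous solver's predictions; `r` (assumed) weights the real-data
    loss against the replayed-data loss.
    """
    opt = torch.optim.Adam(solver.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for x, y in loader:
        loss = ce(solver(x), y)
        if old_generator is not None:
            with torch.no_grad():
                x_rep = old_generator.sample(x.size(0))
                y_rep = old_solver(x_rep).argmax(dim=1)  # old solver labels replay
            loss = r * loss + (1 - r) * ce(solver(x_rep), y_rep)
        opt.zero_grad()
        loss.backward()
        opt.step()

def learn_sequentially(task_loaders):
    """Sequential learning: after each task, snapshot both models so the
    next task can replay from them. (Training the new generator on the
    same mixed real/replayed stream is omitted; any GAN or VAE objective
    fits in that slot.)"""
    old_generator = old_solver = None
    for loader in task_loaders:
        solver, generator = Solver(), Generator()
        train_solver_on_task(solver, loader, old_generator, old_solver)
        old_generator = copy.deepcopy(generator)
        old_solver = copy.deepcopy(solver)
    return old_solver
```

The key design point the sketch captures is that no raw data from earlier tasks is stored: the generator-solver pair alone reconstructs an approximate training stream for all previous tasks.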
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-incremental learning | CIFAR-100 10 (test) | Average Top-1 Accuracy | 8.1 | 105 |
| Image Classification | CIFAR-10 original (test) | Accuracy | 37.93 | 87 |
| Recommendation | Yelp (test) | -- | -- | 82 |
| Image Classification | Split MNIST | Average Accuracy | 91.79 | 49 |
| Image Classification | EMNIST Balanced (test) | Accuracy | 63.55 | 34 |
| Recommendation | MovieLens 10M (Set-up (S)) | Recall@10 | 26.75 | 32 |
| Incremental Task Learning (ITL) | split-MNIST (test) | Retained Accuracy | 99.47 | 32 |
| Recommendation | MovieLens 10M (test) | Recall@10 | 8.87 | 32 |
| Recommendation | Yelp Set-up (S) | Recall@10 | 5.72 | 32 |
| Incremental Task Learning (ITL) | Permuted MNIST (test) | Retained Accuracy | 92.52 | 32 |