Variational Continual Learning
About
This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and entirely new tasks emerge. Experimental results show that VCL outperforms state-of-the-art continual learning methods on a variety of tasks, avoiding catastrophic forgetting in a fully automatic way.
Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, Richard E. Turner • 2017
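As a rough illustration of the idea described above, the sketch below shows a per-task VCL objective for a mean-field Gaussian posterior over weights: maximize the expected log-likelihood on the current task's data while penalizing the KL divergence to the posterior learned on previous tasks, which is then recycled as the prior for the next task. This is a minimal sketch under stated assumptions, not the authors' released implementation; the names `BayesianLinear`, `vcl_loss`, and the single-layer setup are illustrative.

```python
# Minimal sketch of a VCL-style per-task objective (illustrative, not the
# paper's reference code). Assumes a single Bayesian linear layer with a
# diagonal-Gaussian posterior over its weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer with a diagonal-Gaussian posterior over weights."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.w_logvar = nn.Parameter(torch.full((n_out, n_in), -6.0))
        # The previous task's posterior serves as the prior for the current task.
        self.register_buffer("prior_mu", torch.zeros(n_out, n_in))
        self.register_buffer("prior_logvar", torch.zeros(n_out, n_in))

    def forward(self, x):
        # Reparameterisation trick: sample weights from the current posterior.
        eps = torch.randn_like(self.w_mu)
        w = self.w_mu + torch.exp(0.5 * self.w_logvar) * eps
        return F.linear(x, w)

    def kl_to_prior(self):
        # KL(q_t || q_{t-1}) between two diagonal Gaussians.
        return 0.5 * torch.sum(
            self.prior_logvar - self.w_logvar
            + (torch.exp(self.w_logvar) + (self.w_mu - self.prior_mu) ** 2)
            / torch.exp(self.prior_logvar)
            - 1.0
        )

    def update_prior(self):
        # After finishing a task, the learned posterior becomes the new prior.
        self.prior_mu.copy_(self.w_mu.detach())
        self.prior_logvar.copy_(self.w_logvar.detach())

def vcl_loss(layer, x, y, dataset_size):
    # Negative ELBO for the current task:
    # -E_q[log p(D_t | w)] + KL(q_t || q_{t-1}).
    logits = layer(x)
    nll = F.cross_entropy(logits, y, reduction="mean") * dataset_size
    return nll + layer.kl_to_prior()
```

After training on task t, calling `update_prior()` freezes the learned posterior as the prior for task t+1, which is the online-VI recursion that lets VCL retain knowledge of earlier tasks without storing their data.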
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | Split MNIST | Average Accuracy | 96.45 | 49 |
| Continual Learning | Permuted MNIST | Mean Test Accuracy | 95.5 | 44 |
| Continual Learning | Split MNIST | Mean Test Accuracy | 98.4 | 19 |
| Continual Learning | PMNIST (test) | Accuracy | 90 | 17 |
| Continual Learning | Sequential Omniglot (S-OMNIGLOT) (test) | Accuracy | 53.86 | 12 |
| Continual Learning | CW10 (sequence) | Performance | 55 | 11 |
| Continual Reinforcement Learning | CW20 (sequence) | Performance | 50 | 11 |
| Image Classification | Split CIFAR-10 (test) | Accuracy (TInfer Final) | 0.1597 | 7 |