Class-Incremental Learning with Generative Classifiers
About
Incrementally training deep neural networks to recognize new classes is a challenging problem. Most existing class-incremental learning methods store data or use generative replay, both of which have drawbacks, while 'rehearsal-free' alternatives such as parameter regularization or bias-correction methods do not consistently achieve high performance. Here, we put forward a new strategy for class-incremental learning: generative classification. Rather than directly learning the conditional distribution p(y|x), our proposal is to learn the joint distribution p(x,y), factorized as p(x|y)p(y), and to perform classification using Bayes' rule. As a proof-of-principle, here we implement this strategy by training a variational autoencoder for each class to be learned and by using importance sampling to estimate the likelihoods p(x|y). This simple approach performs very well on a diverse set of continual learning benchmarks, outperforming generative replay and other existing baselines that do not store data.
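The idea above can be sketched in code. The paper trains one VAE per class and estimates the likelihoods p(x|y) with importance sampling; the minimal sketch below substitutes a diagonal-Gaussian class-conditional density for the VAE (an illustrative stand-in, not the paper's implementation) to show the core mechanism: fit a separate generative model per class, then classify with Bayes' rule, argmax_y log p(x|y) + log p(y).

```python
import numpy as np

class GenerativeClassifier:
    """Sketch of a generative classifier: one density model per class,
    prediction via Bayes' rule. The per-class model here is a diagonal
    Gaussian; the paper uses a per-class VAE with importance sampling."""

    def __init__(self):
        self.params = {}     # class -> (mean, variance)
        self.log_prior = {}  # class -> log p(y)

    def fit(self, X, y):
        # Each class is fit independently, which is what makes this
        # strategy naturally class-incremental: adding a new class
        # never touches the models of previously learned classes.
        classes, counts = np.unique(y, return_counts=True)
        for c, n in zip(classes, counts):
            Xc = X[y == c]
            self.params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6)
            self.log_prior[c] = np.log(n / len(y))

    def log_likelihood(self, x, c):
        # Stand-in for the VAE importance-sampling estimate of log p(x|y).
        mu, var = self.params[c]
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

    def predict(self, x):
        # Bayes' rule: argmax_y log p(x|y) + log p(y)
        scores = {c: self.log_likelihood(x, c) + self.log_prior[c]
                  for c in self.params}
        return max(scores, key=scores.get)

# Toy data: two well-separated 2-D Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
               rng.normal(+2.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = GenerativeClassifier()
clf.fit(X, y)
print(clf.predict(np.array([-2.0, -2.0])))  # → 0
print(clf.predict(np.array([2.0, 2.0])))    # → 1
```

Because the per-class models are independent, no replay buffer or bias correction is needed when new classes arrive; the cost is that estimating p(x|y) well (e.g., with enough importance samples per VAE) is more expensive at test time than a single discriminative forward pass.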
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Online Continual Learning | CIFAR-100 1 (test) | Accuracy: 19.7 | 20 |
| Online Continual Learning | CIFAR-10 10/1 (test) | Accuracy: 42.7 | 20 |
| Online Continual Learning | MNIST 10/1 (test) | Accuracy: 84 | 20 |
| Online Continual Learning | CIFAR-10 | -- | 20 |
| Online Continual Learning | miniImageNet 100/1 (test) | Accuracy: 12.1 | 19 |
| Online Continual Learning | CIFAR-100 | Accuracy: 19.7 | 8 |
| Online Continual Learning | MNIST | Accuracy: 84 | 7 |
| Online Continual Learning | Mini-ImageNet | Accuracy: 12.1 | 4 |