# Mnemonics Training: Multi-Class Incremental Learning without Forgetting

## About
Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. There is, however, an inherent trade-off: effectively learning new concepts risks catastrophic forgetting of previous ones. To alleviate this, prior work keeps a few exemplars of the previous concepts, but the effectiveness of this approach depends heavily on how representative those exemplars are. This paper proposes a novel, automatic framework called mnemonics, in which exemplars are parameterized and optimized in an end-to-end manner. The framework is trained through bilevel optimization, i.e., a model-level problem and an exemplar-level problem. Extensive experiments on three MCIL benchmarks (CIFAR-100, ImageNet-Subset and ImageNet) show that mnemonics exemplars surpass the state-of-the-art by a large margin. Intriguingly, the mnemonics exemplars tend to lie on the boundaries between classes.
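The bilevel scheme described above alternates a model-level update (training the weights on the exemplars) with an exemplar-level update (adjusting the exemplar inputs themselves so that a model trained on them still fits the old data). The following is a minimal NumPy sketch of that idea on a toy linear classifier; it uses a finite-difference meta-gradient in place of the paper's analytic backpropagation through the inner step, and all names and hyper-parameters here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    expZ = np.exp(Z)
    return expZ / expZ.sum(axis=1, keepdims=True)

def ce_loss(W, X, y):
    """Cross-entropy of a linear classifier with weights W on (X, y)."""
    P = softmax(X @ W)
    return -np.log(P[np.arange(len(y)), y] + 1e-12).mean()

def ce_grad_W(W, X, y):
    """Gradient of ce_loss with respect to W."""
    P = softmax(X @ W)
    P[np.arange(len(y)), y] -= 1.0
    return X.T @ P / len(y)

# Toy "old" classes: three well-separated Gaussian clusters in 3-D.
centers = 4.0 * np.eye(3)
y_old = np.repeat(np.arange(3), 30)
X_old = centers[y_old] + 0.5 * rng.normal(size=(90, 3))

# Mnemonic exemplars: one per class, initialised from real samples,
# then treated as optimisable parameters (the key idea of the paper).
E_x = X_old[[0, 30, 60]].copy()
y_E = np.arange(3)
W = np.zeros((3, 3))
eta_model, eta_ex = 0.5, 0.2

def outer_loss(e_flat, W):
    """Old-data loss of a model that takes one inner step on the exemplars."""
    E_ = e_flat.reshape(3, 3)
    W_inner = W - eta_model * ce_grad_W(W, E_, y_E)
    return ce_loss(W_inner, X_old, y_old)

for step in range(100):
    # Model-level update: train the weights on the current exemplars.
    W = W - eta_model * ce_grad_W(W, E_x, y_E)
    # Exemplar-level update: meta-gradient of the old-data loss with respect
    # to the exemplar inputs, estimated by central finite differences here
    # for simplicity (the paper differentiates the inner step analytically).
    g = np.zeros(E_x.size)
    flat = E_x.ravel()
    h = 1e-4
    for i in range(E_x.size):
        fp, fm = flat.copy(), flat.copy()
        fp[i] += h
        fm[i] -= h
        g[i] = (outer_loss(fp, W) - outer_loss(fm, W)) / (2 * h)
    E_x = E_x - eta_ex * g.reshape(E_x.shape)

# A model trained only on the three optimised exemplars should still
# separate the full old-class set.
acc = float((np.argmax(X_old @ W, axis=1) == y_old).mean())
print(f"old-class accuracy from 3 exemplars: {acc:.2f}")
```

On this toy problem a model trained on just three optimized exemplars recovers the old classes; in the paper the same principle is applied to image pixels under a deep feature extractor.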
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Class-incremental learning | CIFAR100 (test) | Avg Acc | 63.34 | 76 |
| Class-incremental learning | ImageNet-100 | Avg Acc | 75.54 | 74 |
| Class-incremental learning | CIFAR-100 | Average Accuracy | 64.59 | 60 |
| Incremental Learning | ImageNet subset | Average Accuracy | 72.6 | 58 |
| Incremental Learning | CIFAR-100 | Average Accuracy | 63.34 | 51 |
| Incremental Learning | ImageNet full | Average Accuracy | 65.4 | 48 |
| Class-incremental learning | CIFAR-100 (incremental) | Avg Incremental Acc | 63.34 | 26 |
| Class-incremental learning | ImageNet-1K standard (test val) | Average Accuracy | 64.54 | 22 |
| Class-incremental learning | ImageNet-100 | -- | -- | 20 |
| Class-incremental learning | ImageNet1000 (incremental) | Avg Incremental Accuracy | 63.01 | 15 |