
PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning

About

Lifelong learning has attracted much attention, but existing works still struggle to fight catastrophic forgetting and to accumulate knowledge over long stretches of incremental learning. In this work, we propose PODNet, a model inspired by representation learning. By carefully balancing the compromise between remembering the old classes and learning new ones, PODNet fights catastrophic forgetting, even over very long runs of small incremental tasks, a setting so far unexplored by current works. PODNet innovates on existing art with an efficient spatial-based distillation loss applied throughout the model and a representation comprising multiple proxy vectors for each class. We validate these innovations thoroughly, comparing PODNet with three state-of-the-art models on three datasets: CIFAR100, ImageNet100, and ImageNet1000. Our results show a significant advantage of PODNet over existing art, with accuracy gains of 12.10, 6.51, and 2.85 percentage points, respectively. Code is available at https://github.com/arthurdouillard/incremental_learning.pytorch
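The two components named in the abstract can be sketched as follows. This is a minimal illustration based only on the abstract's description, not the authors' exact code (see the linked repository for that); the function names, tensor shapes, and the choice of sum-pooling and cosine similarity are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def pod_spatial_loss(feats_old, feats_new):
    """Spatial pooled-outputs distillation between matching intermediate
    feature maps of the old (frozen) and new model, each B x C x H x W.

    Each map is pooled along the width axis and along the height axis;
    the two pooled views are concatenated, L2-normalised, and compared
    with a Euclidean distance, averaged over the stages.
    """
    loss = 0.0
    for a, b in zip(feats_old, feats_new):
        # pool over width -> B x C x H; pool over height -> B x C x W
        a_pool = torch.cat([a.sum(dim=3), a.sum(dim=2)], dim=-1).flatten(1)
        b_pool = torch.cat([b.sum(dim=3), b.sum(dim=2)], dim=-1).flatten(1)
        a_pool = F.normalize(a_pool, dim=-1)
        b_pool = F.normalize(b_pool, dim=-1)
        loss = loss + torch.norm(a_pool - b_pool, p=2, dim=-1).mean()
    return loss / len(feats_old)


def multi_proxy_scores(h, proxies):
    """Class scores from multiple proxy vectors per class.

    h: B x D embeddings; proxies: C x K x D (K proxies per class).
    A class's score is a softmax-weighted average of the cosine
    similarities between the embedding and that class's K proxies.
    """
    h = F.normalize(h, dim=-1)
    p = F.normalize(proxies, dim=-1)
    sims = torch.einsum('bd,ckd->bck', h, p)   # B x C x K similarities
    weights = torch.softmax(sims, dim=-1)      # attention over proxies
    return (weights * sims).sum(dim=-1)        # B x C class scores
```

Because the distillation operates on pooled statistics rather than raw activations, it constrains the new model more loosely than a full feature-map match, which is what lets it be applied throughout the network without freezing the representation.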

Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, Eduardo Valle • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Named Entity Recognition | CoNLL 2003 (test) | -- | -- | 539 |
| Class-incremental learning | CIFAR-100 | Averaged Incremental Accuracy | 61.8 | 234 |
| Named Entity Recognition | OntoNotes 5.0 (test) | -- | -- | 90 |
| Incremental Learning | TinyImageNet | Avg Incremental Accuracy | 40.28 | 83 |
| Class-incremental learning | CIFAR100 (test) | Avg Acc | 64.83 | 76 |
| Class-incremental learning | CIFAR-100 10 (test) | Average Top-1 Accuracy | 66.41 | 75 |
| Class-incremental learning | ImageNet-100 | Avg Acc | 76.96 | 74 |
| Class-incremental learning | CIFAR100 B50 (test) | Average Accuracy | 71.3 | 67 |
| Class-incremental learning | CIFAR-100 | Average Accuracy | 64.6 | 60 |
| Incremental Learning | ImageNet subset | Average Accuracy | 76.45 | 58 |
Showing 10 of 77 rows.

Other info

Code
