
Progressive Neural Networks

About

Learning to solve complex sequences of tasks, while both leveraging transfer and avoiding catastrophic forgetting, remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
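The two mechanisms the abstract names — freezing previously trained columns so they cannot forget, and feeding their features into new columns via lateral connections — can be sketched in a few lines. This is a minimal, stdlib-only illustration under assumed dimensions; the class and helper names (`Column`, `matvec`, the per-layer lateral adapters `U`) are ours, not the paper's, and the paper's actual experiments use deep convolutional policies, not a tiny MLP.

```python
import random

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

class Column:
    """One column of a progressive network: a small MLP whose second layer
    also receives lateral inputs from each previously trained column."""
    def __init__(self, in_dim, hidden, out_dim, n_prev=0):
        self.W1 = rand_matrix(hidden, in_dim)
        self.W2 = rand_matrix(hidden, hidden)
        self.Wout = rand_matrix(out_dim, hidden)
        # One lateral adapter per frozen, previously trained column.
        self.U = [rand_matrix(hidden, hidden) for _ in range(n_prev)]

    def forward(self, x, prev_h1=()):
        h1 = relu(matvec(self.W1, x))
        z2 = matvec(self.W2, h1)
        # Lateral connections: add in adapted features from earlier columns.
        for U_k, h_k in zip(self.U, prev_h1):
            lat = matvec(U_k, h_k)
            z2 = [a + b for a, b in zip(z2, lat)]
        h2 = relu(z2)
        return matvec(self.Wout, h2), h1

random.seed(0)
col1 = Column(4, 8, 3)             # trained on task 1, then frozen as-is
col2 = Column(4, 8, 3, n_prev=1)   # new column instantiated for task 2

x = [0.1, -0.2, 0.3, 0.4]
_, h1_prev = col1.forward(x)                 # frozen task-1 features
y2, _ = col2.forward(x, prev_h1=(h1_prev,))  # task-2 output reusing them
print(len(y2))  # 3
```

Because column 1's weights are never updated after task 1, its task-1 behaviour is preserved exactly (no forgetting), while column 2 is free to either use or ignore the transferred features through its lateral adapters.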

Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell · 2016

Related benchmarks

Task                        Dataset               Metric                      Result   Rank
Image Classification        CIFAR-100 (test)      Accuracy                    67.2     3518
Image Classification        SVHN (test)           Accuracy                    96.8     362
Continual Learning          Sequential MNIST      Avg Acc                     99.23    149
Image Classification        ImageNet (val)        Accuracy                    76.16    115
Continual Learning          CIFAR100 Split        Average Per-Task Accuracy   59.2     85
Image Classification        Stanford Cars (val)   Accuracy                    89.21    56
Continual Learning          Permuted MNIST        Mean Test Accuracy          93.5     44
Image Classification        S-CIFAR-10 Task-IL    Accuracy                    95.13    33
Task-Incremental Learning   CIFAR100 (test)       Accuracy                    54.9     31
Image Classification        S-CIFAR-10            Task-IL Accuracy            95.13    27
Showing 10 of 36 rows
