
Supermasks in Superposition

About

We present the Supermasks in Superposition (SupSup) model, capable of sequentially learning thousands of tasks without catastrophic forgetting. Our approach uses a randomly initialized, fixed base network and, for each task, finds a subnetwork (supermask) that achieves good performance. If task identity is given at test time, the correct subnetwork can be retrieved with minimal memory usage. If it is not provided, SupSup can infer the task using gradient-based optimization to find a linear superposition of learned supermasks that minimizes the output entropy. In practice we find that a single gradient step is often sufficient to identify the correct mask, even among 2500 tasks. We also showcase two promising extensions. First, SupSup models can be trained entirely without task identity information, as they may detect when they are uncertain about new data and allocate an additional supermask for the new training distribution. Second, the entire growing set of supermasks can be stored in a constant-size reservoir by implicitly encoding them as attractors in a fixed-size Hopfield network.
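The task-inference step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a single fixed random linear layer as the base network, hypothetical dense binary masks, and uniform initial superposition coefficients. The inferred task is the one whose coefficient has the most negative entropy gradient, i.e. increasing it would reduce output entropy the most.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical sizes: one fixed random linear layer, 3 tasks, 10 classes.
num_tasks, in_dim, num_classes = 3, 20, 10
W = torch.randn(num_classes, in_dim)  # fixed, randomly initialized base weights
# One (hypothetical) binary supermask per task, same shape as W.
masks = (torch.rand(num_tasks, num_classes, in_dim) > 0.5).float()

def infer_task(x):
    """Infer task identity from one gradient step on the output entropy."""
    alpha = torch.full((num_tasks,), 1.0 / num_tasks, requires_grad=True)
    # Linear superposition of supermasks, applied to the fixed weights.
    mixed = torch.einsum("t,tij->ij", alpha, masks) * W
    logits = x @ mixed.t()
    p = F.softmax(logits, dim=-1)
    entropy = -(p * torch.log(p + 1e-12)).sum()
    entropy.backward()
    # Single gradient step: pick the mask whose coefficient most reduces entropy.
    return int(torch.argmin(alpha.grad))

x = torch.randn(1, in_dim)
task = infer_task(x)
print(task)
```

With trained (rather than random) masks, the correct task's mask yields confident, low-entropy outputs, which is what makes this one-step inference effective.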

Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, Ali Farhadi • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Incremental Learning | TinyImageNet | Avg Incremental Accuracy | 10.27 | 83
Image Classification | S-MNIST (test) | Average Accuracy | 99.6 | 18
NLP Classification | WebNLP | S2 Test Accuracy | 75.9 | 16
Image Classification | S-TinyImageNet (test) | Average Accuracy | 50.6 | 14
Classification | FITZ | Type-I Accuracy | 42.5 | 14
Image Classification | S-CIFAR100 (test) | Average Accuracy | 62.1 | 14
NLP Classification | GLUE | Average Test Accuracy (S1) | 78.3 | 14
Continual Learning | GLUE (val) | Aggregate Score | 78.1 | 12
Continual Learning | WebNLP (val) | Stage 2 Score | 75.7 | 12
Continual Learning | WebNLP Length-5 sub-sampled (test) | Accuracy (S2) | 74.01 | 11

(10 of 15 rows shown)
