
Fortuitous Forgetting in Connectionist Networks

About

Forgetting is often seen as an unwanted characteristic in both human and machine learning. However, we propose that forgetting can in fact be favorable to learning. We introduce "forget-and-relearn" as a powerful paradigm for shaping the learning trajectories of artificial neural networks. In this process, the forgetting step selectively removes undesirable information from the model, and the relearning step reinforces features that are consistently useful under different conditions. The forget-and-relearn framework unifies many existing iterative training algorithms in the image classification and language emergence literature, and allows us to understand the success of these algorithms in terms of the disproportionate forgetting of undesirable information. We leverage this understanding to improve upon existing algorithms by designing more targeted forgetting operations. Insights from our analysis provide a coherent view on the dynamics of iterative training in neural networks and offer a clear path towards performance improvements.
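As a concrete illustration of the forget-and-relearn loop described above, the sketch below alternates normal training with a forgetting step that re-initializes the later (output) layer, then relearns from the retained features. This is a minimal toy sketch, not the paper's implementation: the two-layer network, the synthetic data, and the choice of resetting only the output layer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: linearly separable binary classification.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer network: hidden weights W1, output weights w2.
W1 = rng.normal(scale=0.5, size=(2, 8))
w2 = rng.normal(scale=0.5, size=8)

def forward(X):
    h = np.tanh(X @ W1)
    return h, sigmoid(h @ w2)

def train(steps, lr=0.1):
    """Full-batch gradient descent on binary cross-entropy."""
    global W1, w2
    for _ in range(steps):
        h, p = forward(X)
        err = p - y                          # dL/dlogit for BCE
        dh = np.outer(err, w2) * (1 - h**2)  # backprop through tanh
        w2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ dh / len(X)

# Forget-and-relearn: each cycle trains, then "forgets" by resetting
# the output layer; features in W1 that remain useful across cycles
# are reinforced during relearning.
for cycle in range(3):
    train(steps=500)
    w2 = rng.normal(scale=0.5, size=8)  # forgetting step
train(steps=500)                        # final relearning phase

_, p = forward(X)
acc = ((p > 0.5) == y).mean()
print(f"accuracy after forget-and-relearn: {acc:.2f}")
```

In this sketch the "undesirable information" removed by forgetting is whatever the output layer memorized in the previous cycle; only hidden features that stay useful after each reset survive repeated relearning.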

Hattie Zhou, Ankit Vani, Hugo Larochelle, Aaron Courville · 2022

Related benchmarks

Task                          Dataset                         Metric            Result   Rank
Image Classification          Aircraft                        Accuracy          4.65     302
Image Classification          CUB                             Accuracy          5.33     249
Image Classification          Tiny-ImageNet                   Accuracy          56.92    227
Image Classification          Stanford Dogs                   Accuracy          8.69     130
Few-shot Classification       EuroSAT                         Accuracy          67.79    67
Few-shot Image Classification ISIC (test)                     Accuracy          32.26    36
Few-shot Image Classification CD-FSL 5-way 5-shot (test)      ChestX Accuracy   22.03    8
5-way Few-shot Classification ChestX 20-shot (test)           Accuracy          22.54    8
5-way Few-shot Classification ChestX 50-shot (test)           Accuracy          0.2378   8
5-way Few-shot Classification ISIC 50-shot (test)             Accuracy          41.53    8
(Showing 10 of 16 benchmark results.)
