Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML

About

An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning. An especially successful algorithm has been Model-Agnostic Meta-Learning (MAML), a method that consists of two optimization loops: the outer loop finds a meta-initialization, from which the inner loop can efficiently learn new tasks. Despite MAML's popularity, a fundamental open question remains: is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations), or due to feature reuse, with the meta-initialization already containing high-quality features? We investigate this question via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor. This leads to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML in which we remove the inner loop for all but the (task-specific) head of a MAML-trained network. ANIL matches MAML's performance on benchmark few-shot image classification and RL tasks and offers computational improvements over MAML. We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and that we can remove even the head of the network (the NIL algorithm). We conclude with a discussion of the rapid learning vs. feature reuse question for meta-learning algorithms more broadly.
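The core idea of ANIL, as described above, is that the inner (adaptation) loop updates only the task-specific head while the meta-learned body is reused frozen. The following is a minimal illustrative sketch of that inner loop on a toy linear regression task; the model shapes, learning rate, and step count are illustrative assumptions, not values from the paper, and the body here is simply fixed rather than meta-trained.

```python
import numpy as np

# ANIL inner-loop sketch (illustrative): the "body" maps inputs to features
# and stays frozen during adaptation; only the "head" is updated per task.
# All dimensions and hyperparameters below are arbitrary choices for the demo.

rng = np.random.default_rng(0)

def forward(x, body, head):
    """Linear body followed by a linear head (no nonlinearity, for clarity)."""
    return x @ body @ head

def mse_grad_head(x, y, body, head):
    """Gradient of mean-squared error w.r.t. the head parameters only."""
    feats = x @ body                      # (n, d_feat), body is not updated
    err = feats @ head - y                # (n, 1)
    return 2 * feats.T @ err / len(x)

def anil_inner_loop(x, y, body, head, lr=0.1, steps=5):
    """ANIL adaptation: a few gradient steps on the head, body frozen."""
    head = head.copy()
    for _ in range(steps):
        head -= lr * mse_grad_head(x, y, body, head)
    return head

# A toy task: targets generated by a random linear map.
x = rng.normal(size=(32, 4))
w_true = rng.normal(size=(4, 1))
y = x @ w_true

body = rng.normal(size=(4, 8)) * 0.5      # stands in for meta-learned features
head = np.zeros((8, 1))                   # task-specific head, adapted per task

adapted_head = anil_inner_loop(x, y, body, head)
loss_before = np.mean((forward(x, body, head) - y) ** 2)
loss_after = np.mean((forward(x, body, adapted_head) - y) ** 2)
```

In full MAML the same inner loop would also update `body`; ANIL's simplification is precisely that `body` appears only inside `forward`/`mse_grad_head` as a fixed feature extractor, which is what yields the computational savings mentioned above.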

Aniruddh Raghu, Maithra Raghu, Samy Bengio, Oriol Vinyals• 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Few-shot classification | tieredImageNet (test) | Accuracy | 66.52 | 282 |
| 5-way Few-shot Classification | CUB | 5-shot Acc | 55.82 | 95 |
| 5-way Few-shot Classification | miniImageNet standard (test) | Accuracy | 61.5 | 91 |
| Few-shot classification | Mini-ImageNet 1-shot 5-way (test) | Accuracy | 55.25 | 82 |
| 5-way Few-shot Classification | tieredImageNet | Accuracy (1-shot) | 52.82 | 49 |
| 5-way Few-shot Classification | miniImageNet 5-way (test) | 1-shot Acc | 48 | 39 |
| Few-shot classification | CUB meta (test) | Accuracy | 55.82 | 35 |
| Few-shot classification | MiniImageNet 5-Shot 5-Way (test) | Accuracy | 70.03 | 27 |
| Few-shot classification | Bongard-LOGO (test) | Free-Form Score | 56.6 | 21 |
| Image Classification | Flower (test) | Accuracy | 61.27 | 18 |

Showing 10 of 18 benchmark rows.
