
Meta-Learning with Adaptive Hyperparameters

About

Despite its popularity, several recent works question the effectiveness of MAML when test tasks differ from training tasks, and suggest various task-conditioned methodologies to improve the initialization. Instead of searching for a better task-aware initialization, we focus on a complementary factor in the MAML framework: inner-loop optimization (or fast adaptation). Consequently, we propose a new weight update rule that greatly enhances the fast adaptation process. Specifically, we introduce a small meta-network that adaptively generates per-step hyperparameters: learning rate and weight decay coefficients. The experimental results validate that Adaptive Learning of hyperparameters for Fast Adaptation (ALFA) is an equally important ingredient that has often been neglected in recent few-shot learning approaches. Surprisingly, fast adaptation from random initialization with ALFA can already outperform MAML.
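The inner-loop update described above can be sketched on a toy task. This is a minimal illustration, not the paper's implementation: the task loss is a simple quadratic, and the "meta-network" here is a hypothetical stand-in (a single linear map with a softplus, weights `W` chosen arbitrarily), whereas the actual ALFA generator conditions on layer-wise gradient and weight statistics and is meta-trained across tasks. The update rule itself follows the paper's form: new weights = (decay coefficient) * weights - (learning rate) * gradient, with both coefficients regenerated at every step.

```python
import numpy as np

def task_loss_grad(theta, target):
    # Gradient of a toy quadratic task loss L(theta) = 0.5 * ||theta - target||^2
    return theta - target

def meta_network(theta, grad, W):
    # Hypothetical stand-in for ALFA's hyperparameter generator:
    # maps the current (weight, gradient) state to a per-step
    # learning rate alpha and a weight-decay factor beta.
    state = np.concatenate([theta, grad])
    out = W @ state
    alpha = np.log1p(np.exp(out[0]))              # softplus: alpha > 0
    beta = 1.0 - 0.01 * np.log1p(np.exp(out[1]))  # decay factor close to 1
    return alpha, beta

def fast_adapt(theta0, target, W, steps=5):
    # ALFA-style inner loop: theta <- beta * theta - alpha * grad,
    # with alpha and beta generated anew at each adaptation step.
    theta = theta0.copy()
    for _ in range(steps):
        g = task_loss_grad(theta, target)
        alpha, beta = meta_network(theta, g, W)
        theta = beta * theta - alpha * g
    return theta

# Example: adapt from a zero initialization toward a task optimum.
theta0 = np.zeros(2)
target = np.array([1.0, -1.0])
W = np.zeros((2, 4))  # arbitrary generator weights for illustration
theta_adapted = fast_adapt(theta0, target, W)
```

In MAML, alpha would be a single fixed scalar and beta would be 1; making both state-dependent and per-step is the change ALFA proposes.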

Sungyong Baik, Myungsub Choi, Janghoon Choi, Heewon Kim, Kyoung Mu Lee • 2020

Related benchmarks

Task                           | Dataset                                | Result                        | Rank
5-way Classification           | miniImageNet (test)                    | --                            | 231
5-way Few-shot Classification  | miniImageNet standard (test)           | Accuracy: 69.12               | 91
Few-shot Image Classification  | tieredImageNet (test)                  | Accuracy: 70.54               | 86
Few-shot Classification        | Meta-Dataset 1.0 (test)                | ILSVRC Accuracy: 52.8         | 42
Few-shot Image Classification  | Meta-Dataset (test)                    | Omniglot Accuracy: 78.4       | 40
Few-shot Image Classification  | miniImageNet original (test)           | 5-way 1-shot Accuracy: 59.74  | 30
Few-shot Image Classification  | tieredImageNet original (test)         | 5-way 1-shot Accuracy: 64.62  | 18
Few-shot Domain Generalization | Meta-Dataset ImageNet-only v1 (train)  | Accuracy (ImageNet): 52.8     | 8
