
Meta-Learning Requires Meta-Augmentation

About

Meta-learning algorithms aim to learn two components: a model that predicts targets for a task, and a base learner that quickly updates that model when given examples from a new task. This additional level of learning can be powerful, but it also creates another potential source of overfitting, since we can now overfit in either the model or the base learner. We describe both of these forms of meta-learning overfitting, and demonstrate that they appear experimentally in common meta-learning benchmarks. We then use an information-theoretic framework to discuss meta-augmentation, a way to add randomness that discourages the base learner and model from learning trivial solutions that do not generalize to new tasks. We demonstrate that meta-augmentation produces large, complementary benefits to recently proposed meta-regularization techniques.
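The core idea described above can be sketched in a few lines. In this minimal sketch (an illustration based on the abstract, not the authors' implementation), each regression task's targets are shifted by a single random offset, shared by the support and query sets; the function name and offset scheme are assumptions. Because the offset is unpredictable from the inputs alone, the model cannot memorize input-to-target mappings and the base learner must actually use the support examples:

```python
import numpy as np

def meta_augment_task(support_x, support_y, query_x, query_y, rng):
    """Apply one task-level random offset to all targets in a task.

    A sketch of meta-augmentation for regression: the same offset is
    added to support and query targets, so the correct answer cannot be
    inferred from the input alone -- the base learner must adapt on the
    support set to recover the offset at meta-test time.
    """
    offset = rng.normal()  # task-level randomness, shared across the task
    return support_x, support_y + offset, query_x, query_y + offset

# Hypothetical usage on dummy data:
rng = np.random.default_rng(0)
sx, sy = np.zeros(3), np.ones(3)
qx, qy = np.zeros(2), np.full(2, 2.0)
_, aug_sy, _, aug_qy = meta_augment_task(sx, sy, qx, qy, rng)
```

For classification, the analogous augmentation would randomly permute the label assignments per task; in both cases the key property is that the randomness is shared within a task but unpredictable across tasks.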

Janarthanan Rajendran, Alex Irpan, Eric Jang • 2020

Related benchmarks

Task                | Dataset                          | Result                | Rank
Rotation Prediction | ShapeNet1D (meta-test)           | MSE 4.312             | 7
Rotation Prediction | PASCAL3D (meta-test)             | MSE 2.298             | 7
Object Discovery    | Distractor (intra-category, IC)  | Mean Pixel Error 3.2  | 4
Object Discovery    | Distractor (cross-category, CC)  | Mean Pixel Error 6.07 | 4
