
Toward Multimodal Model-Agnostic Meta-Learning

About

Gradient-based meta-learners such as MAML are able to learn a meta-prior from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. One important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. Specifically, we propose a multimodal MAML algorithm that is able to modulate its meta-learned prior according to the identified task, allowing faster adaptation. We evaluate the proposed model on a diverse set of problems including regression, few-shot image classification, and reinforcement learning. The results demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks sampled from a multimodal distribution.
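The two stages described above, modulating the meta-learned prior for the identified task and then adapting with gradient updates, can be sketched on a toy 1-D regression task. This is an illustrative assumption, not the paper's code: the FiLM-style scale-and-shift modulation, the analytic gradient step, and all names (`modulate`, `inner_update`) are hypothetical stand-ins for the method's learned components.

```python
import numpy as np

def predict(params, x):
    # Hypothetical base learner: a 1-D linear model w * x + b.
    w, b = params
    return w * x + b

def modulate(params, tau):
    # Stage 1 (illustrative): task-conditioned modulation of the
    # meta-learned prior via a FiLM-like scale and shift per parameter.
    # In the paper, tau would come from a network that identifies the task.
    w, b = params
    scale_w, shift_w, scale_b, shift_b = tau
    return (scale_w * w + shift_w, scale_b * b + shift_b)

def inner_update(params, x, y, lr=0.1):
    # Stage 2: one gradient step on mean squared error,
    # with the gradient computed analytically for the linear model.
    w, b = params
    err = predict(params, x) - y
    grad_w = np.mean(2 * err * x)
    grad_b = np.mean(2 * err)
    return (w - lr * grad_w, b - lr * grad_b)

# Meta-learned prior and a task-specific modulation (values are illustrative).
theta = (0.5, 0.0)
tau = (1.2, 0.1, 1.0, -0.05)   # (scale_w, shift_w, scale_b, shift_b)

# Support set of a sampled task: y = 2x + 1.
x = np.array([0.0, 1.0, 2.0])
y = 2.0 * x + 1.0

theta_task = modulate(theta, tau)               # modulate the prior
theta_adapted = inner_update(theta_task, x, y)  # fast gradient adaptation
```

Starting adaptation from the modulated parameters, rather than from a single shared initialization, is what lets the model serve tasks drawn from different modes of the task distribution.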

Risto Vuorio, Shao-Hua Sun, Hexiang Hu, Joseph J. Lim • 2018

Related benchmarks

Task                  Dataset                        Metric    Result  Rank
Image Classification  Aircraft                       Accuracy  67.31   302
Image Classification  miniImageNet standard (test)   Accuracy  49.86   61
Image Classification  Bird                           Accuracy  70.49   29
Image Classification  Fungi                          Accuracy  53.96   18
Classification        Texture                        Accuracy  45.89   17
Toy Regression        Toy Regression 5-shot (test)   MSE       1.096   6
Toy Regression        Toy Regression 10-shot (test)  MSE       0.256   6
