Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
About
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
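The "train the initialization so that a few gradient steps adapt it" idea above can be sketched with first-order MAML on a toy sine-regression task family. Everything here is an illustrative assumption, not the paper's setup: the random-Fourier-feature linear model, the learning rates, and the 5-point support/query split are placeholder choices; the paper itself uses neural networks and (optionally) second-order gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: linear regression on fixed random Fourier features,
# so adaptation is a plain gradient step on the weight vector.
D = 40
freqs = rng.normal(scale=2.0, size=D)
phases = rng.uniform(0, 2 * np.pi, size=D)

def features(x):
    """Map points x of shape (n,) to a (n, D) feature matrix."""
    return np.cos(np.outer(x, freqs) + phases)

def sample_task():
    """A task is a sine wave with random amplitude and phase."""
    amp = rng.uniform(0.5, 2.0)
    ph = rng.uniform(0, np.pi)
    return lambda x: amp * np.sin(x + ph)

def mse_grad(w, X, y):
    """Gradient and value of mean squared error for the model X @ w."""
    err = X @ w - y
    return 2 * X.T @ err / len(y), np.mean(err ** 2)

w = np.zeros(D)               # meta-parameters (the shared initialization)
inner_lr, meta_lr = 0.05, 0.01

for step in range(2000):
    f = sample_task()
    xs = rng.uniform(-5, 5, size=10)
    Xs, ys = features(xs[:5]), f(xs[:5])   # support set (adaptation data)
    Xq, yq = features(xs[5:]), f(xs[5:])   # query set (evaluation data)

    # Inner loop: one gradient step on the support set.
    g, _ = mse_grad(w, Xs, ys)
    w_adapted = w - inner_lr * g

    # Outer loop (first-order approximation, which drops the second-derivative
    # terms of full MAML): update the initialization with the query-set
    # gradient evaluated at the adapted parameters.
    g_meta, _ = mse_grad(w_adapted, Xq, yq)
    w -= meta_lr * g_meta

# After meta-training, a single inner step on 5 points of a fresh task
# should typically lower the query loss relative to no adaptation.
f = sample_task()
xs = rng.uniform(-5, 5, size=10)
Xs, ys = features(xs[:5]), f(xs[:5])
Xq, yq = features(xs[5:]), f(xs[5:])
_, loss_before = mse_grad(w, Xq, yq)
g, _ = mse_grad(w, Xs, ys)
_, loss_after = mse_grad(w - inner_lr * g, Xq, yq)
print(loss_before, loss_after)
```

The key design point the sketch mirrors is that the meta-gradient is taken with respect to the initialization, so meta-training explicitly rewards parameter settings from which one small-data gradient step generalizes well.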
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Classification | Cars | Accuracy | 48.82 | 314 |
| Image Classification | Aircraft | Accuracy | 66.18 | 302 |
| Object Counting | FSC-147 (test) | MAE | 24.9 | 297 |
| Few-shot classification | tieredImageNet (test) | Accuracy | 72.41 | 282 |
| Image Classification | CUB | Accuracy | 64.17 | 249 |
| Few-shot Image Classification | Mini-Imagenet (test) | Accuracy | 66.19 | 235 |
| 5-way Classification | miniImageNet (test) | Accuracy | 63.11 | 231 |
| Object Counting | FSC-147 (val) | MAE | 25.54 | 211 |
| Image Classification | MiniImagenet | Accuracy | 62.13 | 206 |
| Few-shot classification | Mini-ImageNet | 1-shot Acc | 49.6 | 175 |