Meta Networks
About
Neural networks have been successfully applied in applications with a large amount of labeled data. However, the task of rapid generalization on new concepts with small training data, while preserving performance on previously learned ones, still presents a significant challenge to neural network models. In this work, we introduce a novel meta-learning method, Meta Networks (MetaNet), that learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. When evaluated on the Omniglot and Mini-ImageNet benchmarks, our MetaNet models achieve near human-level performance and outperform the baseline approaches by up to 6% accuracy. We demonstrate several appealing properties of MetaNet relating to generalization and continual learning.
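The core idea of fast parameterization can be illustrated with a minimal sketch: a meta-learner maps loss-gradient "meta information" from a support example of a new task into fast weights, which are combined with slowly learned weights at prediction time. This is a toy NumPy illustration under simplifying assumptions (a linear model, a linear meta-learner, additive weight combination), not the paper's full architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(x, slow_w, fast_w):
    # Slow and fast weights are combined (here: simply summed) before use.
    return x @ (slow_w + fast_w)

def loss_grad(x, y, w):
    # Gradient of the squared error 0.5 * (x @ w - y)**2 w.r.t. w.
    return (x @ w - y) * x

# Slow weights: learned across tasks (fixed here for illustration).
slow_w = rng.normal(size=3)

# Hypothetical meta-learner: a linear map from a gradient to fast weights.
meta_W = rng.normal(size=(3, 3)) * 0.1

# One labeled support example from a "new task".
x_support, y_support = rng.normal(size=3), 1.0

# Fast parameterization: derive fast weights from the support gradient,
# shifting the model's inductive bias for this task in a single step.
g = loss_grad(x_support, y_support, slow_w)
fast_w = meta_W @ g

# A query example is then predicted with the combined slow + fast weights.
x_query = rng.normal(size=3)
y_hat = predict(x_query, slow_w, fast_w)
```

In MetaNet the meta-learner itself is trained across many tasks so that the generated fast weights generalize from very few support examples; the linear map above stands in for that learned component.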
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Few-shot Image Classification | Mini-Imagenet (test) | Accuracy: 49.21 | 235 |
| 5-way Classification | miniImageNet (test) | -- | 231 |
| 5-way Few-shot Classification | MiniImagenet | -- | 150 |
| Few-shot classification | Omniglot (test) | -- | 109 |
| Few-shot classification | Mini-ImageNet 1-shot 5-way (test) | Accuracy: 49.21 | 82 |
| 5-way Classification | miniImageNet 5-way (test) | -- | 47 |
| 5-way Few-shot Classification | Omniglot (test) | Accuracy (1-shot): 98.95 | 27 |
| 20-way Few-shot Classification | Omniglot (test) | 1-shot Accuracy: 97 | 18 |
| Few-shot Image Classification | Omniglot 20-Way | Accuracy: 97 | 16 |
| Few-shot classification | Omniglot 20-way 1-shot | Accuracy: 97 | 15 |