
HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning

About

In this work we propose a HyperTransformer, a Transformer-based model for supervised and semi-supervised few-shot learning that generates weights of a convolutional neural network (CNN) directly from support samples. Since the dependence of a small generated CNN model on a specific task is encoded by a high-capacity Transformer model, we effectively decouple the complexity of the large task space from the complexity of individual tasks. Our method is particularly effective for small target CNN architectures where learning a fixed universal task-independent embedding is not optimal and better performance is attained when the information about the task can modulate all model parameters. For larger models we discover that generating the last layer alone allows us to produce competitive or better results than those obtained with state-of-the-art methods while being end-to-end differentiable.
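The core idea above, a high-capacity Transformer that consumes support samples and emits the weights of a small task-specific model, can be sketched in a few lines. The sketch below is a toy illustration under stated assumptions: a single self-attention layer with frozen random weights stands in for the trained Transformer, support features are assumed precomputed, and only a last linear layer is generated (mirroring the "generate the last layer alone" variant for larger models). All shapes and projection names are illustrative, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(tokens, wq, wk, wv):
    """One self-attention layer: each support token attends to all others."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[1]))
    return scores @ v

# Toy 5-way 1-shot episode: 5 support embeddings (e.g. CNN features), dim 16.
n_way, feat_dim, d_model = 5, 16, 32
support_feats = rng.normal(size=(n_way, feat_dim))
support_labels = np.eye(n_way)  # one-hot class labels

# Each token concatenates a support feature with its label, then is
# projected into the Transformer width (illustrative input encoding).
tokens = np.concatenate([support_feats, support_labels], axis=1) @ rng.normal(
    size=(feat_dim + n_way, d_model))

# Frozen random projections stand in for the trained weight generator.
wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
ctx = attention(tokens, wq, wk, wv)  # (n_way, d_model)

# Decode the attended tokens into the target model's last layer:
# one weight row (feat_dim) plus a bias per class.
w_head = ctx @ rng.normal(size=(d_model, feat_dim)) * 0.1  # (n_way, feat_dim)
b_head = ctx @ rng.normal(size=(d_model, 1)) * 0.1         # (n_way, 1)

# The generated head then classifies query features as a plain linear layer.
query = rng.normal(size=(3, feat_dim))  # 3 query samples
logits = query @ w_head.T + b_head.T    # (3, n_way)
print(logits.shape)
```

Because every step is a differentiable array operation, the generator can be trained end-to-end through the query-set loss, which is what distinguishes this setup from methods that require an inner optimization loop per task.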

Andrey Zhmoginov, Mark Sandler, Max Vladymyrov • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Few-shot classification | Mini-Imagenet 5-way 5-shot | Accuracy 68.1 | 87 |
| Image Classification | Mini-Imagenet (test) | Acc (5-shot) 68.5 | 75 |
| 5-way Few-shot Image Classification | tieredImageNet 5-shot (test) | Accuracy 73.9 | 41 |
| Image Classification | tieredImageNet (test) | -- | 32 |
| Few-shot Image Classification | MiniImageNet 5-way 1-shot | Accuracy 55.1 | 28 |
| 5-way Few-shot Image Classification | tieredImageNet 1-shot (meta-test) | Accuracy 56.3 | 18 |
| Few-shot classification | Omniglot 20-way 1-shot | Accuracy 96.2 | 15 |
| Few-shot classification | Omniglot 20-way 5-shot | Accuracy 98.8 | 15 |

Other info

Code
