Low-Shot Learning with Imprinted Weights

About

Human vision is able to immediately recognize novel visual categories after seeing just one or a few training examples. We describe how to add a similar capability to ConvNet classifiers by directly setting the final-layer weights from novel training examples during low-shot learning. We call this process weight imprinting, as it directly sets the weights for a new category from an appropriately scaled copy of the embedding-layer activations for that training example. The imprinting process provides a valuable complement to training with stochastic gradient descent: it immediately yields good classification performance and provides an initialization for any further fine-tuning. We show how this imprinting process is related to proxy-based embeddings. However, it differs in that only a single imprinted weight vector is learned for each novel category, rather than relying on a nearest-neighbor distance to training instances as typically used with embedding methods. Our experiments show that averaging imprinted weights provides better generalization than using nearest-neighbor instance embeddings.
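The core imprinting step can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: it assumes the network's embedding layer produces L2-normalized activations, averages them over the novel category's examples, renormalizes the average, and appends it as a new row of the final-layer weight matrix. Function names here are illustrative.

```python
import numpy as np

def imprint_weights(embeddings):
    """Compute an imprinted weight vector for one novel category.

    embeddings: (n, d) array of L2-normalized embedding activations
    for the category's n training examples (n >= 1). Returns the
    normalized mean of the examples as a unit-length (d,) vector.
    """
    mean = np.mean(np.asarray(embeddings, dtype=float), axis=0)
    return mean / np.linalg.norm(mean)

def add_novel_class(weight_matrix, embeddings):
    """Append an imprinted weight row for a new category to the
    final-layer weight matrix of shape (num_classes, d)."""
    w_new = imprint_weights(embeddings)
    return np.vstack([weight_matrix, w_new])
```

Classification then scores a query embedding by cosine similarity against each weight row (a matrix-vector product, since all vectors are unit length), so the new category is usable immediately and the imprinted row can later serve as an initialization for fine-tuning.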

Hang Qi, Matthew Brown, David G. Lowe · 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| 3D Object Detection | ScanNet V2 (val) | – | – | 361 |
| Few-shot classification | CUB | Accuracy | 85.3 | 96 |
| Generalized Few-Shot Learning | AWA2 | Accuracy | 93.5 | 48 |
| 3D Object Detection | SUN RGB-D | Base AP@0.25 | 66.69 | 40 |
| Batch incremental 3D object detection | ScanNet V2 (val) | mAP@0.25 (Base) | 72.77 | 28 |
| Generalized Few-Shot Learning | CUB | Accuracy | 79.5 | 24 |
| Few-shot Learning | SUN | Accuracy | 70.2 | 24 |
| Generalized Few-Shot Learning | SUN | Accuracy | 42.5 | 24 |
| Generalized Few-Shot Learning | miniImageNet GFSL | Accuracy (Novel) | 59.27 | 20 |
| Generalized Few-Shot Learning | tiered-ImageNet | Novel Accuracy | 74.01 | 18 |

Showing 10 of 16 rows.
