
Semantic-Guided Multi-Attention Localization for Zero-Shot Learning

About

Zero-shot learning extends conventional object classification to unseen-class recognition by introducing semantic representations of classes. Existing approaches predominantly focus on learning a proper mapping function for visual-semantic embedding, while neglecting the effect of learning discriminative visual features. In this paper, we study the significance of discriminative region localization. We propose a semantic-guided multi-attention localization model that automatically discovers the most discriminative parts of objects for zero-shot learning without any human annotations. Our model jointly learns cooperative global and local features from the whole object as well as the detected parts to categorize objects based on semantic descriptions. Moreover, with the joint supervision of an embedding softmax loss and a class-center triplet loss, the model is encouraged to learn features with high inter-class dispersion and intra-class compactness. Through comprehensive experiments on three widely used zero-shot learning benchmarks, we show the efficacy of the multi-attention localization, and our proposed approach improves the state-of-the-art results by a considerable margin.
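To make the two supervision signals mentioned in the abstract concrete, below is a minimal NumPy sketch of an embedding softmax loss and a class-center triplet loss. This is an illustrative reconstruction, not the authors' implementation: the function names, shapes, and the margin value are our own assumptions.

```python
import numpy as np

def embedding_softmax_loss(feat, class_semantics, label):
    # Compatibility scores between the visual feature and every class
    # semantic vector serve as logits for a softmax classifier.
    logits = class_semantics @ feat               # shape: (num_classes,)
    logits = logits - logits.max()                # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[label])                  # cross-entropy for true class

def class_center_triplet_loss(feat, centers, label, margin=1.0):
    # Pull the feature toward its own class center (intra-class compactness)
    # and push it at least `margin` away from the nearest other class
    # center (inter-class dispersion).
    d = np.sum((centers - feat) ** 2, axis=1)     # squared distance to each center
    pos = d[label]                                # distance to own center
    neg = np.min(np.delete(d, label))             # distance to nearest other center
    return max(0.0, margin + pos - neg)
```

In training, the two terms would be summed (typically with a weighting coefficient) and minimized jointly over the global and part-level features.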

Yizhe Zhu, Jianwen Xie, Zhiqiang Tang, Xi Peng, Ahmed Elgammal • 2019

Related benchmarks

Task                            Dataset         Metric                          Result   Rank
Generalized Zero-Shot Learning  CUB             H Score                         48.5     250
Generalized Zero-Shot Learning  AWA2            S Score                         87.1     165
Zero-shot Learning              CUB             Top-1 Accuracy                  71       144
Zero-shot Learning              AWA2            Top-1 Accuracy                  0.688    95
Image Classification            CUB             Unseen Top-1 Acc                69.9     89
Image Classification            SUN             Harmonic Mean Top-1 Accuracy    53.7     86
Zero-shot Learning              SUN (unseen)    Top-1 Accuracy (%)              70.9     50
Zero-shot Learning              CUB (unseen)    Top-1 Accuracy                  74.7     49
Zero-shot Learning              AWA2 (unseen)   Top-1 Acc                       92.1     37
Image Classification            AWA2 GZSL       Acc (Unseen)                    87.3     32

Showing 10 of 11 rows
