
Predicting Deep Zero-Shot Convolutional Neural Networks using Textual Descriptions

About

One of the main challenges in Zero-Shot Learning of visual categories is gathering semantic attributes to accompany images. Recent work has shown that learning from textual descriptions, such as Wikipedia articles, avoids the problem of having to explicitly define these attributes. We present a new model that can classify unseen categories from their textual description. Specifically, we use text features to predict the output weights of both the convolutional and the fully connected layers in a deep convolutional neural network (CNN). We take advantage of the architecture of CNNs and learn features at different layers, rather than just learning an embedding space for both modalities, as is common with existing approaches. The proposed model also allows us to automatically generate a list of pseudo-attributes for each visual category consisting of words from Wikipedia articles. We train our models end-to-end using the Caltech-UCSD bird and flower datasets and evaluate both ROC and Precision-Recall curves. Our empirical results show that the proposed model significantly outperforms previous methods.
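The core idea, predicting a classifier's output weights from a class's text description so that unseen classes can be scored, can be illustrated with a minimal sketch. All names, dimensions, and the two-layer weight predictor below are illustrative assumptions, not the paper's actual architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: text features (e.g. TF-IDF of a Wikipedia
# article) and image features from a CNN layer.
TEXT_DIM, IMG_FEAT_DIM = 50, 128

# A small weight predictor g(t): maps a class's text features to the
# output-layer weights of an image classifier. (Randomly initialized
# here; in the paper such a mapping is trained end-to-end.)
W1 = rng.normal(0.0, 0.1, (64, TEXT_DIM))
W2 = rng.normal(0.0, 0.1, (IMG_FEAT_DIM, 64))

def predict_class_weights(text_feat):
    """Predict a classifier weight vector from a class's text features."""
    return W2 @ np.tanh(W1 @ text_feat)

def score(img_feat, text_feats):
    """Score an image against each (possibly unseen) class description."""
    weights = np.stack([predict_class_weights(t) for t in text_feats])
    return weights @ img_feat  # one compatibility score per class

# Usage: classify an image among three unseen classes, given only
# (synthetic) text features for their descriptions.
unseen_texts = rng.normal(size=(3, TEXT_DIM))
img = rng.normal(size=IMG_FEAT_DIM)
scores = score(img, unseen_texts)
pred = int(np.argmax(scores))
```

Because the classifier weights are a function of text rather than learned per class, a brand-new category can be scored at test time from its article alone, which is what distinguishes this from learning a fixed shared embedding space.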

Jimmy Ba, Kevin Swersky, Sanja Fidler, Ruslan Salakhutdinov • 2015

Related benchmarks

Task                     | Dataset                                        | Result               | Rank
Image Classification     | Animals with Attributes (AwA) (Standard Split) | Hit@1 Accuracy: 69.3 | 21
Zero-shot Classification | AwA 10-way 0-shot conventional setting         | Hit@1 Accuracy: 69.3 | 18
Image Classification     | Caltech-UCSD Birds-200-2011 (CUB) Standard     | Hit@1 Accuracy: 34   | 16
Zero-shot Classification | CUB 50-way 0-shot conventional setting         | Top-1 Accuracy: 34   | 16
