
DeepGaze II: Reading fixations from deep features trained on object recognition

About

Here we present DeepGaze II, a model that predicts where people look in images. The model uses the features from the VGG-19 deep neural network trained to identify objects in images. In contrast to other saliency models that use deep features, here we use the VGG features for saliency prediction with no additional fine-tuning (rather, a few readout layers are trained on top of the VGG features to predict saliency). The model is therefore a strong test of transfer learning. After conservative cross-validation, DeepGaze II explains about 87% of the explainable information gain in the patterns of fixations and achieves top performance in area-under-the-curve metrics on the MIT300 hold-out benchmark. These results corroborate the finding from DeepGaze I (which explained 56% of the explainable information gain) that deep features trained on object recognition provide a versatile feature space for performing related visual tasks. We explore the factors that contribute to this success and present several informative image examples. A web service is available to compute model predictions at http://deepgaze.bethgelab.org.
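The "explainable information gain" figures above are based on the information-gain metric: the average log-likelihood advantage (in bits per fixation) of the model's predicted fixation density over a baseline density, evaluated at the observed fixation locations. This is not the paper's evaluation code; the sketch below is a minimal NumPy illustration in which the function name, the toy densities, and the fixation list are all hypothetical.

```python
import numpy as np

def information_gain(model_density, baseline_density, fixations):
    """Average log-likelihood gain (bits per fixation) of a model's
    predicted fixation density over a baseline density, evaluated
    at the observed fixation locations.

    model_density, baseline_density: 2D non-negative arrays over
    image locations (normalized to probability maps internally).
    fixations: list of (row, col) fixation coordinates.
    """
    model_density = model_density / model_density.sum()
    baseline_density = baseline_density / baseline_density.sum()
    rows, cols = zip(*fixations)
    rows, cols = np.asarray(rows), np.asarray(cols)
    return np.mean(np.log2(model_density[rows, cols])
                   - np.log2(baseline_density[rows, cols]))

# Toy example: a uniform baseline vs. a model that concentrates
# probability mass near the location where most fixations land.
h, w = 4, 4
baseline = np.ones((h, w))
model = np.ones((h, w))
model[1, 1] = 17.0  # extra mass at one location
fixations = [(1, 1), (1, 1), (2, 3)]
ig = information_gain(model, baseline, fixations)
# ig > 0: the model predicts these fixations better than the baseline
```

A percentage such as "87% of the explainable information gain" then expresses the model's gain as a fraction of the gain achieved by a gold-standard (e.g. cross-subject) predictor over the same baseline.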

Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge · 2016

Related benchmarks

Task                             Dataset                         Metric   Result   Rank
Saliency Prediction              MIT300 (test)                   CC       0.52     56
Visual Saliency Prediction       SALICON (test)                  CC       0.479    12
Affordance Grounding             OPRA 28 x 28 (test)             KLD      1.9      11
Affordance Grounding             EPIC-Hotspots 28 x 28 (test)    KLD      1.35     10
Grounded affordance prediction   OPRA (seen classes)             KLD      1.897    9
Affordance Grounding             AGD20k v1 (Unseen)              KLD      1.99     8
Grounded affordance prediction   EPIC (seen classes)             KLD      1.352    8
Generalization to novel objects  EPIC novel objects              KLD      1.297    8
Affordance Grounding             AGD20k v1 (Seen)                KLD      1.858    8
Generalization to novel objects  OPRA novel objects              KLD      1.757    8
