
Saccader: Improving Accuracy of Hard Attention Models for Vision

About

Although deep convolutional neural networks achieve state-of-the-art performance across nearly all image classification tasks, their decisions are difficult to interpret. One approach that offers some level of interpretability by design is hard attention, which uses only relevant portions of the image. However, training hard attention models with only class label supervision is challenging, and hard attention has proved difficult to scale to complex datasets. Here, we propose a novel hard attention model, which we term Saccader. Key to Saccader is a pretraining step that requires only class labels and provides initial attention locations for policy gradient optimization. Our best models narrow the gap to common ImageNet baselines, achieving 75% top-1 and 91% top-5 accuracy while attending to less than one-third of the image.
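Because selecting a discrete glimpse location is non-differentiable, hard attention policies of this kind are typically trained with the REINFORCE policy gradient. The toy sketch below illustrates that idea only; it is not the Saccader implementation, and names such as `target_loc` (standing in for "the one informative image region") are purely illustrative assumptions.

```python
import numpy as np

# Toy setup: 36 candidate glimpse locations on an image grid; exactly one
# (target_loc, an illustrative assumption) yields a correct classification.
rng = np.random.default_rng(0)
n_locs = 36
target_loc = 21

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Policy logits over glimpse locations; start uniform.
logits = np.zeros(n_locs)
lr = 0.3          # learning rate
baseline = 0.0    # running-mean reward baseline to reduce variance

for step in range(2000):
    probs = softmax(logits)
    loc = rng.choice(n_locs, p=probs)            # sample a hard glimpse
    reward = 1.0 if loc == target_loc else 0.0   # 1 iff the glimpse suffices

    # REINFORCE: grad log pi(loc) w.r.t. softmax logits = onehot(loc) - probs
    grad_logp = -probs
    grad_logp[loc] += 1.0
    logits += lr * (reward - baseline) * grad_logp
    baseline += 0.1 * (reward - baseline)        # update running baseline

# After training, the policy concentrates its mass on the rewarded location.
print(int(np.argmax(softmax(logits))))
```

Saccader's contribution, per the abstract, is a pretraining step that gives this policy-gradient phase good initial attention locations using only class labels, rather than starting from a blind policy as in this toy.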

Gamaleldin F. Elsayed, Simon Kornblith, Quoc V. Le • 2019

Related benchmarks

Task                 | Dataset                  | Result                | Rank
Image Classification | CIFAR-10                 | Accuracy: 77.8        | 508
Image Classification | ImageNet100 (test)       | Top-1 Acc: 75.9       | 87
Classification       | COCO                     | Accuracy: 57.1        | 31
Scanpath Alignment   | CIFAR-10                 | DTW: 928.4            | 18
Image Classification | ImageNet-1k (val)        | Top-1 Accuracy: 72.27 | 14
Image Classification | ImageNet                 | Top-1 Accuracy: 70.31 | 10
Visual Search        | COCO-Search18 cross-task | Accuracy (%): 16.7    | 7
