
Learning Robust Global Representations by Penalizing Local Predictive Power

About

Despite their renowned predictive power on i.i.d. data, convolutional neural networks are known to rely more on high-frequency patterns that humans deem superficial than on low-frequency patterns that agree better with intuitions about what constitutes category membership. This paper proposes a method for training robust convolutional networks by penalizing the predictive power of the local representations learned by earlier layers. Intuitively, the networks are forced to discard predictive signals, such as color and texture, that can be gleaned from local receptive fields, and to rely instead on the global structure of the image. Across a battery of synthetic and benchmark domain adaptation tasks, the method confers improved out-of-domain generalization. To evaluate cross-domain transfer, the paper also introduces ImageNet-Sketch, a new dataset of sketch-like images that matches the ImageNet classification validation set in categories and scale.
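The penalty described in the abstract can be written schematically. The following is a sketch, not the paper's exact formulation: the notation ($f_\theta$, $g_\theta$, $h_\phi$, $\lambda$, $P$) is ours, and it assumes an adversarially trained auxiliary patch classifier whose loss is subtracted from the main objective:

```latex
% Schematic two-player objective (our notation, hedged reconstruction):
%   f_theta : full network producing the global prediction
%   g_theta : early layers; g_theta(x)_p is the local feature at patch p
%   h_phi   : auxiliary classifier predicting y from a single local feature
%   ell     : cross-entropy loss; lambda > 0 weighs the local penalty; P patches
\min_{\theta}\; \mathbb{E}_{(x,y)}\!\left[
  \ell\bigl(f_\theta(x),\,y\bigr)
  \;-\; \lambda\,\frac{1}{P}\sum_{p=1}^{P} \ell\bigl(h_\phi(g_\theta(x)_p),\,y\bigr)
\right],
\qquad
\min_{\phi}\; \mathbb{E}_{(x,y)}\,
  \frac{1}{P}\sum_{p=1}^{P} \ell\bigl(h_\phi(g_\theta(x)_p),\,y\bigr)
```

The main network minimizes its classification loss while maximizing the loss of the patch classifier, so early-layer features are pushed to carry little label information on their own; the patch classifier is simultaneously trained to extract whatever local signal remains.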

Haohan Wang, Songwei Ge, Eric P. Xing, Zachary C. Lipton • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet A | Top-1 Acc | 50.63 | 553 |
| Image Classification | EuroSAT | Accuracy | 45.37 | 497 |
| Image Classification | Food-101 | Accuracy | 86.06 | 494 |
| Image Classification | ImageNet V2 | Top-1 Acc | 64.07 | 487 |
| Image Classification | DTD | Accuracy | 45.73 | 487 |
| Image Classification | Flowers102 | Accuracy | 71.88 | 478 |
| Image Classification | ImageNet-R | Top-1 Acc | 76.18 | 474 |
| Image Classification | SUN397 | Accuracy | 67.36 | 425 |
| Image Classification | UCF101 | Top-1 Acc | 68.21 | 404 |
| Image Classification | StanfordCars | Accuracy | 65.32 | 266 |

Showing 10 of 43 rows.
