
When Neural Networks Fail to Generalize? A Model Sensitivity Perspective

About

Domain generalization (DG) aims to train a model that performs well on unseen domains with different data distributions. This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG), where only a single source domain is available for training. To tackle this challenge, we first ask: when do neural networks fail to generalize? We empirically identify a property of a model that correlates strongly with its generalization ability, which we coin "model sensitivity". Based on our analysis, we propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies. Models trained with these hard-to-learn samples can effectively suppress their sensitivity in the frequency space, which leads to improved generalization performance. Extensive experiments on multiple public datasets demonstrate the superiority of our approach, which surpasses state-of-the-art single-DG methods.
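To make the core idea concrete, here is a minimal sketch of spectrum-targeted augmentation: transform an image to the frequency domain, perturb the amplitude spectrum only at frequencies flagged as highly sensitive, and reconstruct the image. The function name, parameters, and the use of random (rather than adversarially optimized) noise are illustrative assumptions, not the paper's actual SADA procedure.

```python
import numpy as np

def spectral_perturb(image, sensitivity_mask, epsilon=0.1, rng=None):
    """Sketch of frequency-targeted augmentation (hypothetical helper).

    image            -- 2D array (a grayscale image)
    sensitivity_mask -- 2D array of per-frequency weights; assumed to be
                        large where the model is highly sensitive
    epsilon          -- perturbation strength
    """
    rng = rng or np.random.default_rng(0)
    spectrum = np.fft.fft2(image)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    # Multiplicative noise on the amplitude spectrum, scaled by the
    # (assumed) sensitivity mask; the actual method optimizes this
    # perturbation adversarially instead of sampling it.
    noise = epsilon * rng.standard_normal(amplitude.shape) * sensitivity_mask
    amplitude = np.clip(amplitude * (1.0 + noise), 0.0, None)
    # Recombine perturbed amplitude with the original phase and invert.
    perturbed = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(perturbed))
```

Training on such perturbed samples is what, per the abstract, suppresses the model's sensitivity at those frequencies.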

Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan • 2022

Related benchmarks

Task                     | Dataset                      | Result                 | Rank
Image Classification     | CIFAR-10-C                   | Accuracy: 77.33        | 127
Image Classification     | PACS                         | Accuracy: 65.71        | 100
Person Re-Identification | DukeMTMC-reID to Market1501  | mAP: 30                | 67
Image Classification     | Digits                       | Average Accuracy: 76.56 | 23
Person Re-Identification | Market1501 to DukeMTMC       | mAP: 33.1              | 10
