
Learning Sample Reweighting for Accuracy and Adversarial Robustness

About

There has been great interest in enhancing the robustness of neural network classifiers to defend against adversarial perturbations through adversarial training, while balancing the trade-off between robust accuracy and standard accuracy. We propose a novel adversarial training framework that learns to reweight the loss associated with individual training samples based on a notion of class-conditioned margin, with the goal of improving robust generalization. We formulate weighted adversarial training as a bilevel optimization problem with the upper-level problem corresponding to learning a robust classifier, and the lower-level problem corresponding to learning a parametric function that maps from a sample's multi-class margin to an importance weight. Extensive experiments demonstrate that our approach consistently improves both clean and robust accuracy compared to related methods and state-of-the-art baselines.
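The abstract does not specify the weighting function or training loop, so the sketch below only illustrates the two ingredients it names: a sample's multi-class margin (the true-class logit minus the best competing logit) and a hypothetical parametric map from that margin to an importance weight, here a sigmoid with assumed parameters `alpha` and `beta`. All function names and the choice of sigmoid are illustrative assumptions, not the paper's method.

```python
import numpy as np

def multiclass_margin(logits, labels):
    """Multi-class margin: true-class logit minus the largest other-class logit.

    Negative margin means the sample is misclassified; small positive margin
    means it sits near the decision boundary.
    """
    n = logits.shape[0]
    true_logit = logits[np.arange(n), labels]
    competing = logits.copy()
    competing[np.arange(n), labels] = -np.inf  # mask out the true class
    return true_logit - competing.max(axis=1)

def margin_to_weight(margin, alpha=1.0, beta=0.0):
    """Hypothetical parametric margin-to-weight map (a sigmoid is assumed here).

    Low-margin (hard) samples receive larger weights; in the paper this map's
    parameters would be learned in the lower-level problem.
    """
    return 1.0 / (1.0 + np.exp(alpha * margin - beta))

def weighted_cross_entropy(logits, labels, weights):
    """Per-sample cross-entropy reweighted by the learned importance weights."""
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]
    return (weights * nll).sum() / weights.sum()

# Toy batch: sample 0 is confidently correct, sample 1 is borderline.
logits = np.array([[2.0, 0.5, -1.0],
                   [0.2, 0.1,  0.3]])
labels = np.array([0, 2])

m = multiclass_margin(logits, labels)        # [1.5, 0.1]
w = margin_to_weight(m)                      # borderline sample weighted higher
loss = weighted_cross_entropy(logits, labels, w)
```

In the paper's bilevel formulation, `margin_to_weight` would be trained in the lower-level problem while the classifier producing `logits` is trained in the upper-level problem; the sketch above only shows one weighted loss evaluation.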

Chester Holtz, Tsui-Wei Weng, Gal Mishne • 2022

Related benchmarks

Task                 | Dataset            | Result          | Rank
Image Classification | CIFAR-100          | -               | 622
Image Classification | Stanford Cars      | Accuracy: 81.07 | 477
Image Classification | Aircraft           | Accuracy: 80.37 | 302
Image Classification | ImageNet-1K        | Accuracy: 75.6  | 190
Image Classification | Oxford-IIIT Pet    | Accuracy: 92.38 | 161
Image Classification | ImageNet-100       | Accuracy: 87.25 | 84
Image Classification | DR In-Distribution | Accuracy: 91.12 | 11
