
Revisiting Adversarial Training under Long-Tailed Distributions

About

Deep neural networks are vulnerable to adversarial attacks, which often lead to erroneous outputs. Adversarial training is recognized as one of the most effective methods to counter such attacks. However, existing adversarial training techniques have predominantly been tested on balanced datasets, whereas real-world data often exhibit a long-tailed distribution, casting doubt on the efficacy of these methods in practical scenarios. In this paper, we delve into adversarial training under long-tailed distributions. Through an analysis of the previous work "RoBal", we discover that utilizing Balanced Softmax Loss (BSL) alone can achieve performance comparable to the complete RoBal approach while significantly reducing training overhead. Additionally, we reveal that, similar to uniform distributions, adversarial training under long-tailed distributions also suffers from robust overfitting. To address this, we explore data augmentation as a solution and unexpectedly discover that, unlike results obtained with balanced data, data augmentation not only effectively alleviates robust overfitting but also significantly improves robustness. We further investigate the reasons behind this improvement and identify that it is attributable to the increased diversity of examples. Extensive experiments further corroborate that data augmentation alone can significantly improve robustness. Finally, building on these findings, we demonstrate that compared to RoBal, the combination of BSL and data augmentation leads to a +6.66% improvement in model robustness under AutoAttack on CIFAR-10-LT. Our code is available at https://github.com/NISPLab/AT-BSL.
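The Balanced Softmax Loss the abstract refers to shifts each logit by the log of its class frequency before taking the softmax, so head classes must "overcome" their prior to win. Below is a minimal NumPy sketch of that idea, not the authors' implementation; the function name and the per-class-count interface are illustrative assumptions:

```python
import numpy as np

def balanced_softmax_loss(logits, labels, class_counts):
    """Balanced Softmax cross-entropy: add log class-frequency to the logits.

    logits:       (batch, num_classes) raw scores
    labels:       (batch,) integer class labels
    class_counts: per-class training-sample counts (the long-tailed prior)
    """
    log_prior = np.log(np.asarray(class_counts, dtype=np.float64))
    adjusted = np.asarray(logits, dtype=np.float64) + log_prior  # broadcast over batch

    # Numerically stable log-softmax over the adjusted logits.
    adjusted = adjusted - adjusted.max(axis=1, keepdims=True)
    log_probs = adjusted - np.log(np.exp(adjusted).sum(axis=1, keepdims=True))

    # Mean negative log-likelihood of the true labels.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With uniform class counts the log-prior is a constant shift, so the loss reduces to standard cross-entropy; with long-tailed counts, tail-class labels incur a larger loss, which is what pushes the model to fit them harder.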

Xinli Yue, Ningping Mou, Qian Wang, Lingchen Zhao • 2024

Related benchmarks

| Task                   | Dataset                          | Result                   | Rank |
|------------------------|----------------------------------|--------------------------|------|
| Image Classification   | CIFAR100-LT (test)               | --                       | 45   |
| Image Classification   | CIFAR-10 long-tailed (test)      | Clean Accuracy: 77.27    | 42   |
| Image Classification   | CIFAR-10-LT                      | Clean Accuracy: 81.92    | 26   |
| Image Classification   | CIFAR-100-LT IR=50 (test)        | Top-1 Acc (IR 50): 49.46 | 23   |
| Image Classification   | CIFAR-100 LT IR=10 (test)        | Accuracy: 48.41          | 21   |
| Adversarial Robustness | CIFAR-100-LT (test)              | Clean Accuracy: 50.66    | 20   |
| Image Classification   | CIFAR-100 Long-Tailed (test)     | Clean Accuracy: 55.55    | 20   |
| Image Classification   | CIFAR-10-LT IR=10 (test)         | Accuracy (Clean): 74.34  | 15   |
| Image Classification   | MedMNIST (test)                  | Clean Accuracy: 48.55    | 11   |
| Image Classification   | CIFAR-10 IR=100 long-tail (test) | Clean Accuracy: 54.73    | 5    |

(Showing 10 of 14 rows.)
