
Large-Scale Methods for Distributionally Robust Optimization

About

We propose and analyze algorithms for distributionally robust optimization of convex losses with conditional value at risk (CVaR) and $\chi^2$ divergence uncertainty sets. We prove that our algorithms require a number of gradient evaluations independent of training set size and number of parameters, making them suitable for large-scale applications. For $\chi^2$ uncertainty sets these are the first such guarantees in the literature, and for CVaR our guarantees scale linearly in the uncertainty level rather than quadratically as in previous work. We also provide lower bounds proving the worst-case optimality of our algorithms for CVaR and a penalized version of the $\chi^2$ problem. Our primary technical contributions are novel bounds on the bias of batch robust risk estimation and the variance of a multilevel Monte Carlo gradient estimator due to [Blanchet & Glynn, 2015]. Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.

Daniel Levy, Yair Carmon, John C. Duchi, Aaron Sidford • 2020
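To make the abstract's ingredients concrete, below is a minimal NumPy sketch of the empirical CVaR objective and a truncated multilevel Monte Carlo (MLMC) estimator in the spirit of [Blanchet & Glynn, 2015]. This is an illustrative sketch, not the paper's implementation: the function names, the base batch size `n0`, the level cap `jmax`, and the level distribution are all assumptions made here for exposition.

```python
import numpy as np

def cvar_loss(losses, alpha):
    """Empirical CVaR at level alpha: the average of the worst
    ceil(alpha * n) of the n losses (exact when alpha * n is an integer)."""
    k = max(1, int(np.ceil(alpha * len(losses))))
    return np.sort(losses)[-k:].mean()

def mlmc_estimate(robust_stat, sample_losses, n0=2, jmax=10, rng=None):
    """Truncated randomized-MLMC estimate of E[robust_stat(batch)] for a
    batch of size n0 * 2**jmax. Hypothetical interface: robust_stat maps
    a loss vector to a scalar robust risk; sample_losses(m) returns m
    i.i.d. losses."""
    rng = np.random.default_rng() if rng is None else rng
    # Draw a random level J with P(J = j) proportional to 2**-j.
    probs = 2.0 ** -np.arange(1, jmax + 1)
    probs /= probs.sum()
    J = int(rng.choice(np.arange(1, jmax + 1), p=probs))
    m = n0 * 2 ** J  # batch size doubles with the level
    x = sample_losses(m)
    # Telescoping correction: full-batch statistic minus the average of
    # the statistic over the two half batches.
    delta = robust_stat(x) - 0.5 * (
        robust_stat(x[: m // 2]) + robust_stat(x[m // 2:])
    )
    # Importance-weight by P(J) so the corrections telescope in expectation
    # to the size-(n0 * 2**jmax) batch statistic.
    return robust_stat(sample_losses(n0)) + delta / probs[J - 1]

# Example: CVaR at level 0.1 of squared-Gaussian losses.
rng = np.random.default_rng(0)
estimate = mlmc_estimate(
    lambda l: cvar_loss(l, alpha=0.1),
    lambda m: rng.standard_normal(m) ** 2,
    rng=rng,
)
```

Since level j costs about n0 * 2**j samples but is drawn with probability proportional to 2**-j, the expected number of sampled losses per estimate is O(n0 * jmax), logarithmic in the largest effective batch size; this is the kind of size-independent scaling the abstract refers to. The paper applies the construction to gradient estimation, whereas this sketch shows it for the risk value for simplicity.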

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Classification | CelebA | Average Accuracy | 87.7 | 137 |
| Classification | CelebA (test) | Average Accuracy | 87.7 | 92 |
| Image Classification | Waterbirds (test) | Worst-Group Accuracy | 75.9 | 92 |
| Image Classification | Waterbirds | Worst-Group Accuracy | 77.2 | 79 |
| Classification | CivilComments (test) | Worst-Case Accuracy | 64.2 | 47 |
| Classification | Camelyon17 | Accuracy | 70.5 | 46 |
| Object Classification | Waterbirds (test) | Worst-Group Accuracy | 77.2 | 22 |
| Natural Language Inference | MultiNLI (test) | Accuracy | 82 | 21 |
| Comment Classification | Civil Comments | Accuracy | 89.4 | 21 |
| Toxicity Detection | CivilComments-WILDS (test) | Average Accuracy | 92.5 | 19 |
Showing 10 of 15 rows.
