
Differentially Private Non-convex Distributionally Robust Optimization

About

Real-world deployments routinely face distribution shifts, group imbalances, and adversarial perturbations, under which the traditional Empirical Risk Minimization (ERM) framework can degrade severely. Distributionally Robust Optimization (DRO) addresses this issue by optimizing the worst-case expected loss over an uncertainty set of distributions, offering a principled approach to robustness. Meanwhile, since training data in DRO often involves sensitive information, safeguarding it against leakage via Differential Privacy (DP) is essential. In contrast to classical DP-ERM, DP-DRO has received much less attention due to its minimax optimization structure with an uncertainty constraint. To bridge this gap, we provide a comprehensive study of DP-(finite-sum)-DRO with $\psi$-divergence and non-convex loss. First, we study DRO with general $\psi$-divergence by reformulating it as a minimization problem, and develop a novel $(\varepsilon, \delta)$-DP optimization method, called DP Double-Spider, tailored to this structure. Under mild assumptions, we show that it achieves a utility bound of $\mathcal{O}(\frac{1}{\sqrt{n}}+ (\frac{\sqrt{d \log (1/\delta)}}{n \varepsilon})^{2/3})$ in terms of the gradient norm, where $n$ denotes the data size and $d$ denotes the model dimension. We further improve the utility rate for specific divergences. In particular, for DP-DRO with KL-divergence, by transforming the problem into a compositional finite-sum optimization problem, we develop a DP Recursive-Spider method and show that it achieves a utility bound of $\mathcal{O}((\frac{\sqrt{d \log(1/\delta)}}{n\varepsilon})^{2/3})$, matching the best-known result for non-convex DP-ERM. Experimentally, we demonstrate that our proposed methods outperform existing approaches for DP minimax optimization.
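To give intuition for the KL-divergence case, the inner worst-case expectation over a KL ball of radius $\rho$ around the empirical distribution admits the standard dual reformulation $\min_{\lambda > 0}\, \lambda \log\big(\tfrac{1}{n}\sum_i e^{\ell_i/\lambda}\big) + \lambda\rho$, which is exactly the compositional (log-sum-exp) structure the abstract refers to. The NumPy sketch below evaluates this dual by a coarse grid search over $\lambda$; it is an illustration of the reformulation only, not the paper's private algorithm, and the loss values and radius are hypothetical inputs.

```python
import numpy as np

def kl_dro_loss(losses, rho, lambdas=None):
    """Worst-case expected loss over a KL ball of radius rho around the
    empirical distribution, via the well-known dual:
        min_{lam > 0}  lam * log(mean(exp(losses / lam))) + lam * rho
    Solved here by a coarse grid search over lam (illustration only)."""
    losses = np.asarray(losses, dtype=float)
    if lambdas is None:
        lambdas = np.logspace(-2, 2, 200)
    best = np.inf
    for lam in lambdas:
        # Numerically stable log-mean-exp of losses / lam.
        z = losses / lam
        lme = np.log(np.mean(np.exp(z - z.max()))) + z.max()
        best = min(best, lam * lme + lam * rho)
    return best

# Hypothetical per-sample losses and a small robustness radius.
losses = np.array([0.2, 0.4, 1.5, 0.3])
erm = losses.mean()
dro = kl_dro_loss(losses, rho=0.1)
# The robust objective always upper-bounds the plain empirical average,
# since the empirical distribution itself lies inside the KL ball.
assert erm <= dro <= losses.max()
```

Because the dual is a smooth minimization over a single extra scalar, it turns the minimax DRO problem into an ordinary (compositional) minimization, which is what makes finite-sum variance-reduced methods such as Spider-style estimators applicable in the private setting.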

Difei Xu, Meng Ding, Zebin Ma, Huanyi Xie, Youming Tao, Aicha Slaitane, Di Wang · 2026

Related benchmarks

| Task                 | Dataset             | Result         | Rank |
|----------------------|---------------------|----------------|------|
| Image Classification | FashionMNIST (test) | –              | 218  |
| Image Classification | CelebA (test)       | Accuracy 90.45 | 37   |
| Image Classification | CIFAR10-ST (test)   | Accuracy 57.26 | 17   |
| Image Classification | MNIST-ST (test)     | Accuracy 99.66 | 16   |
