
Generative Adversarial Regression (GAR): Learning Conditional Risk Scenarios

About

We propose Generative Adversarial Regression (GAR), a framework for learning conditional risk scenarios through generators aligned with downstream risk objectives. GAR builds on a regression characterization of conditional risk for elicitable functionals, including quantiles, expectiles, and jointly elicitable pairs. We extend this principle from point prediction to generative modeling by training generators whose policy-induced risk matches that of real data under the same context. To ensure robustness across all policies, GAR adopts a minimax formulation in which an adversarial policy identifies worst-case discrepancies in risk evaluation while the generator adapts to eliminate them. This structure preserves alignment with the risk functional across a broad class of policies rather than a fixed, pre-specified set. We illustrate GAR through a tail-risk instantiation based on jointly elicitable $(\mathrm{VaR}, \mathrm{ES})$ objectives. Experiments on S\&P 500 data show that GAR produces scenarios that better preserve downstream risk than unconditional, econometric, and direct predictive baselines while remaining stable under adversarially selected policies.
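The abstract does not spell out the joint $(\mathrm{VaR}, \mathrm{ES})$ objective, but the standard tool here is a strictly consistent Fissler–Ziegel scoring function, which is minimized in expectation exactly at the true (VaR, ES) pair. The sketch below is one common instantiation (with $G_1 = 0$ and $G_2(e) = e^{e}$, valid for ES values on all of $\mathbb{R}$); the paper's exact score and `fz_score` helper name are assumptions, not taken from the source.

```python
import numpy as np

def fz_score(v, e, y, alpha):
    """One strictly consistent Fissler-Ziegel score for the (VaR, ES) pair
    at tail level alpha (lower tail, losses as negative returns).
    Choices G1 = 0, G2(e) = exp(e) are an illustrative assumption.
    A lower average score indicates a better (VaR, ES) forecast."""
    hit = (y <= v).astype(float)  # tail indicator 1{y <= v}
    return np.exp(e) * (e - v + hit * (v - y) / alpha - 1.0)

# Sanity check on N(0,1) "returns" at alpha = 0.05: the population-true
# pair (VaR, ES) = (-1.6449, -2.0627) should score below perturbed pairs.
rng = np.random.default_rng(0)
y = rng.standard_normal(200_000)
alpha = 0.05
v_true, e_true = -1.6449, -2.0627
s_true = fz_score(v_true, e_true, y, alpha).mean()
s_bad_v = fz_score(v_true + 0.3, e_true, y, alpha).mean()  # misplaced VaR
s_bad_e = fz_score(v_true, e_true + 0.3, y, alpha).mean()  # misplaced ES
```

Strict consistency is what lets a generator be trained against the score: matching the policy-induced VaR-ES score of real data under every context is equivalent, in expectation, to matching the conditional risk itself.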

Saeed Asadi, Jonathan Yu-Meng Li • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Financial Risk Estimation | S&P 500 1984-06-01 to 2025-08-20 (train) | VaR-ES Score: -3.932 | 6 |
| Financial Risk Estimation | S&P 500 1984-06-01 to 2025-08-20 (val) | VaR-ES Score: -3.95 | 6 |
| Financial Risk Estimation | S&P 500 1984-06-01 to 2025-08-20 (test) | VaR-ES Score: 3.922 | 6 |
