Preconditioned One-Step Generative Modeling for Bayesian Inverse Problems in Function Spaces
About
We propose a machine-learning algorithm for Bayesian inverse problems in the function-space regime based on one-step generative transport. Building on Mean Flows, we learn a fully conditional amortized sampler with a neural-operator backbone that maps samples from a Gaussian reference distribution to approximate posterior samples. We show that although white-noise references may be admissible at a fixed discretization, they become incompatible with the function-space limit, leading to unstable inference for Bayesian problems arising from PDEs. To address this issue, we adopt a prior-aligned anisotropic Gaussian reference distribution and establish the Lipschitz regularity of the resulting transport. Our method is not distilled from MCMC: training relies only on prior samples and simulated partial, noisy observations. Once trained, the sampler generates a $64\times64$ posterior sample in $\sim 10^{-3}$ s, avoiding the repeated PDE solves of MCMC while matching key posterior summaries.
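As a rough illustration of what a prior-aligned anisotropic reference looks like in practice (not the paper's exact covariance), the sketch below draws a sample from a Matérn-like Gaussian measure $N(0, (-\Delta + \tau^2 I)^{-\alpha})$ on a $64\times64$ periodic grid by colouring white noise in Fourier space. The function name and the hyperparameters `alpha` and `tau` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sample_prior_aligned_reference(n=64, alpha=2.0, tau=3.0, rng=None):
    """Draw one sample from N(0, (-Laplacian + tau^2 I)^{-alpha}) on an n x n
    periodic grid by scaling white noise in Fourier space (spectral sampler).
    Illustrative sketch; covariance and parameters are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Eigenvalues of the covariance operator on the Fourier basis.
    spectrum = (4.0 * np.pi**2 * (kx**2 + ky**2) + tau**2) ** (-alpha)
    xi = rng.standard_normal((n, n))               # white noise at this discretization
    z_hat = np.sqrt(spectrum) * np.fft.fft2(xi)    # colour the noise with C^{1/2}
    return np.real(np.fft.ifft2(z_hat))
```

A trained one-step sampler (call it `G_theta`, a hypothetical name here) conditioned on observations `y` would then map such a draw `z = sample_prior_aligned_reference()` to an approximate posterior sample `u ≈ G_theta(z, y)` in a single forward pass, rather than starting from discretization-dependent white noise.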
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Posterior Sampling | Advection | Time per Sample (s) | 3.5 | 6 |
| Posterior Sampling | Darcy | Time per Sample (s) | 3.5 | 6 |
| Posterior Sampling | Reaction-diffusion | Time per Sample (s) | 3.5 | 6 |
| Posterior Sampling | Navier-Stokes | Time per Sample (s) | 3.5 | 6 |