
Koopman-based surrogate modeling for reinforcement-learning control of Rayleigh-Bénard convection

About

Training reinforcement learning (RL) agents to control fluid dynamics systems is computationally expensive due to the high cost of direct numerical simulations (DNS) of the governing equations. Surrogate models offer a promising alternative by approximating the dynamics at a fraction of the computational cost, but their feasibility as training environments for RL is limited by distribution shifts, as policies induce state distributions not covered by the surrogate training data. In this work, we investigate the use of Linear Recurrent Autoencoder Networks (LRANs) for accelerating RL-based control of 2D Rayleigh-Bénard convection. We evaluate two training strategies: a surrogate trained on precomputed data generated with random actions, and a policy-aware surrogate trained iteratively using data collected from an evolving policy. Our results show that while surrogate-only training leads to reduced control performance, combining surrogates with DNS in a pretraining scheme recovers state-of-the-art performance while reducing training time by more than 40%. We demonstrate that policy-aware training mitigates the effects of distribution shift, enabling more accurate predictions in policy-relevant regions of the state space.
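The core idea of an LRAN is to encode the flow state into a low-dimensional latent space in which the dynamics evolve (approximately) linearly under a Koopman-style operator, so long rollouts become cheap matrix multiplications. The following is a minimal sketch of that architecture's forward pass; all dimensions, weight initialisations, and the control-input matrix `B` are illustrative assumptions, not the paper's actual configuration, and in practice the encoder, decoder, and latent operators would be learned jointly from trajectory data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper)
n_state, n_latent, n_action = 16, 4, 2

# Encoder/decoder weights and linear latent dynamics.
# Randomly initialised here purely for the sketch; normally trained.
W_enc = rng.standard_normal((n_latent, n_state)) * 0.1
W_dec = rng.standard_normal((n_state, n_latent)) * 0.1
A = rng.standard_normal((n_latent, n_latent)) * 0.1   # Koopman-style linear operator
B = rng.standard_normal((n_latent, n_action)) * 0.1   # hypothetical control-input matrix

def encode(x):
    """Nonlinear encoder mapping the full state to latent coordinates."""
    return np.tanh(W_enc @ x)

def decode(z):
    """Linear decoder mapping latent coordinates back to the state space."""
    return W_dec @ z

def rollout(x0, actions):
    """Predict a trajectory entirely in the latent space:
    encode once, step linearly with each action, decode every step."""
    z = encode(x0)
    preds = []
    for a in actions:
        z = A @ z + B @ a          # linear latent update with control input
        preds.append(decode(z))
    return np.stack(preds)

x0 = rng.standard_normal(n_state)
actions = rng.standard_normal((5, n_action))
traj = rollout(x0, actions)
print(traj.shape)  # (5, 16): five predicted states of dimension 16
```

Because every prediction step is a single matrix-vector product in the latent space, rolling out such a surrogate is orders of magnitude cheaper than advancing the DNS, which is what makes it attractive as an RL training environment.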

Tim Plotzki, Sebastian Peitz • 2026

Related benchmarks

Task                  Dataset                                       Result                      Rank
Active Flow Control   2D Rayleigh-Bénard convection                 Nusselt Number (Nu): 3.31   5
Active Flow Control   Rayleigh-Bénard convection DNS environment    Nusselt Number (Nu): 2.75   3
