
Forecasting in Offline Reinforcement Learning for Non-stationary Environments

About

Offline Reinforcement Learning (RL) provides a promising avenue for training policies from pre-collected datasets when gathering additional interaction data is infeasible. However, existing offline RL methods often assume stationarity or only consider synthetic perturbations at test time, assumptions that often fail in real-world scenarios characterized by abrupt, time-varying offsets. These offsets can lead to partial observability, causing agents to misperceive their true state and degrading performance. To overcome this challenge, we introduce Forecasting in Non-stationary Offline RL (FORL), a framework that unifies (i) conditional diffusion-based candidate state generation, trained without presupposing any specific pattern of future non-stationarity, and (ii) zero-shot forecasting with time-series foundation models. FORL targets environments prone to unexpected, potentially non-Markovian offsets, requiring robust agent performance from the onset of each episode. Empirical evaluations on offline RL benchmarks, augmented with real-world time-series data to simulate realistic non-stationarity, demonstrate that FORL consistently improves performance compared to competitive baselines. By integrating zero-shot forecasting with the agent's experience, we aim to bridge the gap between offline RL and the complexities of real-world, non-stationary environments.
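The abstract describes the two components only at a high level. The snippet below is a minimal, illustrative sketch of how they might interact at inference time, assuming additive offsets (an observation is the true state plus an unknown offset). The names `select_state`, `sample_candidates`, and `forecast_offset` are hypothetical placeholders, not FORL's actual API: a conditional diffusion model proposes candidate true states from the corrupted observation, a zero-shot forecaster extrapolates the offset from the episode so far, and the candidate most consistent with that forecast is selected.

```python
import numpy as np


def select_state(obs, offset_history, sample_candidates, forecast_offset,
                 n_candidates=16):
    """Pick the candidate true state most consistent with the forecasted offset.

    obs:            corrupted observation, assumed obs = state + offset.
    offset_history: list of offsets inferred at earlier steps of the episode.
    """
    # Zero-shot forecast of the current offset from the history so far.
    d_hat = forecast_offset(offset_history)            # shape: (state_dim,)

    # Conditional diffusion model proposes candidate true states.
    candidates = sample_candidates(obs, n_candidates)  # (n_candidates, state_dim)

    # Each candidate implies an offset; score it against the forecast.
    implied_offsets = obs[None, :] - candidates
    errors = np.linalg.norm(implied_offsets - d_hat[None, :], axis=-1)
    best = int(np.argmin(errors))
    return candidates[best], implied_offsets[best]


if __name__ == "__main__":
    # Toy usage with stub components, for illustration only.
    rng = np.random.default_rng(0)
    state = rng.normal(size=4)
    obs = state + np.array([0.5, -0.2, 0.1, 0.0])      # unknown additive offset

    def forecast_offset(history):
        # Stub forecaster: average of past offsets (zeros if none yet).
        return np.mean(history, axis=0) if history else np.zeros(4)

    def sample_candidates(o, n):
        # Stub "diffusion" sampler: noisy de-offset guesses around obs.
        return o[None, :] - rng.normal(0.2, 0.3, size=(n, 4))

    s_hat, d_used = select_state(obs, [], sample_candidates, forecast_offset)
    print("recovered state estimate:", s_hat)
```

In a full pipeline, the selected state would be fed to a standard offline RL policy, and the implied offset appended to the history that conditions the next zero-shot forecast.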

Suzan Ece Ada, Georg Martius, Emre Ugur, Erhan Oztop • 2025

Related benchmarks

| Task | Dataset | Normalized Score | Rank |
| --- | --- | --- | --- |
| cube-single-play | real-data-A (australian-electricity-demand) | 23.7 | 6 |
| cube-single-play | real-data-B (electricity) | 60 | 6 |
| cube-single-play | real-data-C (electricity-hourly) | 42.1 | 6 |
| cube-single-play | real-data-D (electricity-nips) | 70 | 6 |
| cube-single-play | real-data-E | 32.7 | 6 |
