
Robust Regularized Policy Iteration under Transition Uncertainty

About

Offline reinforcement learning (RL) enables data-efficient and safe policy learning without online exploration, but its performance often degrades under distribution shift: the learned policy may visit out-of-distribution state-action pairs where value estimates and learned dynamics are unreliable. To address policy-induced extrapolation and transition uncertainty in a unified framework, we formulate offline RL as robust policy optimization, treating the transition kernel as a decision variable within an uncertainty set and optimizing the policy against the worst-case dynamics. We propose Robust Regularized Policy Iteration (RRPI), which replaces the intractable max-min bilevel objective with a tractable KL-regularized surrogate and derives an efficient policy iteration procedure from a robust regularized Bellman operator. We provide theoretical guarantees, showing that the proposed operator is a $\gamma$-contraction and that iteratively updating the surrogate yields monotonic improvement of the original robust objective and is guaranteed to converge. Experiments on D4RL benchmarks demonstrate that RRPI achieves strong average performance, outperforming recent baselines, including percentile-based methods, on the majority of environments while remaining competitive on the rest. Moreover, RRPI behaves robustly by assigning lower $Q$-values to actions with high epistemic uncertainty, which prevents the policy from executing unreliable out-of-distribution actions.
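The abstract does not spell out the KL-regularized surrogate, but a standard way to make a KL-penalized worst-case expectation over transitions tractable is the exponential-tilting (Donsker-Varadhan) dual, which turns the inner minimization over dynamics into a log-sum-exp under the nominal model. The sketch below illustrates that idea in a tabular setting; the function name robust_regularized_backup and the arguments P_hat, R, V, gamma, beta are illustrative choices, not the paper's notation, and this is not claimed to be RRPI's exact operator.

```python
import numpy as np

def robust_regularized_backup(P_hat, R, V, gamma=0.99, beta=1.0):
    """Illustrative KL-regularized robust Bellman backup (sketch, not the paper's operator).

    For each (s, a), the worst-case next-state value under a KL penalty around the
    nominal model P_hat has the closed-form dual
        min_P  E_P[V] + beta * KL(P || P_hat)  =  -beta * log E_{P_hat}[ exp(-V / beta) ],
    so the backup remains an expectation under the nominal transition kernel.

    P_hat : (S, A, S) nominal transition probabilities estimated from the dataset
    R     : (S, A)    reward table
    V     : (S,)      current state-value estimate
    """
    # Soft worst-case next-state value per (s, a): log-sum-exp under the nominal model.
    tilted = np.einsum("sak,k->sa", P_hat, np.exp(-V / beta))
    worst_case_next_v = -beta * np.log(tilted)
    # Robust regularized Q-backup; a value update would then take a (regularized) max over actions.
    return R + gamma * worst_case_next_v

# Tiny usage example with random tables (assumed shapes: S states, A actions).
S, A = 4, 2
rng = np.random.default_rng(0)
P_hat = rng.dirichlet(np.ones(S), size=(S, A))   # each (s, a) row sums to 1 over next states
R = rng.standard_normal((S, A))
V = np.zeros(S)
Q = robust_regularized_backup(P_hat, R, V, beta=0.5)
```

In this dual form, large beta recovers the nominal (non-robust) backup, while small beta makes the operator increasingly pessimistic about transitions that lead to low-value states.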

Hongqiang Lin, Zhenghui Fu, Weihao Tang, Pengfei Wang, Yiding Sun, Qixian Huang, Dongxu Zhang • 2026

Related benchmarks

Task                            Dataset                          Normalized Score   Rank
Offline Reinforcement Learning  D4RL halfcheetah-medium-expert   105.3              155
Offline Reinforcement Learning  D4RL hopper-medium-expert        111.9              153
Offline Reinforcement Learning  D4RL walker2d-medium-expert      115.7              124
Offline Reinforcement Learning  D4RL Medium-Replay Hopper        106.6              97
Offline Reinforcement Learning  D4RL Medium HalfCheetah          75.2               97
Offline Reinforcement Learning  D4RL Medium Walker2d             97.5               96
Offline Reinforcement Learning  D4RL walker2d-random             23.7               93
Offline Reinforcement Learning  D4RL halfcheetah-random          35.5               86
Offline Reinforcement Learning  D4RL Medium-Replay HalfCheetah   74.4               84
Offline Reinforcement Learning  D4RL hopper-random               35                 78

Showing 10 of 18 rows.
