
VIPO: Value Function Inconsistency Penalized Offline Reinforcement Learning

About

Offline reinforcement learning (RL) learns effective policies from pre-collected datasets, offering a practical solution for applications where online interactions are risky or costly. Model-based approaches are particularly advantageous for offline RL, owing to their data efficiency and generalizability. However, due to inherent model errors, model-based methods often artificially introduce conservatism guided by heuristic uncertainty estimation, which can be unreliable. In this paper, we introduce VIPO, a novel model-based offline RL algorithm that incorporates self-supervised feedback from value estimation to enhance model training. Specifically, the model is learned by additionally minimizing the inconsistency between the value learned directly from the offline data and the one estimated from the model. We perform comprehensive evaluations from multiple perspectives to show that VIPO can learn a highly accurate model efficiently and consistently outperform existing methods. In particular, it achieves state-of-the-art performance on almost all tasks in both D4RL and NeoRL benchmarks. Overall, VIPO offers a general framework that can be readily integrated into existing model-based offline RL algorithms to systematically enhance model accuracy.
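The core idea — penalizing the gap between the value learned from offline data and the value implied by the learned model — can be sketched as an extra term added to the usual dynamics-model loss. This is an illustrative sketch only: the penalty weight `lam`, the squared-error form, and the function names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def vipo_model_loss(q_data, q_model, dynamics_loss, lam=1.0):
    """Combined model-training objective (sketch).

    q_data:        value estimates learned directly from the offline dataset
    q_model:       value estimates obtained by rolling out the learned model
    dynamics_loss: standard supervised model-fitting loss (e.g. MLE)
    lam:           weight on the value-inconsistency penalty (assumed)
    """
    # Penalize disagreement between the two value estimates.
    inconsistency = np.mean((np.asarray(q_data) - np.asarray(q_model)) ** 2)
    return dynamics_loss + lam * inconsistency
```

When the model-derived values match the data-derived ones, the penalty vanishes and training reduces to ordinary model fitting; larger disagreement pushes the model toward regions where its value predictions stay consistent with the data.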

Xuyang Chen, Guojian Wang, Keyu Yan, Lin Zhao • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score | 110 | 117
Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score | 113.2 | 115
Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score | 20 | 77
Offline Reinforcement Learning | D4RL Medium-Replay Hopper | Normalized Score | 109.6 | 72
Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score | 42.5 | 70
Offline Reinforcement Learning | D4RL Medium HalfCheetah | Normalized Score | 80 | 59
Offline Reinforcement Learning | D4RL Medium-Replay HalfCheetah | Normalized Score | 77.2 | 59
Offline Reinforcement Learning | D4RL Medium Walker2d | Normalized Score | 93.1 | 58
Offline Reinforcement Learning | D4RL walker2d medium-replay | Normalized Score | 98.4 | 45
Offline Reinforcement Learning | D4RL Adroit pen (cloned) | Normalized Return | 71.1 | 32

(Showing 10 of 24 rows)
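The scores above follow the D4RL normalization convention, which linearly rescales a policy's raw return so that a random policy scores 0 and an expert policy scores 100. A minimal sketch (the per-task random and expert reference returns come from the benchmark's own reference data; the numbers in the example below are purely illustrative):

```python
def d4rl_normalized_score(raw_return, random_return, expert_return):
    """Rescale a raw return to the D4RL 0-100 normalized scale."""
    return 100.0 * (raw_return - random_return) / (expert_return - random_return)

# Illustrative values only: a raw return halfway between the random and
# expert reference returns maps to a normalized score of 50.
print(d4rl_normalized_score(5.0, 0.0, 10.0))
```

Scores above 100, such as several in the table, indicate a policy that outperforms the expert reference used for normalization.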
