Uncertainty-Aware Rank-One MIMO Q Network Framework for Accelerated Offline Reinforcement Learning

About

Offline reinforcement learning (RL) has garnered significant interest due to its safe and easily scalable paradigm. However, training under this paradigm presents its own challenge: extrapolation error stemming from out-of-distribution (OOD) data. Existing methodologies have tried to address this issue by penalizing OOD Q-values or constraining the learned policy to stay close to the behavior policy. Nonetheless, these approaches are often limited by overly conservative use of OOD data, imprecise characterization of OOD data, and significant computational overhead. To address these challenges, this paper introduces an Uncertainty-Aware Rank-One Multi-Input Multi-Output (MIMO) Q Network framework, which aims to enhance offline RL by fully leveraging the potential of OOD data while keeping the learning process efficient. Specifically, the framework quantifies data uncertainty and incorporates it into the training losses, so as to train a policy that maximizes the lower confidence bound of the corresponding Q-function. Furthermore, a Rank-One MIMO architecture is introduced to model the uncertainty-aware Q-function, offering the same capacity for uncertainty quantification as an ensemble of networks at a cost nearly equivalent to that of a single network. Consequently, the framework strikes a balance between precision, speed, and memory efficiency, culminating in improved overall performance. Extensive experiments on the D4RL benchmark demonstrate that the framework attains state-of-the-art performance while remaining computationally efficient. By incorporating uncertainty quantification, our framework offers a promising avenue for alleviating extrapolation error and improving the efficiency of offline RL.
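The paper's code is not reproduced on this page, but the two mechanisms named in the abstract are concrete enough to sketch. The PyTorch snippet below is a minimal illustration rather than the authors' implementation: it shows how per-member rank-one factors over a shared weight matrix can emulate a Q-ensemble at near single-network cost, and how disagreement across members feeds a lower-confidence-bound (LCB) training objective. All names (`RankOneLinear`, `RankOneMIMOQ`, `lcb`), layer sizes, and the `beta` coefficient are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RankOneLinear(nn.Module):
    """Linear layer shared by M ensemble members. Member m perturbs the
    shared weight W with a rank-one factor, W_m = W * outer(r_m, s_m).
    Via the identity (W * r s^T) x = r * (W (s * x)), all members reuse
    one shared matmul, adding only M*(in+out) extra parameters per layer.
    (The shared bias is also scaled by r here, a small simplification.)"""

    def __init__(self, in_dim: int, out_dim: int, num_members: int):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        # Per-member rank-one factors, initialized near 1 so members start
        # close to the shared network and diversify during training.
        self.r = nn.Parameter(1.0 + 0.1 * torch.randn(num_members, out_dim))
        self.s = nn.Parameter(1.0 + 0.1 * torch.randn(num_members, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (M, B, in_dim) -> (M, B, out_dim)
        return self.base(x * self.s.unsqueeze(1)) * self.r.unsqueeze(1)


class RankOneMIMOQ(nn.Module):
    """Q-network emulating an M-member ensemble at roughly the cost of a
    single network, for uncertainty-aware value estimation."""

    def __init__(self, obs_dim: int, act_dim: int,
                 hidden: int = 256, num_members: int = 5):
        super().__init__()
        self.num_members = num_members
        self.l1 = RankOneLinear(obs_dim + act_dim, hidden, num_members)
        self.l2 = RankOneLinear(hidden, hidden, num_members)
        self.head = RankOneLinear(hidden, 1, num_members)

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        x = torch.cat([obs, act], dim=-1)                    # (B, D)
        x = x.unsqueeze(0).expand(self.num_members, -1, -1)  # (M, B, D)
        h = torch.relu(self.l1(x))
        h = torch.relu(self.l2(h))
        return self.head(h).squeeze(-1)                      # (M, B)


def lcb(q_members: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Lower confidence bound over the M member estimates: disagreement
    (std) across members acts as the uncertainty penalty on OOD actions."""
    return q_members.mean(dim=0) - beta * q_members.std(dim=0)


# Usage sketch: the actor is trained to maximize the LCB of its Q-values.
# q_net = RankOneMIMOQ(obs_dim=17, act_dim=6)
# actor_loss = -lcb(q_net(obs, policy(obs)), beta=1.0).mean()
```

One forward pass computes all M member estimates, which is what allows ensemble-style uncertainty quantification at close to the memory and compute footprint of a single Q-network.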

Thanh Nguyen, Tung Luu, Tri Ton, Sungwoong Kim, Chang D. Yoo • 2026

Related benchmarks

Task                           | Dataset                           | Metric                    | Result | Rank
Offline Reinforcement Learning | D4RL walker2d-medium v2           | Normalized Return         | 92.2   | 67
Offline Reinforcement Learning | D4RL halfcheetah-medium-replay v2 | Normalized Score          | 59.9   | 58
Offline Reinforcement Learning | D4RL halfcheetah-expert v2        | Normalized Score          | 105.4  | 56
Offline Reinforcement Learning | D4RL walker2d-expert v2           | Normalized Score          | 112.7  | 56
Offline Reinforcement Learning | D4RL hopper-expert v2             | Normalized Score          | 107.8  | 56
Offline Reinforcement Learning | D4RL hopper-medium-replay v2      | Normalized Return         | 102.9  | 54
Offline Reinforcement Learning | D4RL hopper-medium-expert v2      | Normalized Return         | 111.6  | 49
Offline Reinforcement Learning | D4RL walker2d-medium-expert v2    | Normalized Score          | 112.9  | 44
Offline Reinforcement Learning | D4RL halfcheetah-medium v2        | Average Normalized Return | 68.6   | 43
Offline Reinforcement Learning | D4RL walker2d-medium-replay v2    | Normalized Score          | 100.6  | 36

(10 of 14 rows shown.)
