
Quantile Q-Learning: Revisiting Offline Extreme Q-Learning with Quantile Regression

About

Offline reinforcement learning (RL) enables policy learning from fixed datasets without further environment interaction, making it particularly valuable in high-risk or costly domains. Extreme $Q$-Learning (XQL) is a recent offline RL method that models Bellman errors using Extreme Value Theory, yielding strong empirical performance. However, XQL and its stabilized variant MXQL suffer from notable limitations: both require extensive hyperparameter tuning specific to each dataset and domain, and both exhibit instability during training. To address these issues, we propose a principled method to estimate the temperature coefficient $\beta$ via quantile regression under mild assumptions. To further improve training stability, we introduce a value regularization technique with mild generalization, inspired by recent advances in constrained value learning. Experimental results demonstrate that the proposed algorithm achieves competitive or superior performance across a range of benchmark tasks, including D4RL and NeoRL2, while maintaining stable training dynamics and using a consistent set of hyperparameters across all datasets and domains.
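The abstract's $\beta$-estimation relies on quantile regression. As a generic illustration of the underlying tool (not the authors' implementation), quantile regression minimizes the pinball loss, whose minimizer over a constant predictor is the $\tau$-quantile of the data. A minimal sketch, with `fit_quantile` as a hypothetical helper name and a brute-force grid search standing in for a real optimizer:

```python
import numpy as np

def pinball_loss(residuals, tau):
    """Pinball (quantile-regression) loss at quantile level tau.

    residuals: array of (target - prediction) values.
    Penalizes under-predictions with weight tau and
    over-predictions with weight (1 - tau).
    """
    residuals = np.asarray(residuals, dtype=float)
    return np.mean(np.maximum(tau * residuals, (tau - 1.0) * residuals))

def fit_quantile(samples, tau, grid):
    """Pick the constant q from `grid` minimizing the pinball loss.

    The minimizer approximates the tau-quantile of `samples`,
    which is the property quantile-regression-based estimators exploit.
    """
    samples = np.asarray(samples, dtype=float)
    losses = [pinball_loss(samples - q, tau) for q in grid]
    return grid[int(np.argmin(losses))]
```

For example, on the samples `0, 1, ..., 100` with `tau = 0.9`, the grid minimizer is 90, the empirical 0.9-quantile. In the paper's setting this scalar machinery would be applied to Bellman-error statistics rather than raw data, but the loss itself is the same.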

Xinming Gao, Shangzhe Li, Yujin Cai, Wenwu Yu • 2025

Related benchmarks

Task: Offline Reinforcement Learning (all rows)

Dataset                              | Metric                    | Result | Rank
D4RL Gym walker2d (medium-replay)    | Normalized Return         | 90.2   | 68
D4RL Gym halfcheetah-medium          | Normalized Return         | 49.5   | 60
D4RL Gym walker2d medium             | Normalized Return         | 85.2   | 58
D4RL antmaze-umaze (diverse)         | Normalized Score          | 81.3   | 47
D4RL Gym hopper (medium-replay)      | Normalized Return         | 101.1  | 44
D4RL Gym halfcheetah-medium-replay   | Normalized Average Return | 46.6   | 43
D4RL Gym hopper-medium               | Normalized Return         | 77.3   | 41
D4RL Adroit pen (human)              | Normalized Return         | 128.3  | 39
D4RL Adroit pen (cloned)             | Normalized Return         | 115.2  | 39
D4RL Gym walker2d medium-expert      | Normalized Average Return | 113.2  | 38

Showing 10 of 19 rows
