
GB-DQN: Gradient Boosted DQN Models for Non-stationary Reinforcement Learning

About

Non-stationary environments pose a fundamental challenge for deep reinforcement learning, as changes in dynamics or rewards invalidate learned value functions and cause catastrophic forgetting. We propose Gradient-Boosted Deep Q-Networks (GB-DQN), an adaptive ensemble method that addresses model drift through incremental residual learning. Instead of retraining a single Q-network, GB-DQN constructs an additive ensemble in which each new learner is trained to approximate the Bellman residual of the current ensemble after drift. We provide theoretical results showing that each boosting step reduces the empirical Bellman residual and that the ensemble converges to the post-drift optimal value function under standard assumptions. Experiments across a diverse set of control tasks with controlled dynamics changes demonstrate faster recovery, improved stability, and greater robustness compared to DQN and common non-stationary baselines.
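The core boosting step can be illustrated on a toy tabular MDP. This is a minimal sketch of the idea only, not the paper's DQN implementation: the ensemble Q-function is an additive sum of learners, and after a reward drift a new learner is fit to the Bellman residual of the current ensemble (in the tabular case this fit is exact, so each step contracts the residual by a factor of at most gamma). All sizes and the random MDP below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 6, 3, 0.9

# Random tabular MDP: transition kernel P[s, a, s'] and reward table R[s, a]
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((S, A))

def bellman_op(Q, R):
    # (T Q)(s, a) = R(s, a) + gamma * E_{s'}[ max_{a'} Q(s', a') ]
    return R + gamma * np.einsum("san,n->sa", P, Q.max(axis=1))

def ensemble_q(learners):
    # Additive ensemble: Q(s, a) = sum of all boosted learners
    return np.sum(learners, axis=0)

# Train the initial learner to (near) convergence via value iteration
Q0 = np.zeros((S, A))
for _ in range(200):
    Q0 = bellman_op(Q0, R)
learners = [Q0]

# Simulate drift: the reward function changes, invalidating the ensemble
R_drift = rng.random((S, A))

# One boosting step: fit a new learner to the ensemble's Bellman residual
Q = ensemble_q(learners)
res_before = np.abs(bellman_op(Q, R_drift) - Q).max()
h = bellman_op(Q, R_drift) - Q   # tabular case: residual fit is exact
learners.append(h)

Q = ensemble_q(learners)
res_after = np.abs(bellman_op(Q, R_drift) - Q).max()
print(res_before, res_after)     # residual shrinks after the boosting step
```

In function-approximation settings the residual fit is inexact (each learner is a small Q-network regressed onto the residual targets), but the same contraction argument underlies the paper's claim that each boosting step reduces the empirical Bellman residual.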

Chang-Hwan Lee, Chanseung Lee • 2025

Related benchmarks

Task                     Dataset                    Result               Rank
Reinforcement Learning   Acrobot v1                 Mean Return: -140.2  14
Reinforcement Learning   Hopper v5 (strong-drift)   Final Return: 20.24  5
