Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL
About
The divergence of Q-value estimation has been a prominent issue in offline RL, where the agent has no access to real dynamics. Traditional beliefs attribute this instability to querying out-of-distribution actions when bootstrapping value targets. Although this issue can be alleviated with policy constraints or conservative Q estimation, a theoretical understanding of the underlying mechanism causing the divergence has been absent. In this work, we aim to thoroughly comprehend this mechanism and attain an improved solution. We first identify a fundamental pattern, self-excitation, as the primary cause of Q-value estimation divergence in offline RL. Then, we propose a novel Self-Excite Eigenvalue Measure (SEEM) metric based on the Neural Tangent Kernel (NTK) to measure the evolving properties of the Q-network during training, which provides an intriguing explanation for the emergence of divergence. For the first time, our theory can reliably decide at an early stage whether the training will diverge, and even predict the order of growth of the estimated Q-value, the model's norm, and the crashing step when an SGD optimizer is used. The experiments demonstrate perfect alignment with this theoretical analysis. Building on our insights, we propose to resolve divergence from a novel perspective, namely improving the model's architecture for better extrapolation behavior. Through extensive empirical studies, we identify LayerNorm as a good solution that effectively avoids divergence without introducing detrimental bias, leading to superior performance. Experimental results show that it still works in some of the most challenging settings, i.e., using only 1% of the transitions in the dataset, where all previous methods fail. Moreover, it can be easily plugged into modern offline RL methods and achieves SOTA results on many challenging tasks. We also give unique insights into its effectiveness.
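To make the architectural fix concrete, below is a minimal sketch, assuming a PyTorch-style setup, of a critic MLP with LayerNorm inserted after each hidden linear layer, which is the kind of modification the abstract refers to. The class name `QNetwork`, the layer sizes, and the `use_layernorm` flag are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (assumption: PyTorch; class name, sizes, and flag are illustrative).
# Inserting LayerNorm after each hidden layer of the Q-network is the kind of
# architectural change discussed above for taming value extrapolation.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """MLP critic Q(s, a); LayerNorm optionally follows every hidden layer."""

    def __init__(self, state_dim: int, action_dim: int,
                 hidden_dim: int = 256, use_layernorm: bool = True):
        super().__init__()
        layers, in_dim = [], state_dim + action_dim
        for _ in range(2):  # two hidden layers, a common critic size
            layers.append(nn.Linear(in_dim, hidden_dim))
            if use_layernorm:
                layers.append(nn.LayerNorm(hidden_dim))
            layers.append(nn.ReLU())
            in_dim = hidden_dim
        layers.append(nn.Linear(in_dim, 1))  # scalar Q-value head
        self.net = nn.Sequential(*layers)

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


# Usage example: a forward pass with random inputs.
q = QNetwork(state_dim=17, action_dim=6)
print(q(torch.randn(32, 17), torch.randn(32, 6)).shape)  # torch.Size([32, 1])
```

Because the change is confined to the critic's architecture, a sketch like this can be dropped into an existing offline RL pipeline without altering the training objective.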
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hand Manipulation | Adroit door-human | Normalized Avg Score | 11.8 | 33 |
| Hand Manipulation | Adroit door-cloned | Normalized Score | -0.2 | 23 |
| Offline Reinforcement Learning | D4RL AntMaze v2 (various) | UMaze Success Rate | 48.1 | 20 |
| Hammer | Adroit Hammer Human v0 | Normalized Score | 5.4 | 19 |
| Pen | Adroit Pen v0 (Cloned) | Normalized Score | 3.25e+3 | 19 |
| Pen | Adroit Pen Human v0 | Normalized Score | 45.3 | 19 |
| Hammer | Adroit Hammer Cloned v0 | Normalized Score | 0.6 | 19 |
| Relocate | Adroit Relocate Cloned v0 | Normalized Score | -0.2 | 19 |
| Offline Reinforcement Learning | D4RL Maze2D | Return (Dense, UMaze) | 86.5 | 15 |
| Offline Reinforcement Learning | antmaze umaze-diverse v0 | Avg Normalized Score | 88.5 | 14 |