Kinematic State Abstraction and Provably Efficient Rich-Observation Reinforcement Learning
About
We present an algorithm, HOMER, for exploration and reinforcement learning in rich observation environments that are summarizable by an unknown latent state space. The algorithm interleaves representation learning to identify a new notion of kinematic state abstraction with strategic exploration to reach new states using the learned abstraction. The algorithm provably explores the environment with sample complexity scaling polynomially in the number of latent states and the time horizon, and, crucially, with no dependence on the size of the observation space, which could be infinitely large. This exploration guarantee further enables sample-efficient global policy optimization for any reward function. On the computational side, we show that the algorithm can be implemented efficiently whenever certain supervised learning problems are tractable. Empirically, we evaluate HOMER on a challenging exploration problem, where we show that the algorithm is exponentially more sample efficient than standard reinforcement learning baselines.
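At a high level, HOMER alternates, at each time step of the horizon, between (i) a supervised representation-learning step that recovers the kinematic state abstraction and (ii) a policy-search step that learns to reach each abstract state, building a "policy cover" for the next time step. The sketch below illustrates this loop under stated assumptions; `collect_transitions`, `train_contrastive_classifier`, and `learn_goal_policy` are hypothetical helper names standing in for the paper's supervised-learning and policy-optimization subroutines, not the authors' reference implementation.

```python
# A minimal sketch of HOMER's explore-then-learn loop, assuming hypothetical
# helpers (collect_transitions, train_contrastive_classifier, learn_goal_policy)
# that stand in for the paper's supervised-learning and policy-search oracles.

def homer(env, horizon, num_abstract_states, samples_per_step):
    # Trivial policy cover at step 0: act uniformly at random.
    policy_cover = {0: [lambda obs: env.action_space.sample()]}
    for h in range(horizon):
        # 1) Roll in with the current cover to collect "real" transitions
        #    (x, a, x') and "fake" ones whose x' is drawn from another rollout.
        real, fake = collect_transitions(env, policy_cover[h], h, samples_per_step)

        # 2) Representation learning: fit a classifier to separate real from
        #    fake transitions; its bottleneck maps each observation to one of
        #    num_abstract_states abstract states. Observations the classifier
        #    cannot tell apart are kinematically inseparable and share a state.
        phi = train_contrastive_classifier(real, fake, num_abstract_states)

        # 3) Strategic exploration: for every learned abstract state, learn a
        #    policy that earns internal reward 1 for reaching it, yielding a
        #    policy cover for step h + 1.
        policy_cover[h + 1] = [
            learn_goal_policy(
                env,
                roll_in=policy_cover[h],
                reward=lambda x, target=s: float(phi(x) == target),
            )
            for s in range(num_abstract_states)
        ]
    # Policies that together visit every reachable latent state.
    return policy_cover
```

Both inner steps reduce to standard learning problems (binary classification and reward-driven policy optimization), which is why the sample complexity depends only on the number of latent states and the horizon, never on the size of the observation space; once the final policy cover is built, it can be reused to optimize any external reward function.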
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Offline Reinforcement Learning | V-D4RL (various) | Cheetah-Run Medium | 475 | 8 |
| Exploration | Diabolical Lock H=100 (test) | Mean Farthest Column Reached | 46 | 6 |
| Reaching the farthest column | Diabolical Lock H=100 (5M frames) | Mean Farthest Column Reached | 28 | 5 |
| Reaching the farthest column | Diabolical Lock H=100 (10M frames) | Mean Farthest Column Reached | 37 | 5 |
| Reaching the farthest column | Diabolical Lock H=100 (15M frames) | Mean Farthest Column Reached | 41 | 5 |
| Reaching the farthest column | Diabolical Lock H=100 (20M frames) | Mean Farthest Column Reached | 46 | 5 |