Task-Aware Exploration via a Predictive Bisimulation Metric
About
Accelerating exploration in visual reinforcement learning under sparse rewards remains challenging due to substantial task-irrelevant variations. Despite advances in intrinsic exploration, many methods either assume access to low-dimensional states or lack task-aware exploration strategies, rendering them fragile in visual domains. To bridge this gap, we present TEB, a Task-aware Exploration approach that tightly couples task-relevant representations with exploration through a predictive Bisimulation metric. Specifically, TEB leverages the metric not only to learn behaviorally grounded task representations but also to measure behaviorally intrinsic novelty over the learned latent space. To realize this, we first theoretically mitigate the representation collapse of degenerate bisimulation metrics under sparse rewards by introducing a simple but effective predicted reward differential into the metric. Building on this robust metric, we design potential-based exploration bonuses that measure the relative novelty of adjacent observations in the latent space. Extensive experiments on MetaWorld and Maze2D show that TEB achieves superior exploration ability and outperforms recent baselines.
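To make the bonus design concrete, here is a minimal sketch of a potential-based intrinsic reward over a learned latent space. It is an illustration under simplifying assumptions, not the paper's implementation: the novelty potential `latent_novelty_potential` (distance to the nearest previously visited latent) is a hypothetical stand-in for TEB's bisimulation-metric novelty measure, and the function names are invented for this example.

```python
import numpy as np

def latent_novelty_potential(z, memory):
    """Hypothetical novelty potential Phi(z): Euclidean distance from
    latent z to its nearest neighbor in a memory of visited latents.
    (A stand-in for novelty measured under the bisimulation metric.)"""
    if len(memory) == 0:
        return 0.0
    dists = np.linalg.norm(np.asarray(memory) - z, axis=1)
    return float(dists.min())

def potential_based_bonus(z_t, z_next, memory, gamma=0.99):
    """Potential-based shaping bonus gamma * Phi(z') - Phi(z):
    the relative novelty of adjacent observations in latent space."""
    return (gamma * latent_novelty_potential(z_next, memory)
            - latent_novelty_potential(z_t, memory))

# Usage: a step that moves away from visited latents yields a positive bonus.
memory = [np.zeros(2)]
z_t, z_next = np.array([0.1, 0.0]), np.array([1.0, 1.0])
bonus = potential_based_bonus(z_t, z_next, memory)
```

Because the bonus takes the potential-based form gamma * Phi(s') - Phi(s), it shapes rewards without changing the optimal policy of the underlying task.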
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| State Exploration | Maze2D Square-b | State Coverage Ratio | 85 | 2 |
| State Exploration | Maze2D Square-a | State Coverage Ratio | 87 | 1 |
| State Exploration | Maze2D Square-c | State Coverage Ratio | 74 | 1 |
| State Exploration | Maze2D Square-d | State Coverage Ratio | 0.77 | 1 |
| State Exploration | Maze2D Corridor2 | State Coverage Ratio | 93 | 1 |
| State Exploration | Maze2D Square-tree | State Coverage Ratio | 50 | 1 |
| Robotic Manipulation | MetaWorld | Success Rate (Pick-out-of-hole) | 93.1 | 7 |