Learning the Minimum Action Distance
About
This paper presents a state representation framework for Markov decision processes (MDPs) that can be learned solely from state trajectories, requiring neither reward signals nor the actions executed by the agent. We propose learning the minimum action distance (MAD), defined as the minimum number of actions required to transition between states, as a fundamental metric that captures the underlying structure of an environment. MAD naturally enables critical downstream tasks such as goal-conditioned reinforcement learning and reward shaping by providing a dense, geometrically meaningful measure of progress. Our self-supervised learning approach constructs an embedding space where the distances between embedded state pairs correspond to their MAD, accommodating both symmetric and asymmetric approximations. We evaluate the framework on a comprehensive suite of environments with known MAD values, encompassing deterministic and stochastic dynamics, discrete and continuous state spaces, and noisy observations. Empirical results demonstrate that the proposed approach not only efficiently learns accurate MAD representations across these diverse settings but also significantly outperforms existing state representation methods in representation quality.
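To make the objective concrete, here is a minimal sketch of a symmetric, tabular simplification of the idea on a toy 8-state chain (where the true MAD between states `i` and `j` is `|i - j|`). This is our own illustrative code, not the paper's implementation: the helper names (`ub`, `mad_hat`), the plain SGD loop, and the squared-error loss against the smallest observed temporal gap are all assumptions made for brevity; the paper learns from raw trajectories and also supports asymmetric approximations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, dim = 8, 2  # toy chain; embed states in R^2

def random_walk(start, length):
    # One action moves left or right along the chain (clipped at the ends).
    s, traj = start, [start]
    for _ in range(length):
        s = min(n_states - 1, max(0, s + int(rng.choice([-1, 1]))))
        traj.append(s)
    return traj

# State-only trajectories (no actions, no rewards), as in the paper's setting:
# two full sweeps plus a few random walks.
trajectories = [list(range(n_states)), list(range(n_states - 1, -1, -1))]
trajectories += [random_walk(int(rng.integers(n_states)), 40) for _ in range(10)]

# Smallest temporal gap at which each state pair co-occurs in some trajectory:
# an upper bound on MAD that tightens as more data is seen.
ub = np.full((n_states, n_states), np.inf)
for traj in trajectories:
    for i in range(len(traj)):
        for j in range(i, len(traj)):
            a, b, gap = traj[i], traj[j], j - i
            ub[a, b] = ub[b, a] = min(ub[a, b], gap)

# Fit embeddings so that Euclidean distances match the tightest observed gaps.
E = rng.normal(scale=0.1, size=(n_states, dim))
pairs = [(a, b) for a in range(n_states) for b in range(a + 1, n_states)
         if np.isfinite(ub[a, b])]
lr = 0.05
for _ in range(4000):
    a, b = pairs[int(rng.integers(len(pairs)))]
    diff = E[a] - E[b]
    d = np.linalg.norm(diff) + 1e-8
    grad = 2.0 * (d - ub[a, b]) * diff / d  # d/dE[a] of (d - ub[a, b])^2
    E[a] -= lr * grad
    E[b] += lr * grad

def mad_hat(a, b):
    """Learned (symmetric) MAD estimate between states a and b."""
    return float(np.linalg.norm(E[a] - E[b]))
```

On this chain the learned distances recover the true MAD closely, and the same recipe extends to continuous or noisy observations by replacing the tabular embedding matrix with a neural encoder.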
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Goal-oriented planning | OGBench AntMaze Medium v1 (explore) | Success Rate | 80 | 4 |
| Goal-oriented planning | OGBench PointMaze Giant Navigate v1 | Success Rate | 99 | 4 |
| Goal-oriented planning | OGBench PointMaze Giant v1 (stitch) | Success Rate | 99 | 4 |
| Goal-oriented planning | OGBench PointMaze Large Navigate v1 | Success Rate | 100 | 4 |
| Goal-oriented planning | OGBench PointMaze Large v1 (stitch) | Success Rate | 100 | 4 |
| Goal-oriented planning | OGBench PointMaze Medium Navigate v1 | Success Rate | 100 | 4 |
| Goal-oriented planning | OGBench PointMaze Medium Stitch v1 | Success Rate | 100 | 4 |