ReCoRe: Regularized Contrastive Representation Learning of World Model
About
While recent model-free Reinforcement Learning (RL) methods have demonstrated human-level effectiveness in gaming environments, their success in everyday tasks such as visual navigation has been limited, particularly under significant appearance variations. This limitation arises from (i) poor sample efficiency and (ii) over-fitting to training scenarios. To address these challenges, we present a world model that learns invariant features using (i) contrastive unsupervised learning and (ii) an intervention-invariant regularizer. Learning an explicit representation of the world dynamics, i.e., a world model, improves sample efficiency, while contrastive learning implicitly enforces learning of invariant features, which improves generalization. However, naïvely adding a contrastive loss to a world model is insufficient, because world-model-based RL methods optimize representation learning and the agent policy independently. To overcome this issue, we propose an intervention-invariant regularizer in the form of an auxiliary task, such as depth prediction, image denoising, or image segmentation, that explicitly enforces invariance to style interventions. Our method outperforms current state-of-the-art model-based and model-free RL methods and significantly improves on out-of-distribution point-goal navigation tasks evaluated on the iGibson benchmark. Using only visual observations, we further show that our approach outperforms recent language-guided foundation models for point-goal navigation, which is essential for deployment on robots with limited computational capabilities. Finally, we demonstrate that our proposed model excels at sim-to-real transfer of its perception module on the Gibson benchmark.
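The combined objective described above — a contrastive term that pulls embeddings of style-augmented views together, plus an auxiliary regularizer (e.g. depth prediction) that anchors the representation to intervention-invariant content — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the InfoNCE form of the contrastive loss, the MSE depth loss, and the weighting `lam` are all assumptions for the sake of the example.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE contrastive loss: row i of z_a should match row i of z_b
    (embeddings of two augmented views of the same observation)."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)  # l2-normalize
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

def total_loss(z_anchor, z_augmented, depth_pred, depth_true, lam=1.0):
    """Contrastive representation loss plus an intervention-invariant
    auxiliary regularizer, sketched here as a depth-prediction MSE."""
    contrastive = info_nce(z_anchor, z_augmented)
    aux = np.mean((depth_pred - depth_true) ** 2)
    return contrastive + lam * aux

# Toy example with random features standing in for encoder outputs.
rng = np.random.default_rng(0)
B, D = 8, 32
z = rng.normal(size=(B, D))
z_aug = z + 0.05 * rng.normal(size=(B, D))      # style-augmented views
depth_pred = rng.normal(size=(B, 16, 16))
depth_true = depth_pred + 0.1 * rng.normal(size=(B, 16, 16))
loss = total_loss(z, z_aug, depth_pred, depth_true, lam=0.5)
print(float(loss))
```

Because depth is unchanged by style interventions (lighting, texture), supervising it jointly with the contrastive term ties the representation used by the policy to invariant scene structure rather than appearance.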
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Point-Goal Navigation | Gibson (held-out scenes) | Average SR (all scenes) | 2.65e+3 | 30 |
| Point-Goal Navigation | iGibson Ihlen 0 int 1.0 (test) | SR | 75.3 | 22 |
| Point-Goal Navigation | iGibson Rs int 1.0 (test) | Success Rate | 7.73e+3 | 22 |
| Point-Goal Navigation | iGibson Env Avg 1.0 (test) | SR | 5.97e+3 | 22 |
| Reinforcement Learning | DMControl Reacher, easy (100k steps) | Total Reward | 982 | 7 |
| Reinforcement Learning | DMControl Walker, walk (100k steps) | Total Reward | 739 | 7 |
| Reinforcement Learning | DMControl Ball in cup, catch (100k steps) | Total Reward | 859 | 7 |
| Reinforcement Learning | DMControl Reacher, easy (500k steps) | Total Reward | 955 | 7 |
| Reinforcement Learning | DMControl Cheetah, run (500k steps) | Total Reward | 731 | 7 |
| Reinforcement Learning | DMControl Walker, walk (500k steps) | Total Reward | 965 | 7 |