Empowering Embodied Visual Tracking with Visual Foundation Models and Offline RL
About
Embodied visual tracking is the task of following a target object through dynamic 3D environments using an agent's egocentric vision. This is a vital yet challenging skill for embodied agents, and existing methods suffer from inefficient training and poor generalization. In this paper, we propose a novel framework that combines visual foundation models (VFMs) and offline reinforcement learning (offline RL) to empower embodied visual tracking. We use a pre-trained VFM, such as "Tracking Anything", to extract semantic segmentation masks from text prompts. We then train a recurrent policy network with offline RL, e.g., Conservative Q-Learning, to learn from collected demonstrations without online interaction. To further improve the robustness and generalization of the policy network, we also introduce a mask re-targeting mechanism and a multi-level data collection strategy. In this way, we can train a robust policy within an hour on a consumer-level GPU, e.g., an Nvidia RTX 3090. We evaluate our agent in several high-fidelity environments with challenging situations, such as distraction and occlusion. The results show that our agent outperforms state-of-the-art methods in terms of sample efficiency, robustness to distractors, and generalization to unseen scenarios and targets. We also demonstrate the transferability of the learned agent from virtual environments to a real-world robot.
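To make the offline RL component concrete, the snippet below is a minimal NumPy sketch of the Conservative Q-Learning (CQL) objective mentioned above: a standard TD error plus a conservative penalty that pushes down Q-values over all actions while pushing up the Q-values of actions actually taken in the demonstrations. The array shapes, the `alpha` weight, and the use of a max-backup TD target are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cql_loss(q_values, actions, rewards, next_q_values, gamma=0.99, alpha=1.0):
    """CQL-style loss on a batch of offline transitions (illustrative sketch).

    q_values:      (B, A) Q(s, .) from the policy network
    actions:       (B,)   actions taken in the demonstrations
    rewards:       (B,)   rewards from the demonstrations
    next_q_values: (B, A) Q(s', .) from a target network
    """
    batch = np.arange(len(actions))
    q_taken = q_values[batch, actions]

    # Standard TD error against a max-backup target (target net assumed frozen).
    td_target = rewards + gamma * next_q_values.max(axis=1)
    td_error = np.mean((q_taken - td_target) ** 2)

    # Conservative term: logsumexp over all actions minus Q on dataset actions,
    # which penalizes overestimation of out-of-distribution actions.
    logsumexp = np.log(np.exp(q_values).sum(axis=1))
    conservative = np.mean(logsumexp - q_taken)

    return td_error + alpha * conservative
```

In the full method this loss would be applied to the outputs of the recurrent policy network over sequences of VFM segmentation masks; the sketch only shows the per-batch objective.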
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Embodied Visual Tracking | EVT-Bench Distracted Tracking | SR: 15.7 | 11 |
| Embodied Visual Tracking | EVT-Bench Single Target Tracking | SR: 32.5 | 11 |
| Person-Following | EVT-Bench single view (Distracted Tracking) | SR: 15.7 | 9 |
| Person-Following | EVT-Bench Single-Target Tracking (STT) single view | SR: 32.5 | 9 |
| Person-Following | EVT-Bench Ambiguity Tracking (AT) single view | SR: 18.3 | 8 |

SR denotes Success Rate.