Vision Language Models are In-Context Value Learners

About

Predicting temporal progress from visual trajectories is important for intelligent robots that can learn, adapt, and improve. However, learning such a progress estimator, or temporal value function, across different tasks and domains requires both a large amount of diverse data and methods that can scale and generalize. To address these challenges, we present Generative Value Learning (GVL), a universal value function estimator that leverages the world knowledge embedded in vision-language models (VLMs) to predict task progress. Naively asking a VLM to predict values for a video sequence performs poorly due to the strong temporal correlation between successive frames. Instead, GVL poses value estimation as a temporal ordering problem over shuffled video frames; this seemingly more challenging task encourages VLMs to more fully exploit their underlying semantic and temporal grounding capabilities to differentiate frames based on their perceived task progress, consequently producing significantly better value predictions. Without any robot- or task-specific training, GVL can zero-shot and few-shot predict effective values in context for more than 300 distinct real-world tasks across diverse robot platforms, including challenging bimanual manipulation tasks. Furthermore, we demonstrate that GVL permits flexible multi-modal in-context learning via examples from heterogeneous tasks and embodiments, such as human videos. The generality of GVL enables various downstream applications pertinent to visuomotor policy learning, including dataset filtering, success detection, and advantage-weighted regression -- all without any model training or finetuning.
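The core inference loop described in the abstract is simple to sketch. Below is a minimal, illustrative implementation of shuffled-frame value prediction, assuming a placeholder `query_vlm` callable that returns one task-progress estimate per input frame; the function names and prompting details are assumptions, not the authors' API.

```python
import random

def gvl_predict_values(frames, task_description, query_vlm):
    """Sketch of Generative Value Learning (GVL) inference.

    `frames` is a list of video frames from one episode; `query_vlm` is a
    placeholder for a vision-language-model call that, given shuffled frames
    and a task description, returns one estimated completion percentage per
    shuffled frame. Both names are illustrative, not the authors' API.
    """
    # 1. Shuffle the frames so the VLM cannot exploit the strong temporal
    #    correlation between successive frames, and instead must judge each
    #    frame's task progress on its own.
    order = list(range(len(frames)))
    random.shuffle(order)
    shuffled = [frames[i] for i in order]

    # 2. Ask the VLM for per-frame task-progress estimates on the shuffled
    #    sequence.
    shuffled_values = query_vlm(shuffled, task_description)

    # 3. Un-shuffle the predictions back into chronological order to recover
    #    the value trajectory for the original episode.
    values = [0.0] * len(frames)
    for pos, i in enumerate(order):
        values[i] = shuffled_values[pos]
    return values
```

Few-shot use would extend the same query with in-context examples (annotated frames from other tasks or embodiments, such as human videos) before the shuffled target frames.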

Yecheng Jason Ma, Joey Hejna, Ayzaan Wahid, Chuyuan Fu, Dhruv Shah, Jacky Liang, Zhuo Xu, Sean Kirmani, Peng Xu, Danny Driess, Ted Xiao, Jonathan Tompson, Osbert Bastani, Dinesh Jayaraman, Wenhao Yu, Tingnan Zhang, Dorsa Sadigh, Fei Xia • 2024

Related benchmarks

Task                           | Dataset                        | Metric           | Result | Rank
-------------------------------|--------------------------------|------------------|--------|-----
Task Completion Classification | SARM (real-world rollouts)     | Average Accuracy | 37.2   | 8
Progress Estimation            | Open X-Embodiment              | Mean VOC Score   | 0.541  | 6
Reward Modeling                | ManiRewardBench Lerobot        | Mean VOC         | 0.62   | 6
Reward Modeling                | ManiRewardBench Franka         | Mean VOC         | 69.5   | 6
Reward Modeling                | ManiRewardBench Bimanual YAM   | Mean VOC         | 56.6   | 6
Reward Modeling                | ManiRewardBench Single-arm YAM | Mean VOC         | 0.752  | 6
Video Frame Rank-Correlation   | RoboBrain-X                    | VOC (Sparse)     | 0.32   | 6
Video Frame Rank-Correlation   | LIBERO                         | VOC (Sparse)     | 43     | 6
Video Frame Rank-Correlation   | RoboCasa                       | VOC (Sparse)     | 0.06   | 6
Video Frame Rank-Correlation   | RoboTwin 2.0                   | VOC (Sparse)     | 28     | 6

(Showing 10 of 14 rows.)
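Most rows above report a Value-Order Correlation (VOC). As a minimal sketch, assuming VOC is the rank correlation between a model's per-frame value predictions and the true chronological frame order (the evaluation metric introduced in the GVL paper), it can be computed as follows; `value_order_correlation` is an illustrative name.

```python
from scipy.stats import spearmanr

def value_order_correlation(predicted_values):
    """Rank correlation between predicted values and chronological order.

    For a successful episode, task progress should increase over time, so a
    good value predictor yields values whose ranking matches the frame
    indices (VOC near 1); uninformative predictions give VOC near 0.
    """
    timesteps = list(range(len(predicted_values)))
    correlation, _ = spearmanr(predicted_values, timesteps)
    return correlation

# Example: a perfectly monotone value trajectory gives VOC = 1.0.
# value_order_correlation([0, 10, 35, 70, 100])  -> 1.0
```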
