
Vision Language Models are In-Context Value Learners

About

Predicting temporal progress from visual trajectories is important for intelligent robots that can learn, adapt, and improve. However, learning such a progress estimator, or temporal value function, across different tasks and domains requires both a large amount of diverse data and methods that can scale and generalize. To address these challenges, we present Generative Value Learning (GVL), a universal value function estimator that leverages the world knowledge embedded in vision-language models (VLMs) to predict task progress. Naively asking a VLM to predict values for a video sequence performs poorly due to the strong temporal correlation between successive frames. Instead, GVL poses value estimation as a temporal ordering problem over shuffled video frames; this seemingly harder task encourages VLMs to more fully exploit their underlying semantic and temporal grounding capabilities to differentiate frames by perceived task progress, consequently producing significantly better value predictions. Without any robot- or task-specific training, GVL can zero-shot and few-shot predict effective values in context for more than 300 distinct real-world tasks across diverse robot platforms, including challenging bimanual manipulation tasks. Furthermore, we demonstrate that GVL permits flexible multimodal in-context learning via examples from heterogeneous tasks and embodiments, such as human videos. The generality of GVL enables various downstream applications pertinent to visuomotor policy learning, including dataset filtering, success detection, and advantage-weighted regression -- all without any model training or finetuning.
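The core mechanic described above -- shuffling frames before querying the VLM, then mapping predictions back to temporal order -- can be sketched as follows. This is a minimal illustration, not the paper's implementation; `vlm_rank_progress` is a hypothetical stand-in for the actual VLM call, which in GVL is prompted with the shuffled frames plus a task description.

```python
import random

def gvl_values(frames, vlm_rank_progress):
    """Sketch of GVL-style value estimation.

    `vlm_rank_progress` (hypothetical) takes a list of frames in arbitrary
    order and returns a progress estimate in [0, 1] for each frame.
    """
    # Shuffle frames so the VLM cannot exploit the strong temporal
    # correlation between neighbors and must judge each frame on its own.
    order = list(range(len(frames)))
    random.shuffle(order)
    shuffled = [frames[i] for i in order]

    # Query the (hypothetical) VLM on the shuffled sequence.
    shuffled_values = vlm_rank_progress(shuffled)

    # Un-shuffle: map each prediction back to its original temporal index.
    values = [0.0] * len(frames)
    for pos, v in zip(order, shuffled_values):
        values[pos] = v
    return values
```

Few-shot usage would simply prepend annotated example trajectories (possibly from other tasks or embodiments, such as human videos) to the VLM prompt.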

Yecheng Jason Ma, Joey Hejna, Ayzaan Wahid, Chuyuan Fu, Dhruv Shah, Jacky Liang, Zhuo Xu, Sean Kirmani, Peng Xu, Danny Driess, Ted Xiao, Jonathan Tompson, Osbert Bastani, Dinesh Jayaraman, Wenhao Yu, Tingnan Zhang, Dorsa Sadigh, Fei Xia • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Task Progress Estimation | ALFRED | PMAE 6.21 | 15 |
| Task Progress Estimation | Ego4D | PMAE 26.8 | 15 |
| Pairwise progress-judgment | RoboPulse Small hop range | Accuracy (Real) 61 | 11 |
| Pairwise progress-judgment | RoboPulse Medium hop range | Accuracy (Real) 71 | 11 |
| Pairwise progress-judgment | RoboPulse Large hop range | Accuracy (Real) 78 | 11 |
| Pairwise progress-judgment | RoboPulse Overall | Overall Average Accuracy 71 | 11 |
| Trajectory Ranking | RBM OOD 1.0 (test) | Kendall's Tau-a 0.19 | 8 |
| Reward alignment | RBM-EVAL ID | Pearson r (VOC) 0.16 | 8 |
| Reward alignment | RBM-EVAL OOD | Pearson r (VOC) 0.21 | 8 |
| Task Completion Classification | SARM (real-world rollouts) | Average Accuracy 37.2 | 8 |

Showing 10 of 35 rows
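The trajectory-ranking rows above report Kendall's Tau-a, which scores how well a set of predicted values preserves the ground-truth temporal order: tau_a = (C - D) / (n(n-1)/2), where C and D count concordant and discordant pairs and ties count as neither. A minimal reference implementation of this standard statistic (not code from the benchmark itself):

```python
def kendalls_tau_a(pred, true):
    """Kendall's tau-a between two equal-length score lists.

    Returns (C - D) / (n * (n - 1) / 2); 1.0 for perfectly agreeing
    orders, -1.0 for perfectly reversed orders, tied pairs count as
    neither concordant nor discordant.
    """
    n = len(pred)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (pred[i] - pred[j]) * (true[i] - true[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in both lists
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Applied to value estimates such as those GVL produces, `true` would be the frame indices (ground-truth progress order) and `pred` the per-frame predicted values.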
