
Thinker: A vision-language foundation model for embodied intelligence

About

When large vision-language models are applied to robotics, they encounter problems that are simple for humans yet error-prone for models, such as confusing third-person and first-person perspectives and overlooking information at the end of videos during temporal reasoning. To address these challenges, we propose Thinker, a large vision-language foundation model designed for embodied intelligence. We tackle these issues from two directions. First, we construct a large-scale dataset tailored for robotic perception and reasoning, encompassing ego-view videos, visual grounding, spatial understanding, and chain-of-thought data. Second, we introduce a simple yet effective approach that substantially enhances the model's capacity for video comprehension by jointly feeding key frames and the full video sequence as inputs. Our model achieves state-of-the-art results on two of the most commonly used benchmark datasets for task planning.
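The joint-input idea can be illustrated with a minimal sketch. The paper does not specify its key-frame selector, so the frame-difference scoring below is an assumption for illustration only; the point is that the model receives both uniformly sampled frames covering the whole sequence (including its ending) and additional high-salience key frames.

```python
import numpy as np

def sample_joint_input(video, num_uniform=32, num_key=8):
    """Build a joint visual input from a video of shape (T, H, W, C).

    Combines uniformly sampled frames spanning the full sequence with
    extra key frames. Key frames here are chosen by a simple
    frame-difference score -- an illustrative assumption, not the
    paper's method.
    """
    T = len(video)
    # Uniform coverage of the whole sequence, so the final frames
    # (often dropped by naive samplers) are always represented.
    uniform_idx = np.linspace(0, T - 1, num=min(num_uniform, T)).round().astype(int)
    # Key frames: largest mean absolute change from the previous frame.
    diffs = np.abs(np.diff(video.astype(np.float32), axis=0)).mean(axis=(1, 2, 3))
    key_idx = np.argsort(diffs)[-num_key:] + 1  # +1: diff i scores frame i+1
    # Merge, deduplicate, and keep temporal order.
    idx = np.sort(np.unique(np.concatenate([uniform_idx, key_idx])))
    return video[idx], idx
```

In a full pipeline, the selected frames would be encoded by the vision tower and interleaved with text tokens; this sketch only shows the frame-selection step.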

Baiyu Pan, Daqin Luo, Junpeng Yang, Jiyuan Wang, Yixuan Zhang, Hailin Shi, Jichao Jiao • 2026

Related benchmarks

Task                             | Dataset                 | Metric                   | Result | Rank
Egocentric Action Planning       | EgoPlan-Bench v2 (test) | Daily life Success Rate  | 63.78  | 7
Robotic Video Question Answering | RoboVQA (test)          | BLEU-1                   | 72.7   | 6
