PhysBrain: Human Egocentric Data as a Bridge from Vision Language Models to Physical Intelligence
About
Robotic generalization relies on physical intelligence: the ability to reason about state changes, contact-rich interactions, and long-horizon planning under egocentric perception and action. Vision Language Models (VLMs) are essential to Vision-Language-Action (VLA) systems, but their reliance on third-person training data creates a viewpoint gap for humanoid robots. Collecting massive robot-centric data would close this gap, but it is impractical due to cost and diversity constraints. Conversely, human egocentric videos offer a highly scalable data source with rich interaction context, yet the embodiment mismatch prevents their direct use. To bridge this gap, we propose an Egocentric2Embodiment Translation Pipeline that transforms raw human egocentric videos into multi-level, schema-driven embodiment supervision with enforced evidence grounding and temporal consistency, enabling the construction of the Egocentric2Embodiment dataset (E2E-3M) at scale. Training on E2E-3M yields an egocentric-aware embodied brain, termed PhysBrain. PhysBrain exhibits substantially improved egocentric understanding, particularly for planning. It provides an egocentric-aware initialization that enables more sample-efficient VLA fine-tuning and higher success rates, demonstrating effective transfer from human egocentric supervision to downstream robot control.
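To make the schema-driven supervision concrete, below is a minimal sketch of what one translated training record and its validation might look like. All names here (`SupervisionLevel`, `EvidenceSpan`, `E2ERecord`, `validate_record`) are hypothetical illustrations, not the released pipeline's API; the description above specifies only that supervision is multi-level, evidence-grounded, and temporally consistent.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class SupervisionLevel(Enum):
    """Hypothetical levels; the abstract says only 'multi-level'."""
    STATE_CHANGE = "state_change"  # object/scene state transitions
    CONTACT = "contact"            # contact-rich interaction steps
    PLAN = "plan"                  # long-horizon task planning


@dataclass
class EvidenceSpan:
    """Frame interval in the source egocentric video that grounds a label."""
    start_frame: int
    end_frame: int


@dataclass
class E2ERecord:
    """One schema-driven supervision item translated from an egocentric clip."""
    video_id: str
    level: SupervisionLevel
    instruction: str               # e.g. a QA or planning prompt
    answer: str
    evidence: List[EvidenceSpan] = field(default_factory=list)


def validate_record(rec: E2ERecord, num_frames: int) -> bool:
    """Enforce the two constraints named in the abstract: evidence grounding
    (every label cites frames that actually exist) and temporal consistency
    (evidence spans are well-formed and temporally ordered)."""
    if not rec.evidence:
        return False  # every record must be grounded in video evidence
    for span in rec.evidence:
        if not (0 <= span.start_frame < span.end_frame <= num_frames):
            return False
    starts = [s.start_frame for s in rec.evidence]
    return starts == sorted(starts)  # spans in non-decreasing order
```

Under this reading, records that fail validation would be dropped or regenerated before entering E2E-3M, which is one plausible way to enforce grounding and consistency at dataset-construction time.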
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Task Planning | EgoPlan-Benchmark1 (val) | Accuracy | 47.4 | 11 |
| Task Planning | EgoPlan-Benchmark2 | Accuracy | 46.9 | 11 |
| Egocentric Understanding | EgoThink | Action Accuracy | 69 | 11 |
| Robotic Manipulation | RoboCasa GR1 Tabletop Manipulation (test) | PnP Bottle To Cabinet Close | 74 | 6 |