
Visual Imitation Enables Contextual Humanoid Control

About

How can we teach humanoids to climb staircases and sit on chairs using the surrounding environment context? Arguably, the simplest way is to just show them: casually capture a human motion video and feed it to the humanoids. We introduce VIDEOMIMIC, a real-to-sim-to-real pipeline that mines everyday videos, jointly reconstructs the humans and the environment, and produces whole-body control policies for humanoid robots that perform the corresponding skills. We demonstrate the results of our pipeline on real humanoid robots, showing robust, repeatable contextual control such as staircase ascents and descents, sitting and standing from chairs and benches, and other dynamic whole-body skills, all from a single policy conditioned on the environment and global root commands. VIDEOMIMIC offers a scalable path toward teaching humanoids to operate in diverse real-world environments.
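The abstract describes a two-stage real-to-sim-to-real pipeline: first jointly reconstruct the human motion and the environment geometry from everyday video, then train a single whole-body policy that is conditioned on the environment and a global root command rather than on any one reference clip. A minimal sketch of that data flow, with all function and field names as illustrative placeholders (not the authors' actual API):

```python
from dataclasses import dataclass


@dataclass
class Reconstruction:
    """Output of the real-to-sim stage: human motion plus scene geometry."""
    human_motion: list  # per-frame global root pose (placeholder representation)
    scene_mesh: str     # environment geometry label (placeholder for a mesh)


def reconstruct(video_frames):
    """Real-to-sim: jointly recover the human trajectory and the
    surrounding environment from a monocular video clip."""
    return Reconstruction(
        human_motion=[{"frame": i, "root": (0.0, 0.0, 0.1 * i)}
                      for i in range(len(video_frames))],
        scene_mesh="staircase_mesh",
    )


def train_policy(reconstructions):
    """Sim-to-real: train one whole-body policy on all reconstructed
    clips. At deployment the policy is conditioned on the environment
    and a global root command, not on a single reference motion."""
    def policy(scene_mesh, root_command):
        # Placeholder controller: in the real pipeline this would emit
        # whole-body joint targets from an RL-trained network.
        return {"scene": scene_mesh,
                "command": root_command,
                "action": "joint_targets"}
    return policy


# Usage: one policy handles multiple contexts (stairs, chairs, ...).
clips = [["f0", "f1", "f2"], ["g0", "g1"]]
policy = train_policy([reconstruct(c) for c in clips])
step = policy("staircase_mesh", root_command=(1.0, 0.0, 0.0))
```

The key design point the sketch mirrors is that training consumes many reconstructed clips but produces a single policy, so new skills come from adding videos, not from adding controllers.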

Arthur Allshire, Hongsuk Choi, Junyi Zhang, David McAllister, Anthony Zhang, Chung Min Kim, Trevor Darrell, Pieter Abbeel, Jitendra Malik, Angjoo Kanazawa · 2025

Related benchmarks

Task                                    Dataset                         Metric             Result   Rank
Human-Scene Interaction Fidelity        PROX (11 sequences)             CDbi               0.337    6
World-grounded Human Motion Recovery    EMDB subset-2 (20 sequences)    W-MPJPE100         505.3    6
Human Trajectory Reconstruction         SLOPER4D (test)                 WA-MPJPE           112.1    4
Humanoid Policy Simulation              PROX & EMDB Aggregate           Success Rate       44.8     4
Scene Geometry Reconstruction           SLOPER4D (test)                 Chamfer Distance   0.75     3
