SDS -- See it, Do it, Sorted: Quadruped Skill Synthesis from Single Video Demonstration

About

Imagine a robot learning locomotion skills from any single video, without labels or reward engineering. We introduce SDS ("See it. Do it. Sorted."), an automated pipeline for skill acquisition from unstructured demonstrations. Using GPT-4o, SDS applies novel prompting techniques, in the form of a spatio-temporal grid-based visual encoding ($G_{v}$) and structured input decomposition (SUS), to produce executable reward functions (RFs) from the raw input videos. The RFs are used to train PPO policies and are optimized through closed-loop evolution, using training footage and performance metrics as self-supervised signals. SDS allows quadrupeds (e.g., the Unitree Go1) to learn four gaits -- trot, bound, pace, and hop -- achieving 100% gait-matching fidelity, Dynamic Time Warping (DTW) distances on the order of $10^{-6}$, and stable locomotion with zero failures, both in simulation and in the real world. SDS generalizes to morphologically different quadrupeds (e.g., ANYmal) and outperforms prior work in data efficiency, training time, and engineering effort. Further materials and the code are open-source at: https://rpl-cs-ucl.github.io/SDSweb/.
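The abstract reports gait fidelity via Dynamic Time Warping (DTW) distance between the demonstrated and learned gait signals. As a rough illustration of that metric, here is a minimal textbook DTW implementation; the signal names and the use of per-frame foot-contact features are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np

def dtw_distance(ref, gen):
    """Dynamic Time Warping distance between two 1-D gait signals.

    ref, gen: sequences of per-frame gait features (e.g., foot-contact
    phases extracted from the demonstration video and the trained policy).
    A distance near zero means the two gait patterns align almost exactly.
    """
    n, m = len(ref), len(gen)
    # cost[i, j] = minimal cumulative cost of aligning ref[:i] with gen[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - gen[j - 1])  # local mismatch at (i, j)
            cost[i, j] = d + min(cost[i - 1, j],      # skip a ref frame
                                 cost[i, j - 1],      # skip a gen frame
                                 cost[i - 1, j - 1])  # match both frames
    return cost[n, m]

# A policy reproducing the demonstrated gait exactly gives distance 0;
# phase-shifted but similar gaits give small values.
print(dtw_distance([0, 1, 1, 0], [0, 1, 1, 0]))  # → 0.0
```

DTW is a natural choice here because it tolerates small timing offsets between the demonstration and the rollout, so a correctly reproduced gait is not penalized for being slightly faster or slower than the video.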

Maria Stamatopoulou, Jeffrey Li, Dimitrios Kanoulas • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Humanoid Locomotion | Simple Terrain | Velocity Tracking (m/s) | 0.549 | 3 |
| Humanoid Locomotion | Obstacle Terrain | Velocity Tracking (m/s) | 0.621 | 3 |
| Humanoid Locomotion | Stair Terrain | Locomotion Quality | 0.342 | 3 |
| Humanoid Locomotion | Gap Terrain (test) | Velocity (m/s) | 0.577 | 3 |
