
E-SDS: Environment-aware See it, Do it, Sorted - Automated Environment-Aware Reinforcement Learning for Humanoid Locomotion

About

Vision-language models (VLMs) show promise in automating reward design for humanoid locomotion, which could eliminate the need for tedious manual engineering. However, current VLM-based methods are essentially "blind": they lack the environmental perception required to navigate complex terrain. We present E-SDS (Environment-aware See it, Do it, Sorted), a framework that closes this perception gap. E-SDS integrates VLMs with real-time terrain sensor analysis to automatically generate reward functions, grounded by example videos, that enable training of robust perceptive locomotion policies. Evaluated on a Unitree G1 humanoid across four distinct terrains (simple, gaps, obstacles, stairs), E-SDS uniquely enabled successful stair descent, while policies trained with manually designed rewards or a non-perceptive automated baseline were unable to complete the task. Across all terrains, E-SDS also reduced velocity tracking error by 51.9-82.6%. Our framework reduces the human effort of reward design from days to less than two hours while simultaneously producing more robust and capable locomotion policies.
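To illustrate the idea of terrain-conditioned reward generation described above, here is a minimal Python sketch. All function names, reward terms, and weights are illustrative assumptions for exposition; the abstract does not specify the actual reward structure E-SDS produces.

```python
import math

def velocity_tracking_reward(v_cmd, v_actual, sigma=0.25):
    """Exponential velocity-tracking term, a common shape in locomotion RL.

    Returns 1.0 when the actual velocity matches the commanded velocity
    and decays toward 0 as the squared error grows. `sigma` is an
    assumed temperature parameter, not a value from the paper.
    """
    err = (v_cmd - v_actual) ** 2
    return math.exp(-err / sigma)

def select_reward_terms(terrain_label):
    """Terrain-aware selection of reward terms.

    Sketches how perception (a terrain label from sensor analysis) could
    condition which terms a VLM-generated reward function combines.
    The term names and terrain mapping here are hypothetical.
    """
    base = ["velocity_tracking", "upright_posture"]
    extra = {
        "simple": [],
        "gaps": ["foot_clearance"],
        "obstacles": ["foot_clearance", "collision_penalty"],
        "stairs": ["step_height_matching", "foot_placement"],
    }
    return base + extra.get(terrain_label, [])
```

For example, a stair terrain would add step-matching terms that a terrain-blind reward generator has no basis to include, which is the gap the perception component is meant to close.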

Enis Yalcin, Joshua O'Hara, Maria Stamatopoulou, Chengxu Zhou, Dimitrios Kanoulas • 2025

Related benchmarks

Task                  Dataset              Result                          Rank
Humanoid Locomotion   Stair Terrain        Locomotion Quality: 0.412       3
Humanoid Locomotion   Gap Terrain (test)   Velocity (m/s): 0.66            3
Humanoid Locomotion   Simple Terrain       Velocity Tracking (m/s): 0.387  3
Humanoid Locomotion   Obstacle Terrain     Velocity Tracking (m/s): 0.492  3
