
Learning and Deploying Robust Locomotion Policies with Minimal Dynamics Randomization

About

Training deep reinforcement learning (DRL) locomotion policies often requires massive amounts of data to converge to the desired behaviour. In this regard, simulators provide a cheap and abundant source. For successful sim-to-real transfer, exhaustively engineered approaches such as system identification, dynamics randomization, and domain adaptation are generally employed. As an alternative, we investigate a simple strategy of random force injection (RFI) to perturb system dynamics during training. We show that the application of random forces enables us to emulate dynamics randomization, allowing us to obtain locomotion policies that are robust to variations in system dynamics. We further extend RFI, referred to as extended random force injection (ERFI), by introducing an episodic actuation offset. We demonstrate that ERFI provides additional robustness to variations in system mass, offering on average a 53% performance improvement over RFI. We also show that ERFI is sufficient to perform a successful sim-to-real transfer on two different quadrupedal platforms, ANYmal C and Unitree A1, even for perceptive locomotion over uneven terrain in outdoor environments.
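The core idea described in the abstract, injecting per-step random forces (RFI) plus an episodic actuation offset (ERFI), can be sketched as a perturbation applied to commanded joint torques during simulated training. This is a minimal illustration, not the paper's implementation: the function names and the scale parameters (`noise_scale`, `offset_scale`) are hypothetical placeholders for whatever magnitudes the authors tuned.

```python
import numpy as np

def sample_episodic_offset(n_joints, offset_scale, rng):
    """ERFI-style actuation offset: sampled once at the start of an episode
    and held fixed for its duration. offset_scale is a hypothetical
    tuning parameter, not a value from the paper."""
    return rng.uniform(-offset_scale, offset_scale, size=n_joints)

def perturb_torques(commanded_torques, episodic_offset, noise_scale, rng):
    """RFI-style perturbation: a fresh random force is injected at every
    simulation step; adding the per-episode offset gives the ERFI variant."""
    step_noise = rng.uniform(-noise_scale, noise_scale,
                             size=commanded_torques.shape)
    return commanded_torques + step_noise + episodic_offset

# Example: 12 actuated joints, as on a typical quadruped.
rng = np.random.default_rng(0)
offset = sample_episodic_offset(12, offset_scale=1.0, rng=rng)
tau = np.zeros(12)  # torques commanded by the policy at one step
tau_perturbed = perturb_torques(tau, offset, noise_scale=0.5, rng=rng)
```

Because the offset is resampled only between episodes, the policy must learn to compensate for a persistent actuation bias within each rollout, which is what emulates variation in system dynamics (e.g. mass) without explicitly randomizing simulator parameters.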

Luigi Campanaro, Siddhant Gangapurwala, Wolfgang Merkt, Ioannis Havoutis • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Forward locomotion | IsaacGym S1: Standard DR | Success Rate 82.3 | 3 |
| Forward locomotion | IsaacGym S2: Wider DR | Success Rate 77.9 | 3 |
| Humanoid Locomotion | IsaacGym S1: Standard DR, sim-to-real gap, 2048 envs (test) | Success Rate 82.3 | 3 |
| Humanoid Locomotion | IsaacGym S2: Wider DR, sim-to-real gap, 2048 envs (test) | Success Rate 77.9 | 3 |
