
Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control

About

Sample efficiency in Reinforcement Learning (RL) has traditionally been driven by algorithmic enhancements. In this work, we demonstrate that scaling can also lead to substantial improvements. We conduct a thorough investigation into the interplay of scaling model capacity and domain-specific RL enhancements. These empirical findings inform the design choices underlying our proposed BRO (Bigger, Regularized, Optimistic) algorithm. The key innovation behind BRO is that strong regularization allows for effective scaling of the critic networks, which, paired with optimistic exploration, leads to superior performance. BRO achieves state-of-the-art results, significantly outperforming the leading model-based and model-free algorithms across 40 complex tasks from the DeepMind Control, MetaWorld, and MyoSuite benchmarks. BRO is the first model-free algorithm to achieve near-optimal policies in the notoriously challenging Dog and Humanoid tasks.
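The abstract does not specify how optimism is implemented. As a generic, hedged illustration only (not necessarily BRO's exact mechanism), optimistic exploration in continuous control is commonly realized by acting on an upper-confidence value estimate built from the disagreement of an ensemble of critics; the function name and the `beta` coefficient below are illustrative assumptions:

```python
import numpy as np

def optimistic_q(q_values: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """Upper-confidence Q estimate from an ensemble of critics.

    q_values: shape (n_critics, n_actions), one row of Q estimates per critic.
    beta: optimism coefficient; beta=0 recovers the plain ensemble mean.
    """
    mean = q_values.mean(axis=0)  # consensus value estimate
    std = q_values.std(axis=0)    # epistemic disagreement across critics
    return mean + beta * std      # uncertain actions receive a bonus

# Toy example: the two critics disagree more on action 1 than on action 0,
# so action 1 is boosted under the optimistic estimate.
ensemble = np.array([[1.0, 0.5],
                     [1.0, 1.5]])
print(optimistic_q(ensemble, beta=1.0))  # → [1.  1.5]
```

With `beta > 0`, actions whose value the critics are uncertain about look more attractive, which drives directed exploration; annealing `beta` toward zero recovers the standard (pessimistic or mean) estimate used for exploitation.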

Michal Nauman, Mateusz Ostaszewski, Krzysztof Jankowski, Piotr Miłoś, Marek Cygan • 2024

Related benchmarks

Task | Dataset | Result (IQM) | Rank
Locomotion | Dog & Humanoid suite | 0.864 | 32
Dexterous Manipulation | MyoSuite | 0.98 | 28
Humanoid Locomotion and Manipulation | HumanoidBench | 0.53 | 28
Continuous Control | DeepMind Control (DMC) Suite (100k steps) | 0.294 | 8
Continuous Control | DeepMind Control (DMC) Suite (200k steps) | 51.9 | 8
Continuous Control | DeepMind Control (DMC) Suite (1M steps) | 84.6 | 8
Continuous Control | DeepMind Control (DMC) Suite (500k steps) | 54.2 | 8
