
Anti-Exploration by Random Network Distillation

About

Despite the success of Random Network Distillation (RND) in various domains, it was previously shown to be insufficiently discriminative to serve as an uncertainty estimator for penalizing out-of-distribution actions in offline reinforcement learning. In this paper, we revisit these results and show that, with a naive choice of conditioning for the RND prior, it becomes infeasible for the actor to effectively minimize the anti-exploration bonus, and that discriminativity is not the issue. We show that this limitation can be avoided with conditioning based on Feature-wise Linear Modulation (FiLM), resulting in a simple and efficient ensemble-free algorithm based on Soft Actor-Critic. We evaluate it on the D4RL benchmark, showing that it achieves performance comparable to ensemble-based methods and outperforms ensemble-free approaches by a wide margin.

Alexander Nikulin, Vladislav Kurenkov, Denis Tarasov, Sergey Kolesnikov • 2023
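The mechanism summarized in the abstract, a fixed random "prior" network whose hidden features are modulated by FiLM conditioning, with the predictor's error against it used as an anti-exploration penalty, can be sketched as follows. This is a minimal NumPy illustration under assumed dimensions, not the authors' implementation: all function names are hypothetical, and in the actual algorithm a separate predictor network is trained by gradient descent to match the prior on in-dataset (state, action) pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(in_dim, hidden, out_dim):
    # A fixed, randomly initialized two-layer MLP; the RND prior is never trained.
    return {
        "W1": rng.standard_normal((in_dim, hidden)) / np.sqrt(in_dim),
        "W2": rng.standard_normal((hidden, out_dim)) / np.sqrt(hidden),
    }

def film_forward(params, film, action, state):
    # Hidden features are computed from the action...
    h = np.tanh(action @ params["W1"])
    # ...and modulated by FiLM: a state-conditioned scale (gamma) and shift (beta).
    gamma = state @ film["Wg"]
    beta = state @ film["Wb"]
    h = gamma * h + beta
    return h @ params["W2"]

def rnd_bonus(prior_out, predictor_out):
    # Anti-exploration bonus: squared prediction error of the trained predictor
    # against the fixed random prior. It stays low on in-dataset pairs and
    # grows for out-of-distribution (state, action) pairs.
    return np.sum((prior_out - predictor_out) ** 2, axis=-1)

# Toy shapes (hypothetical): batch of 5, state dim 17, action dim 6.
state = rng.standard_normal((5, 17))
action = rng.standard_normal((5, 6))
prior = mlp_params(6, 16, 8)
film = {"Wg": rng.standard_normal((17, 16)) * 0.1,
        "Wb": rng.standard_normal((17, 16)) * 0.1}
prior_out = film_forward(prior, film, action, state)
```

In SAC-RND this bonus is subtracted from the critic target, so the actor is penalized for selecting actions the predictor cannot imitate, i.e. actions far from the dataset.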

Related benchmarks

Task: Offline Reinforcement Learning (all rows)

Dataset                        Metric             Result  Rank
hopper medium                  Normalized Score   97.8    52
walker2d medium                Normalized Score   91.6    51
walker2d medium-replay         Normalized Score   88.7    50
hopper medium-replay           Normalized Score   100.5   44
halfcheetah medium             Normalized Score   66.6    43
halfcheetah medium-replay      Normalized Score   54.9    43
D4RL antmaze-umaze (diverse)   Normalized Score   66      40
D4RL Adroit pen (human)        Normalized Return  5.6     32
D4RL Adroit pen (cloned)       Normalized Return  2.5     32
Walker2d medium-expert         Normalized Score   105     31

(Showing 10 of 34 rows.)
