Target Entropy Annealing for Discrete Soft Actor-Critic

About

Soft Actor-Critic (SAC) is considered a state-of-the-art algorithm for continuous action space settings. It uses the maximum entropy framework for efficiency and stability, and applies a heuristic temperature Lagrange term to tune the temperature $\alpha$, which determines how "soft" the policy should be. Counter-intuitively, empirical evidence shows that SAC does not perform well in discrete domains. In this paper we investigate possible explanations for this phenomenon and propose Target Entropy Scheduled SAC (TES-SAC), an annealing method for the target entropy parameter applied to SAC. The target entropy is a constant in the temperature Lagrange term that represents the target policy entropy in discrete SAC. We compare our method on Atari 2600 games against SAC with different constant target entropies, and analyze how our scheduling affects SAC.

Yaosheng Xu, Dailin Hu, Litian Liang, Stephen McAleer, Pieter Abbeel, Roy Fox • 2021
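
The abstract describes tuning $\alpha$ via a Lagrange term anchored to a target entropy, and annealing that target over training. Below is a minimal PyTorch sketch of that idea. The linear decay schedule, the start/end fractions of the maximum entropy $\log|A|$, and the helper names (`target_entropy`, `update_alpha`) are illustrative assumptions, not the paper's exact TES-SAC schedule.

```python
# Minimal sketch of target-entropy annealing for a discrete SAC temperature
# update. Assumptions (not from the paper): a linear decay schedule and the
# start/end fractions of the maximum policy entropy log|A|.
import math
import torch

def target_entropy(step, total_steps, num_actions,
                   start_frac=0.98, end_frac=0.3):
    """Anneal the target entropy H_bar from start_frac * log|A| down to
    end_frac * log|A| over training (hypothetical schedule)."""
    max_entropy = math.log(num_actions)  # entropy of the uniform policy
    frac = start_frac + (end_frac - start_frac) * min(step / total_steps, 1.0)
    return frac * max_entropy

# Standard SAC-style parameterization: optimize log(alpha) so alpha > 0.
log_alpha = torch.zeros(1, requires_grad=True)
alpha_opt = torch.optim.Adam([log_alpha], lr=3e-4)

def update_alpha(action_probs, step, total_steps):
    """One gradient step on J(alpha) = E[ alpha * (H(pi(.|s)) - H_bar) ].

    action_probs: (batch, |A|) policy probabilities for a batch of states.
    If the policy's entropy exceeds the annealed target, alpha shrinks;
    if it falls below, alpha grows, pulling entropy back toward the target.
    """
    entropy = -(action_probs * action_probs.clamp_min(1e-8).log()).sum(-1)
    h_bar = target_entropy(step, total_steps, action_probs.shape[-1])
    alpha_loss = (log_alpha.exp() * (entropy - h_bar).detach()).mean()
    alpha_opt.zero_grad()
    alpha_loss.backward()
    alpha_opt.step()
    return log_alpha.exp().item()
```

Under these assumptions, a high early target keeps the policy near-uniform for exploration, and the decaying target gradually permits a more deterministic, exploitative policy; the paper compares such scheduling against fixed targets on Atari 2600.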

Related benchmarks

Task                              Dataset                                Result                   Rank
Continual Learning                Atari Normalized continual learning    Max Performance: 0.18    9
Continual Reinforcement Learning  CoinRun Normalized Continual Learning  Max Performance: 0.9     9
Continual Reinforcement Learning  CoinRun                                Forgetting: -0.012       3
Continual Reinforcement Learning  CoinRun Reversed task order            Forgetting: -0.029       3
Continual Reinforcement Learning  Atari Default task order               Forgetting: 0.194        3
Continual Reinforcement Learning  CoinRun Two-cycle (train)              C1 Final Score: 0.022    3
Continual Reinforcement Learning  Atari Reversed task order              Forgetting: 0.039        3
Continual Reinforcement Learning  Atari Two-cycle (train)                C1 Forward Score: 0.194  3
