
Mean Actor-Critic

About

We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. We prove that this approach reduces variance in the policy gradient estimate relative to traditional actor-critic methods. We show empirical results on two control domains and on six Atari games, where MAC is competitive with state-of-the-art policy search algorithms.
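The key idea is that, for a discrete action space, the policy gradient can be computed as an explicit sum over all actions weighted by the critic's Q-values, rather than a sample over only the executed action. Below is a minimal sketch of that estimator for a hypothetical linear-softmax policy; the feature representation, variable names, and critic values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over action logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def mac_policy_gradient(theta, states, q_values):
    """MAC-style gradient estimate (illustrative sketch).

    For each visited state, sums the critic's Q-value over ALL actions
    weighted by the gradient of the policy, instead of using only the
    sampled action as in standard actor-critic.

    theta:    (d, A) weights of a linear-softmax policy (assumed form)
    states:   (N, d) state feature vectors
    q_values: (N, A) critic estimates Q(s, a) for every action
    """
    grad = np.zeros_like(theta)
    for x, q in zip(states, q_values):
        pi = softmax(x @ theta)  # action probabilities, shape (A,)
        for a in range(theta.shape[1]):
            # d pi(a|s) / d logit_b = pi_a * (1[a == b] - pi_b)
            dpi = -pi[a] * pi
            dpi[a] += pi[a]
            # accumulate Q(s, a) * grad_theta pi(a|s)
            grad += np.outer(x, dpi) * q[a]
    return grad / len(states)
```

Because the sum runs over every action at each state, the estimate does not depend on which action was sampled, which is the source of the variance reduction the paper proves.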

Cameron Allen, Kavosh Asadi, Melrose Roderick, Abdel-rahman Mohamed, George Konidaris, Michael Littman • 2017

Related benchmarks

Task                    Dataset                                  Metric      Result    Rank
Classic Control         Cart Pole (OpenAI Gym, evaluation)       Mean Score  178.3     5
Classic Control         Lunar Lander (OpenAI Gym, evaluation)    Mean Score  163.5     5
Reinforcement Learning  Atari 2600 Beam Rider (random start)     Score       6.07e+3   5
