
Asynchronous Policy Gradient Aggregation for Efficient Distributed Reinforcement Learning

About

We study distributed reinforcement learning (RL) with policy gradient methods under asynchronous and parallel computations and communications. While non-distributed methods are well understood theoretically and have achieved remarkable empirical success, their distributed counterparts remain less explored, particularly in the presence of heterogeneous asynchronous computations and communication bottlenecks. We introduce two new algorithms, Rennala NIGT and Malenia NIGT, which implement asynchronous policy gradient aggregation and achieve state-of-the-art efficiency. In the homogeneous setting, Rennala NIGT provably improves the total computational and communication complexity while supporting the AllReduce operation. In the heterogeneous setting, Malenia NIGT simultaneously handles asynchronous computations and heterogeneous environments with strictly better theoretical guarantees. Our results are further corroborated by experiments, showing that our methods significantly outperform prior approaches.
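The core primitive described above, asynchronous policy gradient aggregation in the homogeneous setting, can be illustrated with a toy simulation. The sketch below is a generic "collect the first B gradients from whichever asynchronous workers finish first, then average and step" scheme; it is not the paper's actual Rennala NIGT algorithm, and all names, parameters, and the event-driven timing model are hypothetical illustrations.

```python
import heapq

def async_batch_aggregation(grad_fn, x0, worker_times, batch_size, steps, lr):
    """Illustrative sketch (NOT the paper's Rennala NIGT): the server keeps a
    single iterate x and, in each round, waits only for the first `batch_size`
    stochastic gradients computed at the current x, delivered by whichever
    asynchronous workers finish first; it then averages them (an
    AllReduce-style mean) and takes a gradient step."""
    x = x0
    for _ in range(steps):
        # every worker starts computing a gradient at the current iterate;
        # the heap orders (finish_time, worker) so fast workers may deliver
        # several gradients per round while slow ones deliver none
        finish = [(t, w) for w, t in enumerate(worker_times)]
        heapq.heapify(finish)
        grads = []
        while len(grads) < batch_size:
            t, w = heapq.heappop(finish)
            grads.append(grad_fn(x, w))                       # gradient arrives from worker w
            heapq.heappush(finish, (t + worker_times[w], w))  # worker w starts its next gradient
        x = x - lr * sum(grads) / len(grads)                  # averaged step on the server
    return x
```

For example, with a quadratic objective (gradient 2x) and one worker three times slower than the other, the iterate contracts toward zero each round while the slow worker contributes only occasionally; the point is that rounds end as soon as enough gradients arrive, not when every worker reports.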

Alexander Tyurin, Andrei Spiridonov, Varvara Rudenko • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Finding an epsilon-stationary point | Homogeneous Setup (Problem 1, theoretical bounds 1.0) | Computational Complexity Bound | 1 / 6 |
| Finding an epsilon-stationary point | Heterogeneous Setup | Computational Complexity | 1 / 3 |
