
Ray: A Distributed Framework for Emerging AI Applications

About

The next generation of AI applications will continuously interact with the environment and learn from these interactions. These applications impose new and demanding systems requirements, both in terms of performance and flexibility. In this paper, we consider these requirements and present Ray, a distributed system to address them. Ray implements a unified interface that can express both task-parallel and actor-based computations, supported by a single dynamic execution engine. To meet the performance requirements, Ray employs a distributed scheduler and a distributed and fault-tolerant store to manage the system's control state. In our experiments, we demonstrate scaling beyond 1.8 million tasks per second and better performance than existing specialized systems for several challenging reinforcement learning applications.

Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, Ion Stoica • 2017

Related benchmarks

Task                    Dataset                           Metric                    Result  Rank
Reinforcement Learning  Procgen (test)                    BigFish Return            8.2     21
Reinforcement Learning  Procgen BigFish 1.0 (train)       Mean Train Performance    20.8    6
Reinforcement Learning  Procgen CoinRun 1.0 (train)       Mean Train Performance    10      6
Reinforcement Learning  Procgen FruitBot 1.0 (train)      Mean Train Performance    32.2    6
Reinforcement Learning  Procgen StarPilot 1.0 (train)     Mean Train Performance    44.1    6
Reinforcement Learning  Procgen CaveFlyer 1.0 (train)     Mean Train Performance    7.3     6
Reinforcement Learning  Procgen Jumper 1.0 (train levels) Mean Train Performance    9       6
Reinforcement Learning  Procgen Leaper 1.0 (train)        Mean Train Performance    6.9     6
