
Pareto Conditioned Networks

About

In multi-objective optimization, learning all the policies that reach Pareto-efficient solutions is an expensive process. The set of optimal policies can grow exponentially with the number of objectives, and recovering all solutions requires an exhaustive exploration of the entire state space. We propose Pareto Conditioned Networks (PCN), a method that uses a single neural network to encompass all non-dominated policies. PCN associates every past transition with its episode's return. It trains the network such that, when conditioned on this same return, it should reenact said transition. In doing so we transform the optimization problem into a classification problem. We recover a concrete policy by conditioning the network on the desired Pareto-efficient solution. Our method is stable as it learns in a supervised fashion, thus avoiding moving target issues. Moreover, by using a single network, PCN scales efficiently with the number of objectives. Finally, it makes minimal assumptions on the shape of the Pareto front, which makes it suitable to a wider range of problems than previous state-of-the-art multi-objective reinforcement learning algorithms.
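The core idea (a single network conditioned on a desired return, trained by supervised classification on replayed transitions) can be sketched in a few lines. The toy MLP below is an illustration, not the authors' implementation: dimensions, the two-layer architecture, and the omission of PCN's horizon conditioning are all simplifications for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 2 objectives, 3 discrete actions.
STATE_DIM, N_OBJ, N_ACTIONS, HIDDEN = 4, 2, 3, 32

# Tiny two-layer MLP; input = state concatenated with the desired return.
W1 = rng.normal(0, 0.1, (STATE_DIM + N_OBJ, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))

def policy_logits(state, desired_return):
    """Action logits when conditioned on a desired episode return."""
    x = np.concatenate([state, desired_return])
    return np.tanh(x @ W1) @ W2

def train_step(state, achieved_return, action, lr=0.01):
    """Supervised cross-entropy step: conditioned on the return this
    episode actually achieved, the network should reenact the stored
    action. Returns the loss before the update."""
    global W1, W2
    x = np.concatenate([state, achieved_return])
    h = np.tanh(x @ W1)
    logits = h @ W2
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad_logits = p.copy()
    grad_logits[action] -= 1.0            # d(cross-entropy)/d(logits)
    grad_h = (grad_logits @ W2.T) * (1 - h ** 2)
    W2 -= lr * np.outer(h, grad_logits)   # plain gradient descent
    W1 -= lr * np.outer(x, grad_h)
    return -np.log(p[action] + 1e-12)

# Replaying one stored transition repeatedly drives its loss down.
state = rng.normal(size=STATE_DIM)
achieved = np.array([1.0, 0.5])           # episode return for 2 objectives
losses = [train_step(state, achieved, action=2) for _ in range(50)]

# At execution time, condition on a desired Pareto-efficient return
# and act greedily on the resulting logits.
chosen = int(np.argmax(policy_logits(state, achieved)))
```

The key point the sketch mirrors is that no value target is bootstrapped: the label is the stored action, so training is a stable classification problem rather than a moving-target regression.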

Mathieu Reymond, Eugenio Bargiacchi, Ann Nowé • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
Multi-objective Reinforcement Learning | MO-Gymnasium ResourceGathering | Sparsity | 642 | 8
Multi-objective Reinforcement Learning | MO-Gymnasium FourRoom | Sparsity | 342 | 8
Multi-objective Reinforcement Learning | MO-Gymnasium BreakableBottles | Sparsity | 36.5 | 8
Multi-objective Reinforcement Learning | MO-Gymnasium Fishwood | Sparsity | 1.66 | 8
Multi-objective Reinforcement Learning | MO-Gymnasium MOLunarLander | Sparsity | 12.6 | 8
Multi-objective Reinforcement Learning | MO-Gymnasium MOHalfcheetah | Sparsity | 9.42 | 8
Multi-objective Reinforcement Learning | MO-Gymnasium MOSwimmer | Sparsity | 9.23 | 8
Multi-objective Reinforcement Learning | MO-Gymnasium MOHumanoid | Sparsity | 58.7 | 8
Multi-objective Reinforcement Learning | MO-Gymnasium Deep Sea Treasure | Sparsity | 14.5 | 8
Multi-objective Reinforcement Learning | MO-Gymnasium HopperEnv | Sparsity | 1.24 | 8
Showing 10 of 18 rows.
