
Scalable Multi-Objective Reinforcement Learning with Fairness Guarantees using Lorenz Dominance

About

Multi-Objective Reinforcement Learning (MORL) aims to learn a set of policies that optimize trade-offs between multiple, often conflicting objectives. MORL is computationally more complex than single-objective RL, particularly as the number of objectives increases. Additionally, when objectives involve the preferences of agents or groups, incorporating fairness becomes both important and socially desirable. This paper introduces a principled algorithm that incorporates fairness into MORL while improving scalability to many-objective problems. We propose using Lorenz dominance to identify policies with equitable reward distributions and introduce lambda-Lorenz dominance to enable flexible fairness preferences. We release a new, large-scale real-world transport planning environment and demonstrate that our method encourages the discovery of fair policies, showing improved scalability in two large cities (Xi'an and Amsterdam). Our methods outperform common multi-objective approaches, particularly in high-dimensional objective spaces.
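To make the core comparison concrete: a reward vector v Lorenz-dominates u when the Lorenz vector of v (the cumulative sums of its components sorted in ascending order) Pareto-dominates the Lorenz vector of u. The sketch below illustrates this standard definition in Python with NumPy; it is an illustration of the general concept, not the paper's implementation, and the function names are our own.

```python
import numpy as np

def lorenz_vector(rewards):
    """Cumulative sums of the reward components, sorted ascending.

    Sorting ascending means early entries reflect the worst-off
    objectives, so equitable distributions score higher early on.
    """
    return np.cumsum(np.sort(np.asarray(rewards, dtype=float)))

def lorenz_dominates(v, u):
    """True iff v Lorenz-dominates u, i.e. the Lorenz vector of v
    Pareto-dominates the Lorenz vector of u (weakly better in every
    component, strictly better in at least one)."""
    lv, lu = lorenz_vector(v), lorenz_vector(u)
    return bool(np.all(lv >= lu) and np.any(lv > lu))

# The equitable split [2, 2] Lorenz-dominates the unequal split [3, 1]
# even though both have the same total reward.
print(lorenz_dominates([2, 2], [3, 1]))  # True
print(lorenz_dominates([3, 1], [2, 2]))  # False
```

Under this ordering, policies with more equitable reward distributions are preferred among those with comparable totals, which is what lets the method single out fair policies; the paper's lambda-Lorenz dominance then relaxes this to encode flexible fairness preferences.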

Dimitris Michailidis, Willem Röpke, Diederik M. Roijers, Sennay Ghebreab, Fernando P. Santos • 2024

Related benchmarks

Task | Dataset | Result | Rank
Multi-objective Reinforcement Learning | Xi'an transport planning environment | Normalized Hypervolume: 0.86 | 41
Multi-objective Reinforcement Learning | Amsterdam transport planning environment | Normalized Hypervolume: 0.81 | 40
Multi-objective Reinforcement Learning | Deep Sea Treasure | Hypervolume (HV): 2.28e+4 | 3
