
Preference Conditioned Multi-Objective Reinforcement Learning: Decomposed, Diversity-Driven Policy Optimization

About

Multi-objective reinforcement learning (MORL) seeks to learn policies that balance multiple, often conflicting objectives. Although a single preference-conditioned policy is the most flexible and scalable solution, existing approaches remain brittle in practice, frequently failing to recover complete Pareto fronts. We show that this failure stems from two structural issues in current methods: destructive gradient interference caused by premature scalarization and representational collapse across the preference space. We introduce $D^3PO$, a PPO-based framework that reorganizes multi-objective policy optimization to address these issues directly. $D^3PO$ preserves per-objective learning signals through a decomposed optimization pipeline and integrates preferences only after stabilization, enabling reliable credit assignment. In addition, a scaled diversity regularizer enforces sensitivity of policy behavior to preference changes, preventing collapse. Across standard MORL benchmarks, including high-dimensional and many-objective control tasks, $D^3PO$ consistently discovers broader and higher-quality Pareto fronts than prior single- and multi-policy methods, matching or exceeding state-of-the-art hypervolume and expected utility while using a single deployable policy.
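One of the structural issues the abstract names is destructive gradient interference from premature scalarization: collapsing the reward vector into one scalar before learning lets a large-scale objective drown out the others' signal. Below is a minimal numeric sketch of that effect and of the decomposed alternative (per-objective signals normalized separately, preferences applied afterwards). The toy returns, scales, and normalization choices are illustrative assumptions, not the paper's actual $D^3PO$ algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy per-objective returns with very different scales: (batch, 2 objectives).
returns = np.stack([rng.normal(0.0, 100.0, 8),   # e.g. forward velocity
                    rng.normal(0.0, 1.0, 8)],    # e.g. energy cost
                   axis=1)
w = np.array([0.5, 0.5])                          # preference weights

# Premature scalarization: collapse objectives first, then normalize.
# The large-scale objective dominates, so the small one is effectively ignored.
scalar = returns @ w
adv_scalarized = (scalar - scalar.mean()) / scalar.std()

# Decomposed: standardize each objective's signal against its own baseline,
# then mix with the preference weights -- both objectives keep a usable signal.
per_obj = (returns - returns.mean(axis=0)) / returns.std(axis=0)
adv_decomposed = per_obj @ w
```

With equal preference weights, the scalarized signal is almost perfectly correlated with the 100x-scale objective alone, while the decomposed signal weighs both objectives; this is the kind of per-objective credit assignment the decomposed pipeline is meant to preserve.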

Tanmay Ambadkar, Sourav Panda, Shreyash Kale, Jonathan Dodge, Abhinav Verma • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continuous Control | Hopper-2d | Completion Time (hours) | 20 | 5 |
| Continuous Control | Hopper-3d | Completion Time (hours) | 30 | 5 |
| Continuous Control | Ant-2d | Completion Time (hours) | 35 | 5 |
| Continuous Control | Ant-3d | Completion Time (hours) | 45 | 5 |
| Continuous Control | Humanoid-2d | Computation Time (s) | 1.08e+5 | 5 |
| Multi-objective Reinforcement Learning | Minecart | Hypervolume (HV) | 7.39 | 4 |
| Multi-objective Reinforcement Learning | Lunar Lander 4d | Hypervolume (HV) | 1.23 | 4 |
| Continuous Control | Building-9d | Completion Time (hours) | 45 | 3 |
| Multi-objective Reinforcement Learning | Fruit Tree | Hypervolume (HV) | 3.42 | 3 |
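Several rows above report hypervolume (HV), the standard MORL quality metric: the volume of objective space dominated by the discovered Pareto front, measured relative to a reference point (larger is better). A minimal two-objective sketch of the computation follows; the function name, the toy front, and the reference point are illustrative, not taken from the benchmarks above.

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D maximization front, relative to `ref`."""
    # Keep only points that strictly dominate the reference point,
    # sorted by the first objective in descending order.
    pts = sorted((p for p in front if p[0] > ref[0] and p[1] > ref[1]),
                 key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:                      # dominated points add no area
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Three mutually non-dominated points w.r.t. reference (0, 0).
print(hypervolume_2d([(3, 1), (2, 2), (1, 3)], (0, 0)))  # → 6.0
```

In higher dimensions (e.g. the 4-objective Lunar Lander row) exact hypervolume is computed with specialized algorithms, but the dominated-volume idea is the same.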
