
PCHC: Enabling Preference Conditioned Humanoid Control via Multi-Objective Reinforcement Learning

About

Humanoid robots often need to balance competing objectives, such as maximizing speed while minimizing energy consumption. While current reinforcement learning (RL) methods can master complex skills like fall recovery and perceptive locomotion, they are constrained by fixed weighting strategies that produce a single suboptimal policy rather than a diverse set of solutions for sophisticated multi-objective control. In this paper, we propose a novel framework leveraging Multi-Objective Reinforcement Learning (MORL) to achieve Preference-Conditioned Humanoid Control (PCHC). Unlike conventional methods that require training a series of policies to approximate the Pareto front, our framework enables a single, preference-conditioned policy to exhibit a wide spectrum of diverse behaviors. To effectively integrate these requirements, we introduce a Beta distribution-based alignment mechanism in which preference vectors modulate a Mixture-of-Experts (MoE) module. We validated our approach on two representative humanoid tasks. Extensive simulations and real-world experiments demonstrate that the proposed framework allows the robot to adaptively shift its objective priorities in real time based on the input preference condition.

Huanyu Li, Dewei Wang, Xinmiao Wang, Xinzhe Liu, Peng Liu, Chenjia Bai, Xuelong Li • 2026
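The core idea in the abstract, a single policy whose behavior is steered by a preference vector gating a Mixture-of-Experts module, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the linear experts, the gating form, and the Beta-distribution parameters are all assumptions for clarity.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample_preference(alpha=2.0, beta=2.0, rng=random):
    # A 2-objective preference (e.g., speed vs. energy) drawn from a
    # Beta distribution; the exact parameterization used in the paper
    # is assumed here, not confirmed.
    w = rng.betavariate(alpha, beta)
    return [w, 1.0 - w]

class PreferenceMoE:
    """Toy preference-conditioned Mixture-of-Experts policy head.

    Each expert is a linear map from observations to actions; a gating
    layer turns the preference vector into mixture weights. All names,
    shapes, and the gating form are illustrative, not the paper's.
    """
    def __init__(self, n_experts, obs_dim, act_dim, n_objectives=2, seed=0):
        rng = random.Random(seed)
        # One (act_dim x obs_dim) weight matrix per expert.
        self.experts = [
            [[rng.gauss(0, 0.1) for _ in range(obs_dim)] for _ in range(act_dim)]
            for _ in range(n_experts)
        ]
        # Gating weights: map the preference vector to one logit per expert.
        self.gate = [[rng.gauss(0, 0.1) for _ in range(n_objectives)]
                     for _ in range(n_experts)]

    def gate_weights(self, pref):
        logits = [sum(g * p for g, p in zip(row, pref)) for row in self.gate]
        return softmax(logits)

    def act(self, obs, pref):
        # Mix expert outputs according to the preference-driven gate.
        weights = self.gate_weights(pref)
        action = [0.0] * len(self.experts[0])
        for w, expert in zip(weights, self.experts):
            for i, row in enumerate(expert):
                action[i] += w * sum(a * o for a, o in zip(row, obs))
        return action
```

Varying the preference vector at inference time shifts the gate's mixture weights and hence the blended action, which is the mechanism by which a single policy can cover a range of trade-offs rather than one fixed weighting.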

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Humanoid Fall Recovery | Isaac Gym Unitree G1 Humanoid | Avg Energy Overall (J): 463.4 | 12 |
| Humanoid Locomotion | Unitree G1 Locomotion Isaac Gym (Simulation) | Average Stride: 0.79 | 6 |
