
Hierarchical Reward Design from Language: Enhancing Alignment of Agent Behavior with Human Specifications

About

When training artificial intelligence (AI) to perform tasks, humans often care not only about whether a task is completed but also how it is performed. As AI agents tackle increasingly complex tasks, aligning their behavior with human-provided specifications becomes critical for responsible AI deployment. Reward design provides a direct channel for such alignment by translating human expectations into reward functions that guide reinforcement learning (RL). However, existing methods are often too limited to capture nuanced human preferences that arise in long-horizon tasks. Hence, we introduce Hierarchical Reward Design from Language (HRDL): a problem formulation that extends classical reward design to encode richer behavioral specifications for hierarchical RL agents. We further propose Language to Hierarchical Rewards (L2HR) as a solution to HRDL. Experiments show that AI agents trained with rewards designed via L2HR not only complete tasks effectively but also better adhere to human specifications. Together, HRDL and L2HR advance the research on human-aligned AI agents.
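To make the "not only whether but how" idea concrete, here is a minimal illustrative sketch (not the paper's L2HR algorithm; all names and weights are hypothetical) of a hierarchical reward that combines a high-level task-completion signal with a lower-level term penalizing violations of a behavioral specification:

```python
def hierarchical_reward(task_done: bool,
                        subgoal_reached: bool,
                        spec_violations: int,
                        w_task: float = 10.0,
                        w_subgoal: float = 1.0,
                        w_spec: float = 0.5) -> float:
    """Toy hierarchical reward: rewards *whether* the task and its
    subgoals are achieved, while penalizing *how* the agent behaves
    via a count of behavioral-specification violations."""
    # High-level terms: task outcome and subgoal progress.
    high_level = w_task * task_done + w_subgoal * subgoal_reached
    # Low-level term: penalize deviations from the human specification.
    low_level = -w_spec * spec_violations
    return high_level + low_level

# Completing the task with no violations scores higher than completing
# it while violating the specification.
r_clean = hierarchical_reward(task_done=True, subgoal_reached=True,
                              spec_violations=0)
r_messy = hierarchical_reward(task_done=True, subgoal_reached=True,
                              spec_violations=4)
```

In this toy setup `r_clean` is 11.0 and `r_messy` is 9.0, so an RL agent maximizing return is steered toward completing the task in the specified manner rather than by any means available.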

Zhiqin Qian, Ryan Diaz, Sangwon Seo, Vaibhav Unhelkar • 2026

Related benchmarks

Task                        Dataset       Metric             Result  Rank
Hierarchical Reward Design  Rescue        High-Level Reward  16.65   3
Hierarchical Reward Design  iTHOR         High-Level Reward  14.19   3
Hierarchical Reward Design  Kitchen       High-Level Reward  0.39    3
Human-agent alignment       Rescue World  Persistence        76.92   2
Human-agent alignment       Kitchen       Chopping Score     71.43   2
