
Learning Zero-Shot Cooperation with Humans, Assuming Humans Are Biased

About

There is a recent trend of applying multi-agent reinforcement learning (MARL) to train an agent that can cooperate with humans in a zero-shot fashion without using any human data. The typical workflow is to first repeatedly run self-play (SP) to build a policy pool and then train the final adaptive policy against this pool. A crucial limitation of this framework is that every policy in the pool is optimized w.r.t. the environment reward function, which implicitly assumes that the testing partners of the adaptive policy will be precisely optimizing the same reward function as well. However, human objectives are often substantially biased according to their own preferences, which can differ greatly from the environment reward. We propose a more general framework, Hidden-Utility Self-Play (HSP), which explicitly models human biases as hidden reward functions in the self-play objective. By approximating the reward space as linear functions, HSP adopts an effective technique to generate an augmented policy pool with biased policies. We evaluate HSP on the Overcooked benchmark. Empirical results show that our HSP method produces higher rewards than baselines when cooperating with learned human models, manually scripted policies, and real humans. The HSP policy is also rated as the most assistive policy based on human feedback.
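The core idea of modeling hidden utilities as linear functions can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes event-count features phi(s, a) (e.g., onion pickups or dish deliveries in Overcooked) and a uniformly sampled weight vector; the helper names are hypothetical.

```python
import numpy as np

def sample_hidden_utility(num_features, scale=1.0, rng=None):
    """Sample a random linear weight vector over event features.

    Each weight biases one behavioral event (hypothetical sampling
    scheme; the paper's actual scheme may differ).
    """
    rng = rng or np.random.default_rng()
    return rng.uniform(-scale, scale, size=num_features)

def biased_reward(env_reward, event_features, w):
    """Hidden-utility reward: environment reward plus a linear bias w . phi.

    A self-play run trained on this reward yields one "biased" policy;
    repeating with different w vectors populates the augmented pool.
    """
    return env_reward + float(np.dot(w, event_features))

# Example: a partner who strongly prefers event 1 and avoids event 0.
w = np.array([-2.0, 3.0])
phi = np.array([1.0, 1.0])  # both events occurred once this step
r = biased_reward(5.0, phi, w)  # 5.0 - 2.0 + 3.0 = 6.0
```

Training the final adaptive policy against a pool built from many such `w` vectors exposes it to partners whose objectives deviate from the environment reward, which is the failure mode of plain self-play pools.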

Chao Yu, Jiaxuan Gao, Weilin Liu, Botian Xu, Hao Tang, Jiaqi Yang, Yu Wang, Yi Wu • 2023

Related benchmarks

Task                       Dataset                 Metric       Result    Rank
Multi-agent coordination   Foraging Experiment 3   IQM Return   5.93      5
Multi-agent coordination   Foraging Experiment 1   IQM Return   4.654     5
Multi-agent coordination   Foraging Experiment 2   IQM Return   5.485     5
Multi-agent coordination   Predator Prey           IQM Return   1.569     5
Multi-agent coordination   Overcooked              IQM Return   0.00e+0   5
