Off-Belief Learning

About

The standard problem setting in Dec-POMDPs is self-play, where the goal is to find a set of policies that play optimally together. Policies learned through self-play may adopt arbitrary conventions and implicitly rely on multi-step reasoning based on fragile assumptions about other agents' actions, and thus fail when paired with humans or independently trained agents at test time. To address this, we present off-belief learning (OBL). At each timestep, OBL agents follow a policy $\pi_1$ that is optimized assuming past actions were taken by a given, fixed policy ($\pi_0$), but assuming that future actions will be taken by $\pi_1$. When $\pi_0$ is uniform random, OBL converges to an optimal policy that does not rely on inferences based on other agents' behavior (an optimal grounded policy). OBL can be iterated in a hierarchy, where the optimal policy from one level becomes the input to the next, thereby introducing multi-level cognitive reasoning in a controlled manner. Unlike existing approaches, which may converge to any equilibrium policy, OBL converges to a unique policy, making it suitable for zero-shot coordination (ZSC). OBL can be scaled to high-dimensional settings with a fictitious transition mechanism and shows strong performance in both a toy setting and Hanabi, a benchmark for human-AI coordination and ZSC.
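
To make the fictitious transition concrete, here is a minimal Monte-Carlo sketch of the OBL action-value target in a tabular toy setting. This is an illustration, not the paper's implementation; `sample_belief`, `env_step`, and `v_pi1` are hypothetical helpers standing in for the $\pi_0$-induced belief, the environment simulator, and the value of continuing play with $\pi_1$.

```python
def obl_q_target(h, a, sample_belief, env_step, v_pi1,
                 gamma=0.99, n_samples=32):
    """Monte-Carlo estimate of the OBL action value Q_{pi0->pi1}(h, a).

    Hypothetical helpers (assumptions, not the paper's API):
    - sample_belief(h): draws a hidden state s ~ B_{pi0}(h), i.e. a state
      consistent with the action-observation history h under the assumption
      that PAST actions were produced by the fixed base policy pi0.
    - env_step(s, a): applies the fictitious transition from the resampled
      state, returning (reward, next_state, done).
    - v_pi1(s_next): value of continuing play with pi1 itself, so FUTURE
      actions are assumed to come from the learned policy.
    """
    total = 0.0
    for _ in range(n_samples):
        s = sample_belief(h)               # past explained by pi0
        r, s_next, done = env_step(s, a)   # fictitious transition
        total += r if done else r + gamma * v_pi1(s_next)  # future under pi1
    return total / n_samples
```

Because the state is resampled from the $\pi_0$-belief rather than taken from the actual self-play trajectory, any hidden information that a partner's action would otherwise signal is discarded, which is what removes convention-based inference from the learned policy.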
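The hierarchy is mechanically simple. A sketch under stated assumptions: `train_obl` is a hypothetical trainer that runs OBL to convergence against the beliefs induced by its argument; level 1 grounds beliefs in a uniform random $\pi_0$, and each further level grounds them in the previous level's output, adding one controlled step of cognitive reasoning per level.

```python
def obl_hierarchy(train_obl, pi_uniform, levels=2):
    """Iterate OBL: the converged policy of one level becomes the fixed
    past-policy (pi0) of the next level."""
    pi = pi_uniform                # level 0: uniform random base policy
    for _ in range(levels):
        pi = train_obl(pi)         # level k grounds beliefs in level k-1
    return pi
```

Since each level's fixed point is unique given its base policy, the whole hierarchy is determined up to optimization noise, which is what makes it usable for zero-shot coordination.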

Hengyuan Hu, Adam Lerer, Brandon Cui, David Wu, Luis Pineda, Noam Brown, Jakob Foerster • 2021

Related benchmarks

Task                  Dataset              Metric      Score  Rank
Ad-hoc Coordination   Hanabi w/ Color Bot  Game Score  21.78  5
Cooperative Play      Hanabi Cross-Play    Score       23.76  5
Ad-hoc Coordination   Hanabi w/ Clone Bot  Score       16     5
Cooperative Play      Hanabi Self-play     Score       24.1   5
Ad-hoc Coordination   Hanabi w/ Rank Bot   Score       14.46  4