
Grounded Answers for Multi-agent Decision-making Problem through Generative World Model

About

Recent progress in generative models has stimulated significant innovations in many fields, such as image generation and chatbots. Despite their success, these models often produce sketchy and misleading solutions for complex multi-agent decision-making problems because they lack the trial-and-error experience and reasoning of humans. To address this limitation, we explore a paradigm that integrates a language-guided simulator into the multi-agent reinforcement learning pipeline to enhance the generated answers. The simulator is a world model that learns dynamics and reward separately: the dynamics model comprises an image tokenizer and a causal transformer that generates interaction transitions autoregressively, while the reward model is a bidirectional transformer trained by maximizing the likelihood of trajectories in expert demonstrations under language guidance. Given an image of the current state and the task description, we use the world model to train the joint policy and produce the image sequence as the answer by running the converged policy on the dynamics model. The empirical results demonstrate that this framework can improve the answers for multi-agent decision-making problems, showing superior performance on both training and unseen tasks of the StarCraft Multi-Agent Challenge benchmark. In particular, it can generate consistent interaction sequences and explainable reward functions at interaction states, opening the path for training generative models of the future.
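The rollout procedure described above (tokenize the current state image, autoregressively imagine transitions with the dynamics model, score states with the language-conditioned reward model) can be sketched as follows. All component implementations and names here (`tokenize_image`, `dynamics_step`, `reward_model`, `rollout`) are illustrative stand-ins, not the authors' code; a real system would use a learned image tokenizer, a causal transformer for dynamics, and a bidirectional transformer for reward.

```python
def tokenize_image(state_image):
    """Stand-in image tokenizer: map each pixel value to one of 16 discrete tokens."""
    return [int(v) % 16 for v in state_image]

def dynamics_step(tokens, joint_action):
    """Stand-in for the causal transformer: autoregressively produce the
    next state's tokens from the current tokens and the joint action."""
    return [(t + a) % 16 for t, a in zip(tokens, joint_action)]

def reward_model(tokens, task_description):
    """Stand-in for the bidirectional-transformer reward model: score a
    state under the language task description (here, a toy heuristic)."""
    target = len(task_description) % 16
    return -sum(abs(t - target) for t in tokens) / len(tokens)

def rollout(state_image, joint_policy, task_description, horizon=5):
    """Run a joint policy inside the world model; return the imagined
    token trajectory (the visual 'answer') and the per-step rewards."""
    tokens = tokenize_image(state_image)
    trajectory, rewards = [tokens], []
    for _ in range(horizon):
        action = joint_policy(tokens)            # joint action of all agents
        tokens = dynamics_step(tokens, action)   # imagined next state
        trajectory.append(tokens)
        rewards.append(reward_model(tokens, task_description))
    return trajectory, rewards

traj, rews = rollout(
    state_image=[3, 7, 11, 2],
    joint_policy=lambda toks: [1] * len(toks),   # trivial fixed policy
    task_description="defeat all enemy units",
    horizon=5,
)
print(len(traj), len(rews))  # prints "6 5": 6 imagined states, 5 rewards
```

In the paper's pipeline, the same loop serves two purposes: during training, the rewards drive policy optimization inside the world model; at answer time, the imagined state sequence itself (decoded back to images) is returned as the grounded answer.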

Zeyang Liu, Xinrui Yang, Shiguang Sun, Long Qian, Lipeng Wan, Xingyu Chen, Xuguang Lan • 2024

Related benchmarks

| Task                                | Dataset                                          | Result              | Rank |
|-------------------------------------|--------------------------------------------------|---------------------|------|
| Multi-Agent Reinforcement Learning  | StarCraft Multi-Agent Challenge (SMAC) 1c3s5z    | Win Rate 94.59      | 13   |
| Multi-Agent Reinforcement Learning  | SMAC 6h_vs_8z (test)                             | Average Score 18.97 | 12   |
| Multi-Agent Reinforcement Learning  | SMAC corridor (test)                             | Average Score 19.5  | 12   |
| Multi-Agent Reinforcement Learning  | SMAC 2c_vs_64zg (test)                           | Test Return 20.45   | 6    |
| Multi-Agent Reinforcement Learning  | SMAC 5m_vs_6m (test)                             | Test Return 18.96   | 6    |
| Multi-agent Decision Making         | SMAC 1c3s unseen (test)                          | Win Rate 56.47      | 3    |
| Multi-agent Decision Making         | SMAC 6m unseen (test)                            | Win Rate 97.85      | 3    |
| Multi-agent Decision Making         | SMAC 1c_vs_32zg unseen (test)                    | Win Rate 58.33      | 3    |
| Multi-agent Decision Making         | SMAC 3s2z_vs_2s3z unseen (test)                  | Win Rate 18.22      | 3    |
| Multi-agent Decision Making         | SMAC 1c3s6z (unseen test)                        | Win Rate 65.38      | 3    |

Showing 10 of 15 rows.
