
From Multimodal LLMs to Generalist Embodied Agents: Methods and Lessons

About

We examine the capability of Multimodal Large Language Models (MLLMs) to tackle diverse domains that extend beyond the traditional language and vision tasks these models are typically trained on. Specifically, our focus lies in areas such as Embodied AI, Games, UI Control, and Planning. To this end, we introduce a process of adapting an MLLM to a Generalist Embodied Agent (GEA). GEA is a single unified model capable of grounding itself across these varied domains through a multi-embodiment action tokenizer. GEA is trained with supervised learning on a large dataset of embodied experiences and with online RL in interactive simulators. We explore the data and algorithmic choices necessary to develop such a model. Our findings reveal the importance of training with cross-domain data and online RL for building generalist agents. The final GEA model achieves strong generalization performance to unseen tasks across diverse benchmarks compared to other generalist models and benchmark-specific approaches.
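The multi-embodiment action tokenizer mentioned above maps actions from different embodiments into the model's token space. As a rough illustration only (the class and parameter names below are hypothetical, not from the paper), a common approach is to uniformly discretize each continuous action dimension into bins and map each bin to a reserved token id, so that robot control, game, and UI actions can all be predicted as ordinary next tokens:

```python
# Hypothetical sketch of a continuous-action tokenizer; GEA's actual
# tokenizer design and vocabulary layout are described in the paper.
class ActionTokenizer:
    def __init__(self, num_bins=256, low=-1.0, high=1.0, vocab_offset=32000):
        self.num_bins = num_bins          # bins per action dimension
        self.low, self.high = low, high   # assumed normalized action range
        self.vocab_offset = vocab_offset  # first reserved action-token id

    def encode_continuous(self, action):
        """Map each action dimension to a bin index, then to a token id."""
        tokens = []
        for a in action:
            a = min(max(a, self.low), self.high)  # clip to valid range
            frac = (a - self.low) / (self.high - self.low)
            bin_idx = min(int(frac * self.num_bins), self.num_bins - 1)
            tokens.append(self.vocab_offset + bin_idx)
        return tokens

    def decode_continuous(self, tokens):
        """Invert: token id -> bin center in the original action range."""
        width = (self.high - self.low) / self.num_bins
        return [self.low + (t - self.vocab_offset + 0.5) * width
                for t in tokens]
```

Discrete action spaces (e.g. game buttons or UI taps) can be handled by assigning each action its own dedicated token, so a single decoder head covers every embodiment.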

Andrew Szot, Bogdan Mazoure, Omar Attia, Aleksei Timofeev, Harsh Agrawal, Devon Hjelm, Zhe Gan, Zsolt Kira, Alexander Toshev • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Robotic Manipulation | Meta-World | Average Success Rate | 94.7 | 27 |
| Manipulation | Habitat Pick | Success Rate | 82.5 | 3 |
| Video Games | Procgen | Expert Performance | 44 | 3 |
| Manipulation | Calvin ABC->D | Success Rate | 90 | 3 |
| Manipulation | ManiSkill | Success Rate | 13.6 | 3 |
| UI Control | AndroidControl | Success Rate | 57.3 | 2 |
| Manipulation | Habitat Place | Success Rate | 0.935 | 2 |
| Navigation | Habitat Nav | Success Rate | 62.5 | 2 |
| Navigation | BabyAI | Success Rate | 91.1 | 2 |
| Planning | LangR | Success Rate | 50 | 2 |

Showing 10 of 11 rows.
