
Grounding Multimodal Large Language Models in Actions

About

Multimodal Large Language Models (MLLMs) have demonstrated a wide range of capabilities across many domains, including Embodied AI. In this work, we study how best to ground an MLLM in different embodiments and their associated action spaces, with the goal of leveraging the MLLM's multimodal world knowledge. We first generalize a number of methods through a unified architecture and the lens of action space adapters. For continuous actions, we show that a learned tokenization allows for sufficient modeling precision, yielding the best performance on downstream tasks. For discrete actions, we demonstrate that semantically aligning these actions with the native output token space of the MLLM leads to the strongest performance. We arrive at these lessons via a thorough study of seven action space adapters on five different environments, encompassing over 114 embodied tasks.
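
No code accompanies this abstract, so the sketch below is only an illustration of the two grounding strategies it describes: a learned tokenization that discretizes continuous actions into codebook indices (which can then be added to the model's vocabulary), and a semantic mapping that expresses discrete actions as phrases in the MLLM's native token space. All names here (ContinuousActionTokenizer, SEMANTIC_ACTIONS, and so on) are hypothetical, and this PyTorch snippet is a minimal sketch of the general idea, not the authors' implementation.

    import torch
    import torch.nn as nn

    class ContinuousActionTokenizer(nn.Module):
        """Hypothetical learned action tokenizer: encodes a continuous
        action, snaps it to the nearest codebook entry, and uses that
        entry's index as a discrete token."""

        def __init__(self, action_dim: int, codebook_size: int = 256,
                     latent_dim: int = 64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(action_dim, latent_dim), nn.ReLU(),
                nn.Linear(latent_dim, latent_dim))
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                nn.Linear(latent_dim, action_dim))
            self.codebook = nn.Embedding(codebook_size, latent_dim)

        def tokenize(self, actions: torch.Tensor) -> torch.Tensor:
            # Nearest-codebook-entry lookup: one discrete id per action.
            z = self.encoder(actions)                     # (B, latent_dim)
            dists = torch.cdist(z, self.codebook.weight)  # (B, codebook_size)
            return dists.argmin(dim=-1)                   # (B,) token ids

        def detokenize(self, token_ids: torch.Tensor) -> torch.Tensor:
            # Decode codebook entries back into executable actions.
            return self.decoder(self.codebook(token_ids))

    # Discrete actions: rather than minting arbitrary new token ids,
    # map each action to a natural-language phrase the MLLM already
    # models, so its existing language priors carry over.
    SEMANTIC_ACTIONS = {
        0: "pick up the object",
        1: "place the object",
        2: "open the drawer",
    }

    def discrete_action_to_text(action_id: int) -> str:
        return SEMANTIC_ACTIONS[action_id]

    # Usage (illustrative): a batch of 7-DoF arm actions.
    tokenizer = ContinuousActionTokenizer(action_dim=7)
    actions = torch.randn(4, 7)
    ids = tokenizer.tokenize(actions)    # discrete ids to feed the MLLM
    recon = tokenizer.detokenize(ids)    # continuous actions to execute

In a setup like this, the tokenizer would be trained with an action-reconstruction objective before its indices join the vocabulary, while the semantic mapping for discrete actions adds no new parameters at all, consistent with the abstract's finding that alignment with the native token space works best for discrete actions.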

Andrew Szot, Bogdan Mazoure, Harsh Agrawal, Devon Hjelm, Zsolt Kira, Alexander Toshev • 2024

Related benchmarks

Task                  Dataset         Result                     Rank
Robotic Manipulation  Meta-World      Average Success Rate: 84   27
Manipulation          Calvin ABC->D   Success Rate: 82.4         3
Planning              LangR           Success Rate: 51           2
