
An Embodied Generalist Agent in 3D World

About

Leveraging massive knowledge from large language models (LLMs), recent machine learning models show notable successes in general-purpose task solving in diverse domains such as computer vision and robotics. However, several significant challenges remain: (i) most of these models rely on 2D images yet exhibit a limited capacity for 3D input; (ii) these models rarely explore the tasks inherently defined in the 3D world, e.g., 3D grounding, embodied reasoning, and acting. We argue that these limitations significantly hinder current models from performing real-world tasks and approaching general intelligence. To this end, we introduce LEO, an embodied multi-modal generalist agent that excels in perceiving, grounding, reasoning, planning, and acting in the 3D world. LEO is trained with a unified task interface, model architecture, and objective in two stages: (i) 3D vision-language (VL) alignment and (ii) 3D vision-language-action (VLA) instruction tuning. We collect large-scale datasets comprising diverse object-level and scene-level tasks, which require considerable understanding of and interaction with the 3D world. Moreover, we meticulously design an LLM-assisted pipeline to produce high-quality 3D VL data. Through extensive experiments, we demonstrate LEO's remarkable proficiency across a wide spectrum of tasks, including 3D captioning, question answering, embodied reasoning, navigation, and manipulation. Our ablative studies and scaling analyses further provide valuable insights for developing future embodied generalist agents. Code and data are available on the project page.

Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, Siyuan Huang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| 3D Question Answering | ScanQA (val) | CIDEr | 101.4 | 133 |
| 3D Question Answering | SQA3D (test) | EM@1 | 50 | 55 |
| 3D Dense Captioning | Scan2Cap (val) | CIDEr@0.5 | 68.4 | 33 |
| 3D Question Answering | ScanQA v1.0 (test) | ROUGE | 49.2 | 26 |
| 3D Dense Captioning | Scan2Cap | BLEU-4@0.5 | 38.2 | 23 |
| 3D Question Answering | ScanQA | C Score | 101.4 | 16 |
| Embodied Object QA | 3D-GRAND | GPT-4 Score | 0.3928 | 15 |
| Scene Spatial Awareness QA | 3D-GRAND | Binary Accuracy | 49.74 | 14 |
| 3D Question Answering | ScanQA v1.0 (val) | BLEU-4 | 11.5 | 13 |
| Situated 3D Question Answering | SQA3D (test) | EM@1 | 50 | 12 |
Showing 10 of 26 rows
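The EM@1 figures above report exact-match accuracy: the model's single top answer must match a reference answer exactly. A minimal sketch of how such a score is commonly computed is below; the exact string normalization varies per benchmark, so the `norm` step here is an illustrative assumption, not the official SQA3D evaluation code.

```python
def em_at_1(prediction: str, answers: list[str]) -> bool:
    """Exact match at top-1: the predicted answer must equal one of the
    reference answers after light normalization (lowercase, trim,
    collapse whitespace). Normalization details are an assumption."""
    norm = lambda s: " ".join(s.lower().strip().split())
    return norm(prediction) in {norm(a) for a in answers}

# Hypothetical (prediction, references) pairs for illustration.
samples = [
    ("On the desk ", ["on the desk", "desk"]),   # matches after normalization
    ("a brown chair", ["black chair"]),          # no match
]
em_score = 100 * sum(em_at_1(p, refs) for p, refs in samples) / len(samples)
print(em_score)  # 50.0
```

An EM@1 of 50 in the table above thus means half of the model's top answers exactly matched a reference answer.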

Other info

Code
