LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning

About

Recent advances in Large Multimodal Models (LMMs) have enabled a wide range of applications in human-machine interaction. However, developing LMMs that can comprehend, reason, and plan in complex and diverse 3D environments remains challenging, especially given the need to understand permutation-invariant point cloud representations of a 3D scene. Existing works rely on multi-view images, projecting 2D features into 3D space as scene representations, which incurs substantial computational overhead and degrades performance. In this paper, we present LL3DA, a Large Language 3D Assistant that takes point clouds as direct input and responds to both textual instructions and visual prompts. This helps LMMs better comprehend human interactions and removes ambiguities in cluttered 3D scenes. Experiments show that LL3DA achieves remarkable results, surpassing various 3D vision-language models on both 3D Dense Captioning and 3D Question Answering.
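Concretely, "permutation-invariant" means the scene encoding must not depend on the order in which points happen to be stored. The sketch below (pure NumPy, toy dimensions; illustrative only, not the LL3DA implementation) shows the standard recipe behind such encoders: a shared per-point projection followed by a symmetric pooling operator, so shuffling the points leaves the scene feature unchanged.

```python
# Minimal sketch of a permutation-invariant point encoder (toy dimensions,
# pure NumPy; illustrative only, not the LL3DA architecture).
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(1024, 6))   # toy scene: N points, xyz + rgb
W = rng.normal(size=(6, 256))         # shared per-point projection weights

def encode(pts):
    feats = np.maximum(pts @ W, 0.0)  # shared per-point layer with ReLU
    return feats.max(axis=0)          # symmetric max-pool over all points

shuffled = points[rng.permutation(len(points))]
assert np.allclose(encode(points), encode(shuffled))  # point order does not matter
```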

Sijin Chen, Xin Chen, Chi Zhang, Mingsheng Li, Gang Yu, Hao Fei, Hongyuan Zhu, Jiayuan Fan, Tao Chen • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
3D Question Answering | ScanQA (val) | CIDEr | 79.08 | 133
3D Dense Captioning | Scan2Cap (val) | CIDEr@0.5 | 65.19 | 33
3D Question Answering | ScanQA v1.0 (test) | ROUGE | 35.9 | 26
3D Dense Captioning | Scan2Cap | BLEU-4@0.5 | 36.8 | 23
3D Question Answering | ScanQA | C Score | 76.8 | 16
Scene Spatial Awareness QA | 3D-GRAND | Binary Accuracy | 53.45 | 14
3D Question Answering | ScanQA v1.0 (val) | BLEU-4 | 13.53 | 13
Situated 3D Question Answering | SQA3D (test) | EM@1 | 53.6 | 12
3D Visual Question Answering | ScanQA | C Score | 76.8 | 10
3D Question Answering | MMScan QA | Overall Score | 38.5 | 7

Showing 10 of 19 rows.
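The dense-captioning entries above (CIDEr@0.5, BLEU-4@0.5) follow the m@kIoU convention introduced with the Scan2Cap benchmark: a generated caption only earns its sentence-level score when the predicted box overlaps the ground-truth box with IoU of at least k, and contributes 0 otherwise. The sketch below is a hedged illustration of that gating, not the official evaluation code; `caption_score` is a placeholder for whichever sentence metric (e.g., CIDEr or BLEU-4) is being gated.

```python
# Hedged sketch of m@kIoU gating for 3D dense captioning; not official code.
import numpy as np

def iou3d(a, b):
    """IoU of axis-aligned boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo, hi = np.maximum(a[:3], b[:3]), np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol = lambda box: np.prod(box[3:] - box[:3])
    return inter / (vol(a) + vol(b) - inter)

def metric_at_k_iou(preds, gts, caption_score, k=0.5):
    """Average caption_score over ground-truth objects, gated by box IoU >= k."""
    total = 0.0
    for gt in gts:
        best = max(preds, key=lambda p: iou3d(p["box"], gt["box"]))
        if iou3d(best["box"], gt["box"]) >= k:   # unmatched objects score 0
            total += caption_score(best["caption"], gt["captions"])
    return total / len(gts)
```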
