
Fast3D: Accelerating 3D Multi-modal Large Language Models for Efficient 3D Scene Understanding

About

While 3D Multi-modal Large Language Models (MLLMs) demonstrate remarkable scene understanding capabilities, their practical deployment faces critical challenges due to computational inefficiency. The key bottleneck stems from processing the excessive object-centric visual tokens required for comprehensive 3D scene representation. Although visual token pruning has shown promise in accelerating 2D MLLMs, its applicability to 3D domains remains largely unexplored due to fundamental disparities in token structures. In this paper, we reveal two critical insights: (1) significant redundancy exists in object-level 3D token representations, analogous to patch-level redundancy in 2D systems; (2) global attention patterns exhibit strong predictive power for identifying non-essential tokens in 3D contexts. Building on these observations, we propose Fast3D, a plug-and-play visual token pruning framework for 3D MLLMs featuring two technical innovations: (1) Global Attention Prediction (GAP), where a lightweight neural network learns to predict the global attention distributions of the target model, enabling efficient token importance estimation for precise pruning guidance; (2) Sample-Adaptive visual token Pruning (SAP), which introduces dynamic token budgets through attention-based complexity assessment, automatically adjusting layer-wise pruning ratios based on input characteristics. Both techniques operate without modifying the parameters of the target model. Extensive evaluations across five benchmarks validate the effectiveness of Fast3D, particularly under high visual token pruning ratios. Code is available at https://github.com/wencan25/Fast3D.

Wencan Huang, Daizong Liu, Wei Hu · 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| 3D Visual Grounding | ScanRefer | Acc@0.5 | 51.02 | 142 |
| 3D Dense Captioning | Scan2Cap | CIDEr@0.5 | 75.83 | 96 |
| Multi-object 3D Visual Grounding | Multi3DRefer | F1@0.25 | 58.46 | 24 |
| 3D Question Answering | SQA3D | Exact Match (EM) | 53.95 | 21 |
| Visual Question Answering | ScanQA | CIDEr | 85.23 | 16 |
| 3D Scene Understanding | ScanRefer, Multi3DRefer, Scan2Cap, ScanQA, SQA3D | Score Ratio | 99.79 | 16 |
