
MoE3D: Mixture of Experts meets Multi-Modal 3D Understanding

About

Multi-modal 3D understanding is a fundamental task in computer vision. Previous multi-modal fusion methods typically employ a single, dense fusion network that struggles to handle the significant heterogeneity and complexity across modalities, leading to suboptimal performance. In this paper, we propose MoE3D, which integrates Mixture of Experts (MoE) into the multi-modal learning framework. The core idea is to deploy a set of specialized "expert" networks, each adept at processing a specific modality or a mode of cross-modal interaction. Specifically, an MoE-based transformer is designed to better exploit the complementary information hidden in the visual features, and an information aggregation module is introduced to further enhance fusion performance. Top-1 gating routes each feature to a single expert within its expert group, ensuring high efficiency. We further propose a progressive pre-training strategy that leverages semantic and 2D priors, equipping the network with a good initialization. MoE3D achieves competitive performance across four prevalent 3D understanding tasks. Notably, it surpasses the top-performing counterpart by 6.1 mIoU on Multi3DRefer.
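The top-1 gating described above can be sketched as follows. This is a minimal, self-contained illustration of hard routing (a gate scores every expert and sends each feature to exactly the argmax expert), not the authors' implementation; the expert functions, dimensions, and gate weights are all toy assumptions.

```python
# Toy sketch of top-1 (hard) expert gating: score a feature vector
# against each expert's gate weights, then route it to the single
# best-scoring expert. All names/shapes are illustrative assumptions.
import random

random.seed(0)

NUM_EXPERTS = 4
DIM = 8

# Stand-in "experts": each is a distinct per-modality transform
# (here just a different scaling, purely for illustration).
experts = [lambda x, s=s: [v * s for v in x] for s in (0.5, 1.0, 1.5, 2.0)]

# Toy gating parameters: one score vector per expert.
gate_w = [[random.uniform(-1, 1) for _ in range(DIM)]
          for _ in range(NUM_EXPERTS)]

def top1_route(feature):
    """Return (chosen expert index, that expert's output) for one feature."""
    scores = [sum(w * v for w, v in zip(gw, feature)) for gw in gate_w]
    k = max(range(NUM_EXPERTS), key=lambda i: scores[i])
    # Only the selected expert runs, which is what makes top-1
    # gating cheap relative to dense fusion over all experts.
    return k, experts[k](feature)

feature = [random.uniform(-1, 1) for _ in range(DIM)]
idx, out = top1_route(feature)
print(idx, len(out))
```

Because only one expert executes per feature, compute cost stays roughly constant as the number of experts grows, which is the efficiency argument the abstract makes.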

Yu Li, Yuenan Hou, Yingmei Wei, Xinge Zhu, Yuexin Ma, Wenqi Shao, Yanming Guo · 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| 3D Question Answering | ScanQA (val) | CIDEr 92.7 | 133 |
| 3D Question Answering | SQA3D (test) | EM@1 56 | 55 |
| Referring 3D Instance Segmentation | ScanRefer (val) | mIoU 44.4 | 37 |
| 3D Referring Segmentation | Multi3DRefer (val) | mIoU 48.8 | 7 |
