
MoST: Efficient Monarch Sparse Tuning for 3D Representation Learning

About

We introduce Monarch Sparse Tuning (MoST), the first reparameterization-based parameter-efficient fine-tuning (PEFT) method tailored for 3D representation learning. Unlike existing adapter-based and prompt-tuning 3D PEFT methods, MoST introduces no additional inference overhead and is compatible with many 3D representation learning backbones. At its core, we present a new family of structured matrices for 3D point clouds, Point Monarch, which captures local geometric features of irregular points while offering high expressiveness. MoST reparameterizes the dense update weight matrices as our sparse Point Monarch matrices, significantly reducing parameters while retaining strong performance. Experiments on various backbones show that MoST is simple, effective, and highly generalizable. It captures local features in point clouds, achieving state-of-the-art results on multiple benchmarks, e.g., 97.5% accuracy on ScanObjectNN (PB_T50_RS) and 96.2% on ModelNet40 classification, and it can be combined with other matrix decompositions (e.g., low-rank, Kronecker) to further reduce parameters.
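To give a feel for why reparameterizing a dense weight update as a Monarch-style matrix saves parameters, here is a minimal, self-contained sketch. It is an illustrative assumption, not the paper's Point Monarch code: it implements the generic Monarch factorization (two block-diagonal factors interleaved with a fixed stride permutation), which reduces a dense d×d update (d² parameters) to roughly 2·d^1.5 parameters when d = m².

```python
# Hedged sketch of a generic Monarch matrix-vector product (not the
# authors' Point Monarch implementation). A d x d matrix with d = m*m
# is factored as M = P^T L P R, where L and R are block-diagonal with
# m blocks of size m x m, and P is a fixed "transpose" permutation.

def blockdiag_matvec(blocks, x):
    """Apply a block-diagonal matrix (m blocks, each m x m) to vector x."""
    m = len(blocks)
    out = []
    for b in range(m):
        seg = x[b * m:(b + 1) * m]          # slice handled by this block
        for row in blocks[b]:
            out.append(sum(w * v for w, v in zip(row, seg)))
    return out

def permute(x, m):
    """Fixed stride permutation P: view x as an m x m grid, transpose, flatten."""
    return [x[j * m + i] for i in range(m) for j in range(m)]

def monarch_matvec(L, R, x, m):
    """Compute y = P^T L P R x without ever materializing the dense matrix."""
    y = blockdiag_matvec(R, x)
    y = permute(y, m)
    y = blockdiag_matvec(L, y)
    return permute(y, m)                     # P is its own inverse here

m = 4                                        # d = 16
d = m * m
# With identity blocks the whole Monarch matrix is the identity,
# so the product must return the input unchanged.
eye = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
L = [[row[:] for row in eye] for _ in range(m)]
R = [[row[:] for row in eye] for _ in range(m)]
x = [float(i) for i in range(d)]
y = monarch_matvec(L, R, x, m)
print(y == x)                                # -> True
print(2 * m**3, "params vs dense", d * d)    # -> 128 params vs dense 256
```

The parameter count is the point: two block-diagonal factors cost 2·m³ = 2·d^1.5 values versus d² for a dense update, and the structured product can still mix every input coordinate with every output coordinate through the permutation. Point Monarch additionally shapes the sparsity to capture local geometry of irregular points, which this generic sketch does not model.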

Xu Han, Yuan Tang, Jinfeng Xu, Xianzhi Li • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Semantic segmentation | S3DIS (Area 5) | mIoU | 58.9 | 799 |
| Part segmentation | ShapeNetPart | mIoU (Instance) | 86.0 | 198 |
| Shape classification | ModelNet40 | Accuracy | 94.7 | 85 |
| Shape classification | ScanObjectNN PB_T50_RS | OA | 92.85 | 72 |
| 3D point cloud classification | ScanObjectNN PB_T50_RS | Accuracy | 97.5 | 18 |
| 3D point cloud classification | ModelNet40 | Accuracy | 96.23 | 8 |
| Few-shot learning | ModelNet40 | Accuracy (5-way, 10-shot) | 97.1 | 4 |
