
Graph-Based Multi-Modal Light-weight Network for Adaptive Brain Tumor Segmentation

About

Multi-modal brain tumor segmentation remains challenging for practical deployment due to the high computational costs of mainstream models. In this work, we propose GMLN-BTS, a Graph-based Multi-modal interaction Lightweight Network for brain tumor segmentation. Our architecture achieves high-precision, resource-efficient segmentation through three key components. First, a Modality-Aware Adaptive Encoder (M2AE) facilitates efficient multi-scale semantic extraction. Second, a Graph-based Multi-Modal Collaborative Interaction Module (G2MCIM) leverages graph structures to model complementary cross-modal relationships. Finally, a Voxel Refinement UpSampling Module (VRUM) integrates linear interpolation with multi-scale transposed convolutions to suppress artifacts and preserve boundary details. Experimental results on BraTS 2017, 2019, and 2021 benchmarks demonstrate that GMLN-BTS achieves state-of-the-art performance among lightweight models. With only 4.58M parameters, our method reduces parameter count by 98% compared to mainstream 3D Transformers while significantly outperforming existing compact approaches.
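The abstract describes VRUM as fusing a linear-interpolation upsampling path with transposed convolutions to suppress upsampling artifacts while preserving boundaries. The paper's module operates on 3D voxel features; the following is only a minimal 1-D numpy sketch of that fusion idea, with placeholder kernel values, not the authors' implementation.

```python
import numpy as np

def linear_upsample(x, factor=2):
    # Smooth path: 1-D linear interpolation to length len(x) * factor.
    # Interpolation is artifact-free but cannot learn boundary detail.
    n = len(x)
    grid = np.linspace(0, n - 1, n * factor)
    return np.interp(grid, np.arange(n), x)

def transposed_conv1d(x, kernel, stride=2):
    # Learned path: naive 1-D transposed convolution. Each input value
    # scatters a scaled copy of the kernel into the upsampled output;
    # overlapping contributions are summed (this overlap is the source
    # of the checkerboard artifacts the interpolation path counteracts).
    n, k = len(x), len(kernel)
    out = np.zeros(n * stride + k - stride)
    for i, v in enumerate(x):
        out[i * stride : i * stride + k] += v * kernel
    return out[: n * stride]

x = np.array([0.0, 1.0, 2.0, 3.0])
interp = linear_upsample(x)                            # smooth path
learned = transposed_conv1d(x, np.array([0.5, 0.5]))   # placeholder kernel
fused = interp + learned                               # VRUM-style fusion (sketch)
```

In the actual module this fusion would use 3D trilinear interpolation and multi-scale 3D transposed convolutions with learned kernels; the sketch only illustrates why summing the two paths can cancel the periodic artifacts of the learned path.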

Guohao Huo, Ruiting Dai, Zitong Wang, Junxin Kong, Hao Tang• 2025

Related benchmarks

Task                                  Dataset     Result                       Rank
Brain Tumor Segmentation              BraTS 2019  WT Segmentation Score: 91.3  15
Multi-modal brain tumor segmentation  BraTS 2017  Average Score: 85.1          6
Multi-modal brain tumor segmentation  BraTS 2021  Average Score: 88.7          6
