
MEW-UNet: Multi-axis representation learning in frequency domain for medical image segmentation

About

Recently, the Vision Transformer (ViT) has been widely adopted across computer vision because its self-attention mechanism models global context in the spatial domain. In medical image segmentation (MIS) in particular, many works combine ViT and CNN, and some directly use pure ViT-based models. However, these works improve models in the spatial domain while overlooking the importance of frequency-domain information. We therefore propose Multi-axis External Weights UNet (MEW-UNet) for MIS, a U-shaped architecture in which the self-attention of ViT is replaced by our Multi-axis External Weights block. Specifically, the block performs a Fourier transform along the three axes of the input feature and applies external weights, generated by our Weights Generator, in the frequency domain. An inverse Fourier transform then maps the features back to the spatial domain. We evaluate our model on four datasets and achieve state-of-the-art performance; in particular, on the Synapse dataset our method outperforms MT-UNet by 10.15 mm in terms of HD95. Code is available at https://github.com/JCruan519/MEW-UNet.
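To make the core idea concrete, here is a minimal NumPy sketch of per-axis frequency-domain weighting: transform the feature along each axis, multiply by a weight tensor in the frequency domain, and transform back. This is an illustrative toy, not the authors' implementation (which is in the linked repository); the function name and the fixed (H, W, C) layout are assumptions, and in the real block the weights would come from the learned Weights Generator rather than being passed in.

```python
import numpy as np

def multi_axis_frequency_weighting(x, weights):
    """Toy sketch of multi-axis external weighting in the frequency domain.

    x       : real feature map of shape (H, W, C)
    weights : three complex arrays, one per axis, each the same shape as x
              (stand-ins for the outputs of a learned Weights Generator)
    """
    out = np.zeros(x.shape, dtype=float)
    for axis, w in enumerate(weights):
        freq = np.fft.fft(x, axis=axis)            # spatial -> frequency domain
        freq = freq * w                            # apply external weights
        out += np.fft.ifft(freq, axis=axis).real   # frequency -> spatial domain
    return out / len(weights)                      # average the three branches
```

With all-ones weights each branch is an identity (FFT followed by inverse FFT), so the function returns the input unchanged; non-trivial weights act as a learned per-frequency filter along each axis.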

Jiacheng Ruan, Mingye Xie, Suncheng Xiang, Ting Liu, Yuzhuo Fu • 2022

Related benchmarks

Task                        Dataset                                                    Result           Rank
Medical Image Segmentation  BUSI (test)                                                Dice 85.99       121
Polyp Segmentation          Kvasir-SEG (test)                                          mIoU 77.53       87
Multi-organ Segmentation    Synapse multi-organ segmentation (test)                    Avg DSC 0.7892   50
Prostate Segmentation       PROMISE12                                                  DSC 89.42        24
Gland Segmentation          GlaS (test)                                                F1 Score 87.92   22
Segmentation                Tufts Dental Dataset (TDD) (test)                          mIoU 82.87       12
Multi-organ Segmentation    Synapse multi-organ four-fold cross-validation (official)  Aorta DSC 86.68  10
