
An OpenMind for 3D medical vision self-supervised learning

About

The field of self-supervised learning (SSL) for 3D medical images lacks consistency and standardization. While many methods have been developed, it is impossible to identify the current state of the art, due to i) small and varying pre-training datasets, ii) differing architectures, and iii) evaluation on differing downstream datasets. In this paper, we bring clarity to this field and lay the foundation for further method advancements through three key contributions: We a) publish the largest publicly available pre-training dataset, comprising 114k 3D brain MRI volumes, enabling all practitioners to pre-train at scale. We b) benchmark existing 3D self-supervised learning methods on this dataset for a state-of-the-art CNN and Transformer architecture, clarifying the state of 3D SSL pre-training. Among many findings, we show that pre-trained methods can exceed a strong from-scratch nnU-Net ResEnc-L baseline. Lastly, we c) publish the code of our pre-training and fine-tuning frameworks and provide the pre-trained models created during the benchmarking process to facilitate rapid adoption and reproduction.
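Many of the 3D SSL methods in such benchmarks are variants of masked image modeling: a fraction of the input volume is hidden and the network is trained to reconstruct it. The toy sketch below (pure Python on a flattened voxel list; the function names and the 1-D simplification are my own, not the paper's code) illustrates the masking step and the masked-only reconstruction loss.

```python
import random

def mask_volume(voxels, mask_ratio=0.6, mask_value=0.0, seed=0):
    """Randomly mask a fraction of (flattened) voxels.

    Returns the corrupted input fed to the encoder and the indices
    the model must reconstruct. A real 3D pipeline would mask whole
    patches of the volume rather than individual voxels.
    """
    rng = random.Random(seed)
    n = len(voxels)
    masked_idx = set(rng.sample(range(n), int(n * mask_ratio)))
    corrupted = [mask_value if i in masked_idx else v
                 for i, v in enumerate(voxels)]
    return corrupted, sorted(masked_idx)

def reconstruction_loss(pred, target, masked_idx):
    """MSE computed only on masked positions, as in MAE-style objectives."""
    return sum((pred[i] - target[i]) ** 2 for i in masked_idx) / len(masked_idx)
```

After pre-training with such an objective, the encoder weights are transferred and fine-tuned on the downstream segmentation or classification task.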

Tassilo Wald, Constantin Ulrich, Jonathan Suprijadi, Sebastian Ziegler, Michal Nohel, Robin Peretzke, Gregor Köhler, Klaus H. Maier-Hein • 2024

Related benchmarks

| Task           | Dataset                  | Metric           | Result | Rank |
|----------------|--------------------------|------------------|--------|------|
| Segmentation   | LiTS                     | Dice Score       | 55.8   | 45   |
| Segmentation   | ACDC                     | DSC              | 75.8   | 41   |
| Classification | Kidney Trauma 27 (test)  | AUC              | 60.3   | 27   |
| Classification | Liver Trauma 27 (test)   | AUC              | 68.9   | 27   |
| Classification | Spleen Trauma 27 (test)  | AUC              | 73.5   | 27   |
| Classification | RSNA ICH 19 (test)       | AUC              | 72.7   | 27   |
| Segmentation   | BraTS T1CE               | Dice Score       | 70.3   | 25   |
| Segmentation   | SS H&N                   | Dice (%)         | 73     | 25   |
| Segmentation   | BCV                      | Dice Coefficient | 80.2   | 25   |
| Segmentation   | AMOS MR                  | Dice             | 78.6   | 25   |

Showing 10 of 15 rows.
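The segmentation rows above report the Dice similarity coefficient, the standard overlap metric for medical image segmentation. A minimal pure-Python sketch of the metric on flattened binary masks (illustrative only, not the benchmark's evaluation code):

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient for two binary masks (flattened).

    Dice = 2 * |pred AND target| / (|pred| + |target|).
    Returns 1.0 when both masks are empty, a common convention.
    """
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0
    return 2.0 * intersection / total

# Example: prediction overlaps the ground truth in 1 of 2 foreground voxels.
score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])  # 2*1 / (2+1) ≈ 0.667
```

The classification rows instead use AUC (area under the ROC curve), which is threshold-free and so better suited to the imbalanced trauma and hemorrhage datasets.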
