
Learning General-Purpose Biomedical Volume Representations using Randomized Synthesis

About

Current volumetric biomedical foundation models struggle to generalize as public 3D datasets are small and do not cover the broad diversity of medical procedures, conditions, anatomical regions, and imaging protocols. We address this by creating a representation learning method that instead anticipates strong domain shifts at training time itself. We first propose a data engine that synthesizes highly variable training samples that would enable generalization to new biomedical contexts. To then train a single 3D network for any voxel-level task, we develop a contrastive learning method that pretrains the network to be stable against nuisance imaging variation simulated by the data engine, a key inductive bias for generalization. This network's features can be used as robust representations of input images for downstream tasks and its weights provide a strong, dataset-agnostic initialization for finetuning on new datasets. As a result, we set new standards across both multimodality registration and few-shot segmentation, a first for any 3D biomedical vision model, all without (pre-)training on any existing dataset of real images.
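The pretraining objective described above rewards features that stay stable when the same underlying volume is re-rendered with different simulated nuisance imaging variation. A minimal numpy sketch of such a contrastive invariance objective is shown below, using the standard InfoNCE formulation; the function name, shapes, and temperature are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss between two batches of features.

    z1, z2: (N, D) arrays; row i of z1 and row i of z2 are features of
    two synthetic "views" of the same underlying volume that differ
    only in simulated nuisance imaging variation.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) pairwise similarities
    # Positive pairs sit on the diagonal; other entries act as negatives
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls the two views of each volume together in feature space while pushing apart features of different volumes, which is the inductive bias the paper relies on for generalization.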

Neel Dey, Benjamin Billot, Hallee E. Wong, Clinton J. Wang, Mengwei Ren, P. Ellen Grant, Adrian V. Dalca, Polina Golland • 2024

Related benchmarks

Task            Dataset                    Metric            Result  Rank
Segmentation    LiTS                       Dice Score        52.1    45
Segmentation    ACDC                       DSC               73.1    41
Classification  Spleen Trauma 27 (test)    AUC               65.7    27
Classification  RSNA ICH 19 (test)         AUC               63.8    27
Classification  Kidney Trauma 27 (test)    AUC               54.4    27
Classification  Liver Trauma 27 (test)     AUC               58.5    27
Segmentation    BCV                        Dice Coefficient  81      25
Segmentation    AMOS MR                    Dice              78.8    25
Segmentation    AutoPET                    Dice Score        62.8    25
Segmentation    BraTS T1CE                 Dice Score        66.7    25

(10 of 16 rows shown)
