
Self-Paced Contrastive Learning for Semi-supervised Medical Image Segmentation with Meta-labels

About

Pre-training a recognition model with contrastive learning on a large unlabeled dataset has shown great potential to boost the performance of a downstream task, e.g., image classification. However, in domains such as medical imaging, collecting unlabeled data can be challenging and expensive. In this work, we propose to adapt contrastive learning to work with meta-label annotations, improving model performance on medical image segmentation even when no additional unlabeled data is available. Meta-labels, such as the location of a 2D slice in a 3D MRI scan or the type of acquisition device, often come for free during the acquisition process. We use the meta-labels both to pre-train the image encoder and to regularize semi-supervised training, in which only a reduced set of annotated data is used. Finally, to fully exploit these weak annotations, a self-paced learning approach is used to guide training and discriminate useful labels from noise. Results on three different medical image segmentation datasets show that our approach: i) greatly boosts the performance of a model trained on a few scans, ii) outperforms previous contrastive and semi-supervised approaches, and iii) approaches the performance of a model trained on the full data.
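The core idea of the pre-training step is a supervised contrastive loss in which samples sharing a meta-label (e.g., the same slice-position bin) act as positives. The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch of a SupCon-style loss driven by meta-labels, with all names (`metalabel_contrastive_loss`, the temperature value) chosen for illustration:

```python
import numpy as np

def metalabel_contrastive_loss(embeddings, meta_labels, temperature=0.1):
    """SupCon-style contrastive loss using meta-labels as positives.

    embeddings:  (N, D) array, assumed L2-normalized.
    meta_labels: (N,) integer array; samples with the same meta-label
                 (e.g. same slice-position bin) are pulled together.
    """
    n = embeddings.shape[0]
    sim = embeddings @ embeddings.T / temperature   # pairwise cosine sims
    np.fill_diagonal(sim, -np.inf)                  # exclude self-pairs
    # log-softmax over each anchor's row
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = meta_labels[:, None] == meta_labels[None, :]
    np.fill_diagonal(same, False)
    pos_counts = same.sum(axis=1)
    # mean log-probability over positive pairs for each anchor
    pos_log_prob = np.where(same, log_prob, 0.0).sum(axis=1)
    loss_per_anchor = -pos_log_prob / np.maximum(pos_counts, 1)
    # anchors with no positive pair contribute nothing
    return loss_per_anchor[pos_counts > 0].mean()
```

In this sketch the loss is zero-cost to supervise: the meta-labels already exist in the DICOM/scan metadata, so no manual annotation is required for the pre-training stage.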

Jizong Peng, Ping Wang, Christian Desrosiers, Marco Pedersoli • 2021

Related benchmarks

Task                         Dataset    Result                   Rank
Medical Image Segmentation   PROMISE12  Dice Coefficient: 74.02  23
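The benchmark result above is a Dice coefficient reported on a 0-100 scale. For reference, a minimal NumPy sketch of how Dice is computed between a predicted and a ground-truth binary mask (the function name is illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice score between two binary masks: 2|A∩B| / (|A| + |B|).

    Ranges from 0 (no overlap) to 1 (perfect overlap); the small eps
    keeps the ratio defined when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A score of 74.02 in the table thus corresponds to a Dice of roughly 0.74 between predicted and reference segmentations.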
