
3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation

About

This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from Ronneberger et al. by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.
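The on-the-fly elastic deformation mentioned above is the key augmentation for training from few annotated volumes: a random, smoothly varying displacement field warps each training volume so the network sees a new plausible shape every iteration. The following is a minimal sketch of that idea using NumPy and SciPy; the function name, the `alpha`/`sigma` parameters, and the use of linear interpolation are illustrative assumptions, not the authors' exact implementation (which uses deformations on a coarse control-point grid with B-spline interpolation).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform_3d(volume, alpha=10.0, sigma=4.0, seed=None):
    """Apply a random smooth elastic deformation to a 3D volume.

    Hypothetical helper illustrating the augmentation described in the
    paper: alpha scales the displacement magnitude, sigma controls its
    smoothness.
    """
    rng = np.random.default_rng(seed)
    shape = volume.shape
    # Three random displacement fields (one per axis), smoothed with a
    # Gaussian so the deformation is elastic rather than noisy.
    dz, dy, dx = [
        gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
        for _ in range(3)
    ]
    # Sample the input volume at the displaced coordinates
    # (trilinear interpolation via order=1).
    z, y, x = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = np.array([z + dz, y + dy, x + dx])
    return map_coordinates(volume, coords, order=1, mode="reflect")
```

In a training loop this would be applied jointly to the image and its (sparse) label volume, with nearest-neighbor interpolation (`order=0`) for the labels so class indices are not blended.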

\"Ozg\"un \c{C}i\c{c}ek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, Olaf Ronneberger• 2016

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Segmentation | S3DIS (Area 5) | mIoU | 54.93 | 799 |
| Part Segmentation | ShapeNet part | mIoU | 84.6 | 46 |
| Abdominal Multi-Organ Segmentation | BTCV | Spleen | 88.34 | 35 |
| Shape Classification | ALAN (test) | AUC | 0.8683 | 30 |
| Liver Segmentation | LiTS | Dice Score | 93.46 | 29 |
| Brain Tissue Segmentation | iSeg 2019 (test) | Dice (CSF) | 89.7 | 28 |
| Brain Tissue Segmentation | MRBrainS 2013 (test) | GM DSC | 85.44 | 25 |
| Brain Tumor Volume Segmentation | BraTS | Dice (ET) | 68.91 | 22 |
| Brain Tissue Segmentation | iSEG challenge 2017 (test) | CSF DSC | 94.39 | 20 |
| Tumor Segmentation | LiTS | Dice | 69.12 | 17 |

Showing 10 of 76 rows

Other info

Code
