3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation
About
This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from Ronneberger et al. by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.
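The abstract mentions on-the-fly elastic deformations as the key data-augmentation step during training. A minimal numpy sketch of this idea is shown below: random displacement vectors are drawn on a coarse grid and spread over the full volume, then the volume is resampled at the displaced coordinates. The paper interpolates the coarse grid with B-splines; this sketch uses nearest-neighbour upsampling and rounding for brevity, so the function name, parameters, and interpolation choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def elastic_deform_3d(volume, grid=3, sigma=4.0, rng=None):
    """Apply a random smooth deformation to a 3D volume.

    Sketch of on-the-fly elastic augmentation: random offsets are drawn
    on a coarse grid x grid x grid lattice (the paper uses 3x3x3 with
    B-spline interpolation; here we upsample with nearest neighbours).
    """
    rng = np.random.default_rng() if rng is None else rng
    shape = np.array(volume.shape)
    # Random displacement vectors on the coarse grid, std-dev `sigma` voxels.
    coarse = rng.normal(0.0, sigma, size=(3, grid, grid, grid))
    # Upsample each displacement component to the full volume (blocky
    # nearest-neighbour upsampling stands in for B-spline interpolation).
    reps = -(-shape // grid)  # ceil division per axis
    disp = np.stack([
        np.kron(c, np.ones(tuple(reps)))[:shape[0], :shape[1], :shape[2]]
        for c in coarse
    ])
    # Displaced sampling coordinates, clipped to stay inside the volume.
    coords = np.indices(volume.shape) + disp
    coords = [np.clip(np.rint(c), 0, s - 1).astype(int)
              for c, s in zip(coords, shape)]
    return volume[tuple(coords)]
```

In a training loop, the same displacement field would be applied to the image and to its sparse annotation so that voxel labels stay aligned with the deformed intensities.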
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | S3DIS (Area 5) | mIoU | 54.93 | 799 |
| Part segmentation | ShapeNet Part | mIoU | 84.6 | 46 |
| Abdominal multi-organ segmentation | BTCV | Spleen | 88.34 | 35 |
| Shape classification | ALAN (test) | AUC | 0.8683 | 30 |
| Liver segmentation | LiTS | Dice score | 93.46 | 29 |
| Brain tissue segmentation | iSeg 2019 (test) | Dice (CSF) | 89.7 | 28 |
| Brain tissue segmentation | MRBrainS 2013 (test) | GM DSC | 85.44 | 25 |
| Brain tumor volume segmentation | BraTS | Dice (ET) | 68.91 | 22 |
| Brain tissue segmentation | iSEG challenge 2017 (test) | CSF DSC | 94.39 | 20 |
| Tumor segmentation | LiTS | Dice | 69.12 | 17 |