Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts
## About
Preserving maximal information is one of the principles of designing self-supervised learning methodologies. Contrastive learning pursues this goal implicitly, by contrasting image pairs. However, we believe that contrastive estimation alone is not fully optimal for information preservation, and that a complementary, explicit solution is necessary to preserve more information. From this perspective, we introduce Preservational Learning, which reconstructs diverse image contexts so that the learned representations preserve more information. Combined with the contrastive loss, this yields Preservational Contrastive Representation Learning (PCRL) for learning self-supervised medical representations. Under the pretraining-finetuning protocol, PCRL provides very competitive results, substantially outperforming both self-supervised and supervised counterparts on 5 classification/segmentation tasks.
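The idea of pairing an implicit (contrastive) objective with an explicit (reconstructive) one can be sketched as a combined loss. The snippet below is a minimal NumPy illustration, not the repository's implementation: `info_nce` stands in for the contrastive term, `reconstruction_loss` for the context-reconstruction term, and the weighting factor `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Illustrative InfoNCE loss: positives are matching rows of z1/z2."""
    # L2-normalize the embeddings
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    # numerically stable log-softmax; positives sit on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def reconstruction_loss(recon, target):
    """Mean squared error between reconstructed and target contexts."""
    return np.mean((recon - target) ** 2)

def pcrl_style_loss(z1, z2, recon, target, lam=1.0):
    """Contrastive term plus an explicit reconstruction term (sketch)."""
    return info_nce(z1, z2) + lam * reconstruction_loss(recon, target)
```

Here the reconstruction term pushes the encoder to retain information that the contrastive term alone may discard, which is the stated motivation for Preservational Learning.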
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Medical Image Segmentation | MM-WHS (test) | Dice Score: 86.58 | 62 |
| Multi-organ Segmentation | BTCV (test) | Spl: 95.73 | 55 |
| Liver Segmentation | LiTS | Dice Score: 93.87 | 29 |
| Image Classification | NIH ChestX-ray | -- | 21 |
| Medical Image Segmentation | MSD Spleen (test) | Dice Score: 94.32 | 18 |
| Brain Tumor Segmentation | BraTS 21 | Dice TC: 81.96 | 14 |
| Classification | CC-CCII | Accuracy: 88.72 | 12 |