PRIOR: Prototype Representation Joint Learning from Medical Images and Reports
About
Contrastive-learning-based vision-language joint pre-training has emerged as a successful representation learning strategy. In this paper, we present a prototype representation learning framework incorporating both global and local alignment between medical images and reports. In contrast to standard global multi-modality alignment methods, we employ a local alignment module for fine-grained representation. Furthermore, a cross-modality conditional reconstruction module is designed to interchange information across modalities in the training phase by reconstructing masked images and reports. For reconstructing long reports, a sentence-wise prototype memory bank is constructed, enabling the network to focus on low-level localized visual and high-level clinical linguistic features. Additionally, a non-auto-regressive generation paradigm is proposed for reconstructing non-sequential reports. Experimental results on five downstream tasks, including supervised classification, zero-shot classification, image-to-text retrieval, semantic segmentation, and object detection, show that the proposed method outperforms other state-of-the-art methods across multiple datasets and under different dataset-size settings. The code is available at https://github.com/QtacierP/PRIOR.
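As background for the global alignment described above, here is a minimal sketch of the standard symmetric image-text contrastive (InfoNCE) objective that frameworks of this kind build on. This is an illustrative NumPy implementation, not the paper's actual code; the function names, shapes, and temperature value are assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, txt_emb: (batch, dim) arrays; row i of each is a matched
    image-report pair. Matched pairs sit on the diagonal of the
    similarity matrix and act as positives; all other rows in the
    batch act as negatives.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature            # (batch, batch) cosine similarities
    labels = np.arange(len(logits))               # positives on the diagonal

    def xent(lg):
        # cross-entropy of the diagonal under a row-wise softmax
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Local alignment modules such as PRIOR's extend this idea to finer-grained units (image sub-regions and report sentences) rather than whole-image/whole-report embeddings.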
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Detection | RSNA | mAP (%) | 25.2 | 99 |
| Object Detection | Object-CXR | mAP | 19.8 | 58 |
| Classification | SIIM | AUC | 92.3 | 54 |
| Medical Image Classification | COVID | Accuracy | 86.27 | 54 |
| Medical Image Segmentation | RSNA Pneumonia | Dice Score | 74.43 | 49 |
| Image Classification | RSNA (test) | AUC | 89.19 | 49 |
| Classification | CheXpert (test) | AUC-ROC | 88.61 | 48 |
| Image Classification | SIIM-ACR (test) | AUROC | 92.49 | 45 |
| Linear Classification | COVIDx (test) | Accuracy | 91 | 39 |
| Linear Classification | CheXpert (test) | AUC | 0.886 | 39 |