
RET-CLIP: A Retinal Image Foundation Model Pre-trained with Clinical Diagnostic Reports

About

Vision-language foundation models are increasingly investigated in computer vision and natural language processing, yet their exploration in ophthalmology and broader medical applications remains limited. A key challenge is the scarcity of labeled data for training such foundation models. To address this issue, this paper develops a CLIP-style retinal image foundation model. Our model, RET-CLIP, is trained on a dataset of 193,865 patients to extract general features of color fundus photographs (CFPs), employing a tripartite optimization strategy that operates at the left-eye, right-eye, and patient levels to reflect real-world clinical scenarios. Extensive experiments demonstrate that RET-CLIP outperforms existing benchmarks across eight diverse datasets spanning four critical diagnostic categories: diabetic retinopathy, glaucoma, multiple-disease diagnosis, and multi-label classification of multiple diseases, which demonstrates the performance and generality of our foundation model. The source code and pre-trained model are available at https://github.com/sStonemason/RET-CLIP.
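The tripartite optimization described above can be illustrated with a minimal sketch: a standard CLIP-style symmetric contrastive (InfoNCE) loss applied at the left-eye, right-eye, and patient levels, with the patient embedding taken here as the mean of the two eye embeddings. This is an assumption-laden illustration, not the RET-CLIP implementation; all function names and the patient-level pooling choice are hypothetical.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric CLIP-style InfoNCE loss between image and text embeddings."""
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (batch, batch) similarity matrix
    labels = np.arange(len(img))        # matching pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

def tripartite_loss(left_emb, right_emb, report_emb):
    """Sum the contrastive loss at left-eye, right-eye, and patient level.

    The patient-level embedding is taken as the mean of the two eye
    embeddings -- one simple pooling choice, not necessarily the paper's.
    """
    patient_emb = 0.5 * (left_emb + right_emb)
    return (clip_contrastive_loss(left_emb, report_emb)
            + clip_contrastive_loss(right_emb, report_emb)
            + clip_contrastive_loss(patient_emb, report_emb))
```

In this sketch, each clinical report is contrasted against the left eye, the right eye, and the pooled patient representation of the same patient, so all three views are pulled toward the same report embedding while being pushed away from other patients' reports in the batch.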

Jiawei Du, Jia Guo, Weihang Zhang, Shengzhu Yang, Hanruo Liu, Huiqi Li, Ningli Wang • 2024

Related benchmarks

Task                                  Dataset       Result        Rank
Classification                        RFMiD (test)  AUC 86.12     18
Classification                        ADAM (test)   AUC 0.9327    18
Classification                        REFUGE (test) AUC 90.46     18
Medical Image Classification          PALM          AUC 98.67     18
Retinal Fundus Image Classification   FIVES         AUC 92.04     18
Classification                        RIM-ONE       AUC 92.58     18
Diabetic Retinopathy Detection        IDRiD DR      AUC 79.32     18
Classification                        OIA-DDR       AUC 83.68     18
Diabetic Macular Edema Detection      IDRiD DME     AUC 70.14     18
Diabetic Retinopathy Classification   MESSIDOR-2    AUROC 0.951   7

Showing 10 of 19 rows
