
PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical Documents

About

Foundation models trained on large-scale datasets have recently surged in CV and NLP. In contrast, development in the biomedical domain lags far behind due to data scarcity. To address this issue, we build and release PMC-OA, a biomedical dataset with 1.6M image-caption pairs collected from PubMed Central's Open Access subset, 8 times larger than previous datasets. PMC-OA covers diverse modalities and diseases, with the majority of the image-caption samples aligned at a finer-grained level, i.e., subfigure and subcaption. Pretraining a CLIP-style model on PMC-OA, our model, named PMC-CLIP, achieves state-of-the-art results on various downstream tasks, including image-text retrieval on ROCO, MedMNIST image classification, and medical VQA, e.g., +8.1% R@10 on image-text retrieval and +3.9% accuracy on image classification.
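The CLIP-style pretraining mentioned above uses a symmetric contrastive objective: image and text embeddings of matched pairs are pulled together while mismatched pairs in the batch are pushed apart. A minimal NumPy sketch of that objective is shown below; the function name, temperature value, and use of NumPy are illustrative assumptions, not PMC-CLIP's actual implementation.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of embeddings.

    img_emb, txt_emb: (N, D) arrays where matched image-caption
    pairs share the same row index. (Illustrative sketch only.)
    """
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    def xent_diag(l):
        # Cross-entropy with the diagonal (true pairs) as targets.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

As a sanity check, perfectly aligned pairs should score a much lower loss than a batch whose captions have been shuffled against the images:

```python
emb = np.eye(4)                     # 4 orthogonal, perfectly matched pairs
good = clip_contrastive_loss(emb, emb)
bad = clip_contrastive_loss(emb, emb[[1, 0, 3, 2]])  # shuffled captions
print(good < bad)                   # aligned pairs give the lower loss
```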

Weixiong Lin, Ziheng Zhao, Xiaoman Zhang, Chaoyi Wu, Ya Zhang, Yanfeng Wang, Weidi Xie • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Visual Question Answering | VQA-RAD | - | - | 198 |
| Visual Question Answering | VQA-RAD | Closed Accuracy | 84 | 64 |
| Image Classification | BreastMNIST | Accuracy | 81.89 | 64 |
| Image Classification | RSNA (test) | AUC | 64.59 | 59 |
| Medical Visual Question Answering | SLAKE (test) | Overall Accuracy | 84.3 | 56 |
| Multiple-choice Visual Question Answering | PMC-VQA (test) | Accuracy | 24.7 | 50 |
| Ultrasound Image Classification | GIST514-DB | Accuracy | 68.41 | 48 |
| Visual Question Answering | VQA-RAD (test) | Open-ended Accuracy | 67 | 46 |
| Anatomy-conditioned Image Retrieval | MIMIC-IR official (test) | Recall@3 | 16.83 | 44 |
| Classification | BreastMNIST | Accuracy | 87.82 | 39 |
Showing 10 of 88 rows
