
G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training

About

Recently, medical vision-language pre-training (VLP) has made substantial progress in learning global visual representations from medical images and their paired radiology reports. However, real-world medical imaging tasks usually require finer-grained visual features; these include visual localization tasks (e.g., semantic segmentation, object detection) and visual grounding tasks. Current medical VLP methods struggle to learn such fine-grained features because they rely primarily on brute-force alignment between image patches and individual text tokens, which is suboptimal for downstream dense prediction tasks. In this work, we propose a new VLP framework, Global to Dense level representation learning (G2D), that achieves significantly improved granularity and more accurate grounding of the learned features compared to existing medical VLP approaches. In particular, G2D learns dense, semantically grounded image representations via a pseudo segmentation task run in parallel with the global vision-language alignment. Notably, generating the pseudo segmentation targets incurs no extra trainable parameters: they are obtained on the fly during VLP by a parameter-free processor. G2D achieves superior performance across 6 medical imaging tasks and 25 diseases, particularly in semantic segmentation, which demands fine-grained, semantically grounded image features. On this task, G2D surpasses peer models even when fine-tuned with just 1% of the training data, versus the 100% used by those models. The code is available at https://github.com/cheliu-computation/G2D-NeurIPS24/tree/main.
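To illustrate the "parameter-free processor" idea, here is a minimal sketch of how a pseudo segmentation target could be derived on the fly from an aggregated image-text attention map. This is an assumption-laden illustration, not the paper's exact pipeline: the function name `pseudo_mask`, the attention-map input, and the min-max-then-threshold rule are all hypothetical; the point is only that no trainable parameters are involved.

```python
import numpy as np

def pseudo_mask(attn: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn an aggregated attention map into a binary pseudo segmentation target.

    attn: (H, W) array of attention scores (hypothetical input format).
    The map is min-max normalised to [0, 1] and thresholded; no learned
    parameters are used, so the targets can be generated on the fly.
    """
    a = attn.astype(np.float64)
    a = (a - a.min()) / (a.max() - a.min() + 1e-8)  # min-max normalise
    return (a >= threshold).astype(np.uint8)        # binarise

# Toy example: a 4x4 ramp of attention scores
mask = pseudo_mask(np.arange(16, dtype=float).reshape(4, 4))
```

A dense decoder head supervised with such masks would then be trained alongside the global image-report alignment objective, which is the Global-to-Dense structure the abstract describes.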

Che Liu, Cheng Ouyang, Sibo Cheng, Anand Shah, Wenjia Bai, Rossella Arcucci• 2023

Related benchmarks

Task                       | Dataset                | Metric               | Result | Rank
Object Detection           | RSNA                   | mAP (%)              | 27.2   | 99
Semantic Segmentation      | SIIM                   | Dice Coefficient (%) | 68.4   | 96
Semantic Segmentation      | RSNA                   | Dice Score           | 76.9   | 90
Multi-Label Classification | ChestX-Ray14 (test)    | AUROC (%)            | 83.1   | 88
Object Detection           | Object-CXR             | mAP                  | 20.4   | 58
Classification             | SIIM                   | AUC                  | 89.7   | 54
Linear Classification      | CheXpert (1% train)    | AUC                  | 89.7   | 9
Linear Classification      | CheXpert (10% train)   | AUC                  | 90.4   | 9
Linear Classification      | CheXpert (100% train)  | AUC                  | 91.1   | 9
Linear Classification      | RSNA (1% train)        | AUC                  | 92.2   | 9
Showing 10 of 19 rows
