
LLM-Guided Diagnostic Evidence Alignment for Medical Vision-Language Pretraining under Limited Pairing

About

Most existing CLIP-style medical vision-language pretraining methods rely on global or local alignment with substantial paired data. However, global alignment is easily dominated by non-diagnostic information, while local alignment fails to integrate key diagnostic evidence. As a result, these methods struggle to learn reliable diagnostic representations, which limits their applicability in medical scenarios with limited paired data. To address this issue, we propose an LLM-Guided Diagnostic Evidence Alignment method (LGDEA), which shifts the pretraining objective toward evidence-level alignment that is more consistent with the medical diagnostic process. Specifically, we leverage LLMs to extract key diagnostic evidence from radiology reports and construct a shared diagnostic evidence space, enabling evidence-aware cross-modal alignment. This allows LGDEA to effectively exploit abundant unpaired medical images and reports, substantially alleviating the reliance on paired data. Extensive experiments demonstrate that our method achieves consistent and significant improvements on phrase grounding, image-text retrieval, and zero-shot classification, and even rivals pretraining methods that rely on substantial paired data.
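The abstract gives no implementation details, but the core idea, aligning images with LLM-extracted evidence phrases in a shared evidence space, can be sketched. Below is a minimal PyTorch illustration assuming a symmetric InfoNCE-style contrastive loss; the module names, feature dimensions, and the pooling of evidence phrases are our own assumptions, not the authors' LGDEA implementation.

```python
# Minimal sketch of evidence-level contrastive alignment. Assumes an LLM has
# already extracted diagnostic evidence phrases from each report, and that a
# text encoder has pooled them into one feature vector per report. Both
# modalities are projected into a shared evidence space and aligned with a
# symmetric InfoNCE loss. All names and dimensions here are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidenceAlignment(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, evi_dim=512, temperature=0.07):
        super().__init__()
        # Projection heads into the shared diagnostic evidence space.
        self.img_proj = nn.Linear(img_dim, evi_dim)
        self.txt_proj = nn.Linear(txt_dim, evi_dim)
        self.temperature = temperature

    def forward(self, img_feats, evi_feats):
        # img_feats: (B, img_dim) pooled image features from a vision encoder.
        # evi_feats: (B, txt_dim) features of LLM-extracted evidence phrases,
        #            e.g. mean-pooled over the phrases of one report.
        z_img = F.normalize(self.img_proj(img_feats), dim=-1)
        z_evi = F.normalize(self.txt_proj(evi_feats), dim=-1)

        # Symmetric InfoNCE: matching image/evidence pairs are positives,
        # all other pairs in the batch serve as negatives.
        logits = z_img @ z_evi.t() / self.temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        loss_i2e = F.cross_entropy(logits, targets)
        loss_e2i = F.cross_entropy(logits.t(), targets)
        return (loss_i2e + loss_e2i) / 2

# Toy usage with random features standing in for encoder outputs.
model = EvidenceAlignment()
loss = model(torch.randn(8, 2048), torch.randn(8, 768))
print(loss.item())
```

Because the two modalities are compared only through the shared evidence space, unpaired images and unpaired reports can, in principle, each be aligned against the same evidence representations, which is how the abstract motivates reducing the reliance on paired data.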

Huimin Yan, Liang Bai, Xian Yang, Long Chen • 2026

Related benchmarks

Task                          Dataset          Result                         Rank
Medical Image Classification  COVID            Accuracy: 90.47                54
Image Classification          NIH ChestX-ray   Accuracy: 87.06                21
Classification                RSNA Pneumonia   Accuracy: 79.26                21
Classification                MIMIC 5x200      Accuracy: 80                   15
Image-Text Retrieval          MIMIC 5x200      Precision@1: 56.31             15
Phrase Grounding              MS-CXR           Atelectasis Accuracy: 0.8449   15
