Enhancing Medical Large Vision-Language Models via Alignment Distillation

About

Medical Large Vision-Language Models (Med-LVLMs) have shown promising results in clinical applications, but often suffer from hallucinated outputs due to misaligned visual understanding. In this work, we identify two fundamental limitations contributing to this issue: insufficient visual representation learning and poor visual attention alignment. To address these problems, we propose MEDALIGN, a simple, lightweight alignment distillation framework that transfers visual alignment knowledge from a domain-specific Contrastive Language-Image Pre-training (CLIP) model to Med-LVLMs. MEDALIGN introduces two distillation losses: a spatial-aware visual alignment loss based on visual token-level similarity structures, and an attention-aware distillation loss that guides attention toward diagnostically relevant regions. Extensive experiments on medical report generation and medical visual question answering (VQA) benchmarks show that MEDALIGN consistently improves both performance and interpretability, yielding more visually grounded outputs.
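To make the two distillation objectives described above more concrete, below is a minimal PyTorch sketch of how such losses could look. This is an illustrative assumption, not the paper's exact formulation: the tensor shapes, the use of cosine-similarity matrices for the spatial-aware loss, and the KL divergence for the attention-aware loss are all hypothetical choices consistent with the abstract's description.

```python
import torch
import torch.nn.functional as F

def spatial_alignment_loss(lvlm_tokens, clip_tokens):
    """Sketch of a spatial-aware visual alignment loss: match the token-level
    similarity structure of the Med-LVLM's visual tokens to that of a
    domain-specific CLIP visual encoder (hypothetical formulation).

    lvlm_tokens: (B, N, D1) visual token features from the Med-LVLM
    clip_tokens: (B, N, D2) patch features from the CLIP teacher
    """
    # L2-normalize, then build an N x N cosine-similarity matrix per image.
    s_student = F.normalize(lvlm_tokens, dim=-1) @ F.normalize(lvlm_tokens, dim=-1).transpose(1, 2)
    s_teacher = F.normalize(clip_tokens, dim=-1) @ F.normalize(clip_tokens, dim=-1).transpose(1, 2)
    # Penalize discrepancy between the student and teacher similarity structures.
    return F.mse_loss(s_student, s_teacher)

def attention_distillation_loss(lvlm_attn, clip_relevance, eps=1e-8):
    """Sketch of an attention-aware distillation loss: push the Med-LVLM's
    attention over visual tokens toward regions the CLIP teacher marks as
    relevant (hypothetical: KL divergence between normalized maps).

    lvlm_attn:      (B, N) attention weights over visual tokens
    clip_relevance: (B, N) teacher relevance scores, e.g. text-patch similarity
    """
    p_student = lvlm_attn / (lvlm_attn.sum(dim=-1, keepdim=True) + eps)
    p_teacher = clip_relevance / (clip_relevance.sum(dim=-1, keepdim=True) + eps)
    return F.kl_div((p_student + eps).log(), p_teacher, reduction="batchmean")

if __name__ == "__main__":
    # Hypothetical shapes: 196 visual tokens, arbitrary feature widths.
    lvlm = torch.randn(2, 196, 1024)
    clip = torch.randn(2, 196, 768)
    attn = torch.rand(2, 196)
    rel = torch.rand(2, 196)
    print(spatial_alignment_loss(lvlm, clip).item(),
          attention_distillation_loss(attn, rel).item())
```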

Aofei Chang, Ting Wang, Fenglong Ma • 2025

Related benchmarks

Task                              | Dataset            | Metric   | Result | Rank
Medical Visual Question Answering | SLAKE closed-end   | Accuracy | 92.39  | 54
Medical Visual Question Answering | VQA-RAD closed-end | Accuracy | 78.74  | 45
Medical Visual Question Answering | PathVQA closed-end | Accuracy | 93.63  | 35
Medical Visual Question Answering | SLAKE Open         | Accuracy | 86.85  | 26
Medical Visual Question Answering | VQA-RAD Open       | Accuracy | 43.75  | 26
Medical Report Generation         | MIMIC-CXR          | BLEU     | 4.76   | 22
Medical Visual Question Answering | PathVQA Open       | Accuracy | 38.65  | 22
Medical Visual Question Answering | IU-Xray (Close)    | Accuracy | 86.22  | 22
Medical Visual Question Answering | OmniMedVQA Close   | Accuracy | 93.6   | 22
Medical Report Generation         | IU-Xray            | BLEU     | 10.31  | 11
