Enhancing Medical Large Vision-Language Models via Alignment Distillation
About
Medical Large Vision-Language Models (Med-LVLMs) have shown promising results in clinical applications, but often suffer from hallucinated outputs due to misaligned visual understanding. In this work, we identify two fundamental limitations contributing to this issue: insufficient visual representation learning and poor visual attention alignment. To address these problems, we propose MEDALIGN, a simple, lightweight alignment distillation framework that transfers visual alignment knowledge from a domain-specific Contrastive Language-Image Pre-training (CLIP) model to Med-LVLMs. MEDALIGN introduces two distillation losses: a spatial-aware visual alignment loss based on visual token-level similarity structures, and an attention-aware distillation loss that guides attention toward diagnostically relevant regions. Extensive experiments on medical report generation and medical visual question answering (VQA) benchmarks show that MEDALIGN consistently improves both performance and interpretability, yielding more visually grounded outputs.
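The two distillation losses can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, tensor shapes, and the choice of KL divergence over row-wise similarity distributions are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def similarity_alignment_loss(student_tokens, teacher_tokens):
    """Spatial-aware visual alignment loss (sketch).

    Matches the token-level similarity structure of the Med-LVLM's visual
    tokens to that of the domain-specific CLIP teacher.
    student_tokens, teacher_tokens: (B, N, D) visual token embeddings.
    """
    s = F.normalize(student_tokens, dim=-1)
    t = F.normalize(teacher_tokens, dim=-1)
    sim_s = s @ s.transpose(1, 2)  # (B, N, N) token-to-token similarities
    sim_t = t @ t.transpose(1, 2)
    # KL divergence between row-wise similarity distributions
    log_p = F.log_softmax(sim_s, dim=-1)
    q = F.softmax(sim_t, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean")

def attention_distillation_loss(student_attn, teacher_attn):
    """Attention-aware distillation loss (sketch).

    Guides the student's attention over visual tokens toward the teacher's
    saliency over diagnostically relevant regions.
    student_attn, teacher_attn: (B, N) attention distributions (rows sum to 1).
    """
    log_p = torch.log(student_attn + 1e-8)  # epsilon for numerical stability
    return F.kl_div(log_p, teacher_attn, reduction="batchmean")
```

Both losses vanish when student and teacher structures coincide, so they can be added to the standard language-modeling objective as lightweight auxiliary terms.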
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Visual Question Answering | SLAKE closed-end | Accuracy | 92.39 | 54 |
| Medical Visual Question Answering | VQA-RAD closed-end | Accuracy | 78.74 | 45 |
| Medical Visual Question Answering | PathVQA closed-end | Accuracy | 93.63 | 35 |
| Medical Visual Question Answering | SLAKE Open | Accuracy | 86.85 | 26 |
| Medical Visual Question Answering | VQA-RAD Open | Accuracy | 43.75 | 26 |
| Medical Report Generation | MIMIC-CXR | BLEU | 4.76 | 22 |
| Medical Visual Question Answering | PathVQA Open | Accuracy | 38.65 | 22 |
| Medical Visual Question Answering | IU-Xray (Close) | Accuracy | 86.22 | 22 |
| Medical Visual Question Answering | OmniMedVQA Close | Accuracy | 93.6 | 22 |
| Medical Report Generation | IU-Xray | BLEU | 10.31 | 11 |