
Making medical vision-language models think causally across modalities with retrieval-augmented cross-modal reasoning

About

Medical vision-language models (VLMs) achieve strong performance in diagnostic reporting and image-text alignment, yet their underlying reasoning remains fundamentally correlational: they rely on superficial statistical associations that fail to capture the causal pathophysiological mechanisms central to clinical decision-making. This limitation makes them fragile, prone to hallucination, and sensitive to dataset biases. Retrieval-augmented generation (RAG) offers a partial remedy by grounding predictions in external knowledge, but conventional RAG retrieves by semantic similarity alone and can therefore introduce new spurious correlations. We propose Multimodal Causal Retrieval-Augmented Generation, a framework that integrates causal inference principles with multimodal retrieval. It retrieves clinically relevant exemplars and causal graphs from external sources and conditions the model's reasoning on counterfactual and interventional evidence rather than on correlations alone. Applied to radiology report generation, diagnosis prediction, and visual question answering, the framework improves factual accuracy, robustness to distribution shifts, and interpretability. Our results highlight causal retrieval as a scalable path toward medical VLMs that think beyond pattern matching, enabling trustworthy multimodal reasoning in high-stakes clinical settings.

Weiqin Yang, Haowen Xue, Qingyi Peng, Hexuan Hu, Qian Huang, Tingbo Zhang • 2026
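
The paper's implementation details are not given on this page, but the core idea in the abstract, retrieval scored by causal relevance rather than semantic similarity alone, can be sketched in a few lines of Python. The sketch below is a minimal illustration under assumed interfaces: the Exemplar record, the causal_overlap score, and the alpha blending weight are hypothetical stand-ins, not the authors' released code.

from dataclasses import dataclass

@dataclass
class Exemplar:
    image_embedding: list[float]  # precomputed visual features
    report: str                   # paired report text
    causal_graph: dict            # e.g. {"consolidation": ["pneumonia"]}

def cosine(u, v):
    # plain cosine similarity; stands in for any embedding-space score
    num = sum(a * b for a, b in zip(u, v))
    den = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return num / den if den else 0.0

def causal_overlap(query_findings, graph):
    # fraction of query findings appearing as causes or effects in the
    # exemplar's causal graph; a crude proxy for interventional relevance
    nodes = set(graph) | {e for effects in graph.values() for e in effects}
    return len(set(query_findings) & nodes) / max(len(query_findings), 1)

def retrieve(query_emb, query_findings, corpus, k=3, alpha=0.5):
    # blend semantic similarity with causal relevance instead of
    # ranking by similarity alone
    scored = [
        (alpha * cosine(query_emb, ex.image_embedding)
         + (1 - alpha) * causal_overlap(query_findings, ex.causal_graph), ex)
        for ex in corpus
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [ex for _, ex in scored[:k]]

def build_prompt(query_findings, retrieved):
    # condition the generator on the retrieved causal evidence
    evidence = "\n".join(
        f"- {ex.report} | causal links: {ex.causal_graph}" for ex in retrieved
    )
    return (
        f"Findings: {', '.join(query_findings)}\n"
        f"Retrieved causal evidence:\n{evidence}\n"
        "Write a report grounded in the causal links above."
    )

Note that setting alpha = 1 recovers conventional similarity-only RAG, which makes the causal term a natural ablation.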

Related benchmarks

Task                         Dataset    Metric      Result  Rank
Radiology Report Generation  MIMIC-CXR  ROUGE-L      15.05    32
Radiology Report Generation  IU-Xray    BLEU Score   35.02     9
Radiology VQA                IU-Xray    Accuracy     90.12     9
Radiology VQA                MIMIC-CXR  Accuracy     84.91     9
