
MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning

About

Reasoning is a critical frontier for advancing medical image analysis, where transparency and trustworthiness play a central role in both clinician trust and regulatory approval. Although medical vision-language models (VLMs) show promise for radiological tasks, most existing VLMs merely produce final answers without revealing the underlying reasoning. To address this gap, we introduce MedVLM-R1, a medical VLM that explicitly generates natural-language reasoning to enhance transparency and trustworthiness. Instead of relying on supervised fine-tuning (SFT), which often overfits to training distributions and fails to foster genuine reasoning, MedVLM-R1 employs a reinforcement learning framework that incentivizes the model to discover human-interpretable reasoning paths without using any reasoning references. Despite limited training data (600 visual question answering samples) and model size (2B parameters), MedVLM-R1 boosts accuracy from 55.11% to 78.22% across MRI, CT, and X-ray benchmarks, outperforming larger models trained on over a million samples. It also demonstrates robust domain generalization on out-of-distribution tasks. By unifying medical image analysis with explicit reasoning, MedVLM-R1 marks a pivotal step toward trustworthy and interpretable AI in clinical practice. The inference model is available at: https://huggingface.co/JZPeterPan/MedVLM-R1.
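The abstract describes reinforcement learning that rewards the model for producing human-interpretable reasoning without any reasoning references. The sketch below illustrates one common way such reference-free rewards are built in R1-style training: a format reward for wrapping reasoning and answer in explicit tags, plus an accuracy reward that checks only the final answer against the ground-truth choice. The tag names and reward weights here are illustrative assumptions, not the paper's confirmed implementation.

```python
import re

def format_reward(completion: str) -> float:
    # 1.0 when the output wraps its reasoning in <think> tags and its
    # answer in <answer> tags, rewarding explicit, inspectable reasoning.
    pattern = r"<think>.*?</think>\s*<answer>.*?</answer>"
    return 1.0 if re.fullmatch(pattern, completion.strip(), re.DOTALL) else 0.0

def accuracy_reward(completion: str, ground_truth: str) -> float:
    # 1.0 when the extracted final answer matches the reference choice.
    # Only the answer is checked; the reasoning text itself is never
    # compared against a reference.
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip().lower() == ground_truth.strip().lower() else 0.0

def total_reward(completion: str, ground_truth: str) -> float:
    # Equal weighting of the two terms is an assumption for illustration.
    return format_reward(completion) + accuracy_reward(completion, ground_truth)
```

Because neither term inspects the reasoning content, the policy is free to discover its own reasoning paths; a policy-gradient method (e.g. GRPO) then reinforces sampled completions with higher total reward.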

Jiazhen Pan, Che Liu, Junde Wu, Fenglin Liu, Jiayuan Zhu, Hongwei Bran Li, Chen Chen, Cheng Ouyang, Daniel Rueckert • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-Modal Visual Question Answering (MMVQA) | CT-RATE (val) | Accuracy | 26.58 | 57 |
| Multi-Modal Visual Question Answering (MMVQA) | RAD-ChestCT (val) | Accuracy | 26.11 | 57 |
| Medical Visual Question Answering | SLAKE (test) | -- | -- | 29 |
| Multimodal Dental Image Analysis | MMOral-Uni 1.0 (test) | Loc Score | 10.1 | 28 |
| Radiology Report Generation | CHEXPERT Plus | ROUGE-L | 20.9 | 22 |
| Medical Visual Question Answering | OmniMedVQA | Accuracy | 77.38 | 18 |
| Medical Visual Question Answering | Medical VQA Suite (MMMU-Med, VQA-RAD, SLAKE, PathVQA, PMC-VQA, OmniMedVQA, MedXpertQA) | MMMU-Med Score | 35.2 | 18 |
| Medical Multi-task Visual Reasoning | MedAD-38K (test) | Anatomy ID Accuracy | 91.17 | 17 |
| Medical Report Generation | IU-Xray | ROUGE-L | 22.7 | 17 |
| Grounded ECG Interpretation | ECG-Grounding | Diagnosis Accuracy | 16.62 | 17 |

Showing 10 of 27 rows.
