
When Does RL Help Medical VLMs? Disentangling Vision, SFT, and RL Gains

About

Reinforcement learning (RL) is increasingly used to post-train medical Vision-Language Models (VLMs), yet it remains unclear whether RL improves medical visual reasoning or mainly sharpens behaviors already induced by supervised fine-tuning (SFT). We present a controlled study that disentangles these effects along three axes: vision, SFT, and RL. Using MedMNIST as a multi-modality testbed, we probe visual perception by benchmarking VLM vision towers against vision-only baselines, quantify reasoning support and sampling efficiency via Accuracy@1 versus Pass@K, and evaluate when RL closes the support gap and how gains transfer across modalities. We find that RL is most effective when the model already has non-trivial support (high Pass@K): it primarily sharpens the output distribution, improving Accuracy@1 and sampling efficiency, while SFT expands support and makes RL effective. Based on these findings, we propose a boundary-aware recipe and instantiate it by RL post-training an OctoMed-initialized model on a small, balanced subset of PMC multiple-choice VQA, achieving strong average performance across six medical VQA benchmarks.
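The abstract contrasts Accuracy@1 (the model's single-sample accuracy) with Pass@K (whether any of K samples is correct, a proxy for the model's "support"). The paper does not spell out its estimator, but a minimal sketch using the standard unbiased Pass@K estimator (computed from n samples per question, of which c are correct) would look like this; the function names are illustrative, not from the paper:

```python
from math import comb


def acc_at_1(n: int, c: int) -> float:
    """Accuracy@1: fraction of sampled answers that are correct."""
    return c / n


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@K estimator: probability that at least one of k
    answers drawn (without replacement) from n samples is correct,
    given that c of the n samples are correct."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must contain a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

A large gap between `pass_at_k(n, c, 8)` and `acc_at_1(n, c)` indicates the model has support for the right answer but samples it unreliably, which is the regime where the paper finds RL most effective.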

Ahmadreza Jeddi, Kimia Shaban, Negin Baghbanzadeh, Natasha Sharan, Abhishek Moturu, Elham Dolatabadi, Babak Taati • 2026

Related benchmarks

| Task                              | Dataset | Accuracy | Rank |
|-----------------------------------|---------|----------|------|
| Medical Visual Question Answering | Slake   | 88       | 239  |
| Medical Visual Question Answering | VQA-RAD | 79       | 198  |
| Medical Visual Question Answering | PathVQA | 65.5     | 50   |
| Medical Visual Question Answering | PMC     | 59       | 18   |
| Medical Visual Question Answering | MedX-M  | 34.5     | 18   |
| Multimodal Medical Understanding  | MMMU    | 62.94    | 7    |

Other info

GitHub
