When Does RL Help Medical VLMs? Disentangling Vision, SFT, and RL Gains
About
Reinforcement learning (RL) is increasingly used to post-train medical Vision-Language Models (VLMs), yet it remains unclear whether RL improves medical visual reasoning or mainly sharpens behaviors already induced by supervised fine-tuning (SFT). We present a controlled study that disentangles these effects along three axes: vision, SFT, and RL. Using MedMNIST as a multi-modality testbed, we probe visual perception by benchmarking VLM vision towers against vision-only baselines, quantify reasoning support and sampling efficiency via Accuracy@1 versus Pass@K, and evaluate when RL closes the support gap and how gains transfer across modalities. We find that RL is most effective when the model already has non-trivial support (high Pass@K): it primarily sharpens the output distribution, improving Acc@1 and sampling efficiency, while SFT expands support and makes RL effective. Based on these findings, we propose a boundary-aware recipe and instantiate it by RL post-training an OctoMed-initialized model on a small, balanced subset of PMC multiple-choice VQA, achieving strong average performance across six medical VQA benchmarks.
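The Accuracy@1 versus Pass@K comparison above can be made concrete with the standard unbiased Pass@K estimator (1 − C(n−c, k)/C(n, k), given c correct answers among n samples); Acc@1 is the k = 1 special case, c/n. A minimal sketch (function name and example numbers are illustrative, not from the paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@K: probability that at least one of k samples
    drawn (without replacement) from n attempts, c of them correct,
    is correct. Equals c/n when k == 1 (i.e. Accuracy@1)."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative numbers: 3 correct out of 10 samples.
acc_at_1 = pass_at_k(10, 3, 1)   # 0.3 = c/n
pass_at_5 = pass_at_k(10, 3, 5)  # 1 - C(7,5)/C(10,5) ≈ 0.9167
```

A model can thus have low Acc@1 but high Pass@K ("non-trivial support"): correct answers are in the output distribution but rarely ranked first, which is the regime where the paper finds RL most effective.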
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Visual Question Answering | Slake | Accuracy | 88 | 239 |
| Medical Visual Question Answering | VQA-RAD | Accuracy | 79 | 198 |
| Medical Visual Question Answering | PathVQA | Accuracy | 65.5 | 50 |
| Medical Visual Question Answering | PMC | Accuracy | 59 | 18 |
| Medical Visual Question Answering | MedX-M | Accuracy | 34.5 | 18 |
| Multimodal Medical Understanding | MMMU | Accuracy | 62.94 | 7 |