
MedLVR: Latent Visual Reasoning for Reliable Medical Visual Question Answering

About

Medical vision-language models (VLMs) have shown strong potential for medical visual question answering (VQA), yet their reasoning remains largely text-centric: images are encoded once as static context, and subsequent inference is dominated by language. This paradigm is fundamentally limited in clinical scenarios, where accurate answers often depend on subtle, localized visual evidence that cannot be reliably preserved in static embeddings. We propose MedLVR, a latent visual reasoning framework that introduces an explicit visual evidence state into autoregressive decoding. Instead of relying solely on text-based intermediate reasoning, MedLVR interleaves a short latent reasoning segment within the decoder by reusing hidden states as continuous latent steps, enabling iterative preservation and refinement of query-relevant visual evidence before answer generation. To support effective visual supervision, we adopt a two-stage training strategy: region of interest (ROI)-supervised fine-tuning aligns latent states with clinically relevant image evidence, and Visual-Latent Policy Optimization (VLPO) further optimizes latent reasoning and answer generation under outcome-level rewards. Experiments on OmniMedVQA and five external medical VQA benchmarks show that MedLVR consistently outperforms recent reasoning baselines and improves the average score over the Qwen2.5-VL-7B backbone from 48.3% to 53.4%. These results show that latent visual reasoning provides an effective mechanism for preserving diagnostically relevant visual evidence and improving the reliability of medical VQA.
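The core idea — interleaving a latent segment in autoregressive decoding by feeding hidden states back as continuous inputs instead of embedded tokens — can be illustrated with a minimal toy sketch. This is not the paper's architecture: the recurrent `step` function, the random weights, and all dimensions here are stand-ins assumed purely for illustration; a real implementation would do this inside a transformer decoder's forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, D = 16, 8
E = rng.normal(size=(VOCAB, D))        # toy token-embedding table
W = rng.normal(size=(D, D)) * 0.1      # toy decoder weights (illustrative)

def step(h, x):
    """One toy decoder step: mix previous hidden state with current input."""
    return np.tanh(h @ W + x)

def decode(prompt_ids, n_latent=3, n_answer=4):
    h = np.zeros(D)
    # 1) Consume the prompt as ordinary discrete-token embeddings.
    for t in prompt_ids:
        h = step(h, E[t])
    # 2) Latent reasoning segment: reuse the hidden state itself as the
    #    next input (a continuous latent step, no token is sampled).
    for _ in range(n_latent):
        h = step(h, h)
    # 3) Answer generation: ordinary greedy decoding over the vocabulary.
    out = []
    for _ in range(n_answer):
        tok = int(np.argmax(E @ h))
        out.append(tok)
        h = step(h, E[tok])
    return out

print(decode([1, 2, 3]))
```

The only change from vanilla greedy decoding is stage 2: the latent segment refines the hidden state continuously before any answer token is emitted, which is where MedLVR's ROI supervision and VLPO rewards would attach.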

Suyang Xi, Songtao Hu, Yuxiang Lai, Wangyun Dan, Yaqi Liu, Shansong Wang, Xiaofeng Yang • 2026

Related benchmarks

Task                               Dataset                        Metric       Result  Rank
Medical Visual Question Answering  Slake                          Accuracy     66.4    239
Medical Visual Question Answering  VQA-RAD                        Accuracy     65.9    198
Medical Visual Question Answering  PMC-VQA                        Accuracy     53.6    74
Medical Visual Question Answering  OmniMedVQA (test)              CT Accuracy  80.4    50
Medical Visual Question Answering  MedXpertQA                     Accuracy     24.3    44
Medical Visual Question Answering  MMMU Health & Medicine (test)  Accuracy     56.6    39
