
MedMO: Grounding and Understanding Multimodal Large Language Model for Medical Images

About

Multimodal large language models have advanced rapidly, but their adoption in medicine is constrained by limited domain coverage, imperfect modality alignment, and insufficient grounded reasoning. We introduce MedMO, a medical multimodal foundation model built on a general MLLM architecture and trained exclusively on large-scale domain-specific data. MedMO uses a multi-stage training recipe: cross-modal pretraining to align heterogeneous visual encoders with a medical language backbone; instruction tuning with multi-task supervision spanning captioning, VQA, report generation, retrieval, and bounding-box disease localization; and reinforcement learning with verifiable rewards that combine factuality checks with a box-level GIoU signal to improve spatial grounding and step-by-step reasoning in challenging clinical settings. Across modalities and tasks, MedMO surpasses strong open-source medical baselines. MedMO-8B-Next achieves consistent gains on VQA benchmarks, improving by 6.6% on average over Fleming-VL-8B, including gains of 6.0% on MMMU-Med, 9.8% on PMC-VQA, and 21.3% on MedXpertQA. On text-based QA, it improves by 14.4% over Fleming-VL-8B, driven by gains of 8.4% on MMLU-Med and 30.1% on MedQA. For medical report generation, it improves by 6.7% on MIMIC-CXR. MedMO-8B-Next also demonstrates strong grounding performance, reaching 56.1 IoU on Bacteria, a 47.8 IoU gain over Fleming-VL-8B. At the smaller scale, MedMO-4B-Next remains competitive and exceeds Fleming-VL-8B across VQA, QA, and report generation. Evaluations spanning radiology, ophthalmology, and pathology microscopy further confirm broad cross-modality generalization. The project is available at https://genmilab.github.io/MedMO-Page
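The abstract's box-level GIoU reward refers to Generalized IoU, a standard bounding-box similarity measure that, unlike plain IoU, stays informative (negative) even when predicted and reference boxes do not overlap. The exact reward formulation used by MedMO is not given here; the following is a minimal sketch of the standard GIoU computation for axis-aligned boxes in (x1, y1, x2, y2) format.

```python
def giou(box_a, box_b):
    """Generalized IoU between two axis-aligned boxes (x1, y1, x2, y2).

    Returns a value in [-1, 1]: 1 for identical boxes, and negative
    values when the boxes are disjoint (plain IoU would be 0 there).
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection area (zero if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union area.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest axis-aligned box enclosing both inputs.
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    c_area = (cx2 - cx1) * (cy2 - cy1)

    # GIoU penalizes the empty space inside the enclosing box.
    return iou - (c_area - union) / c_area
```

Because GIoU degrades smoothly with distance between non-overlapping boxes, it gives a usable learning signal for localization even when a model's predicted box misses the lesion entirely, which is why it is a common choice for grounding rewards and detection losses.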

Ankan Deria, Komal Kumar, Adinath Madhavrao Dukre, Eran Segal, Salman Khan, Imran Razzak• 2026

Related benchmarks

Task | Dataset | Result | Rank
Radiology Report Generation | CheXpert Plus | R-L: 23.6 | 37
Medical Visual Question Answering | Medical VQA Suite (MMMU-Med, VQA-RAD, SLAKE, PathVQA, PMC-VQA, OmniMedVQA, MedXpertQA) | MMMU-Med score: 64.6 | 18
Medical Question Answering | Medical Text QA Suite (MMLU-Med, PubMedQA, MedMCQA, MedQA, Medbullets, MedXpertQA, SGPQA) | MMLU-Med: 81.0 | 17
Medical Report Generation | IU-Xray | ROUGE-L: 31.1 | 17
Medical Report Generation | Med-Trinity | ROUGE-L: 37.0 | 8
Multi-view Grounding | MedSG | IoU: 75.8 | 6
Object Tracking | MedSG | IoU: 77.2 | 6
Referring Expression Grounding | MedSG | IoU (%): 70.1 | 6
Bacteria Detection | Bacteria | IoU: 56.1 | 5
Lesion Detection | DeepLesion | IoU (%): 38.5 | 5
