MedXIAOHE: A Comprehensive Recipe for Building Medical MLLMs
About
We present MedXIAOHE, a medical vision-language foundation model designed to advance general-purpose medical understanding and reasoning in real-world clinical applications. MedXIAOHE achieves state-of-the-art performance across diverse medical benchmarks and surpasses leading closed-source multimodal systems on multiple capabilities. To achieve this, we propose an entity-aware continual pretraining framework that organizes heterogeneous medical corpora to broaden knowledge coverage and reduce long-tail gaps (e.g., rare diseases). For expert-level medical reasoning and interaction, MedXIAOHE incorporates diverse medical reasoning patterns via reinforcement learning and tool-augmented agentic training, enabling multi-step diagnostic reasoning with verifiable decision traces. To improve reliability in real-world use, MedXIAOHE integrates user-preference rubrics, evidence-grounded reasoning, and low-hallucination long-form report generation, with improved adherence to medical instructions. We release this report to document our practical design choices, scaling insights, and evaluation framework, and we hope it inspires further research.
Benchmark results
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Question Answering | HealthBench Hard | -- | -- | 16 |
| Medical Information Retrieval and Comparison | MedBrowseComp | Pass@1 | 29 | 4 |
| Medical Instruction Following | MulDimIF | Pass@1 | 78.7 | 4 |
| Medical Instruction Following | MedMTbench | Pass@1 | 0.6375 | 4 |
| Medical Multi-discipline Multimodal Understanding | MMMU Med (val) | Pass@1 | 87.53 | 4 |
| Medical Multi-discipline Multimodal Understanding | MMMU Pro Med | Pass@1 | 73.88 | 4 |
| Medical Question Answering | PubMedQA | Pass@1 | 86 | 4 |
| Medical Question Answering | MedQA MCMLE | Pass@1 | 96.21 | 4 |
| Medical Question Answering | MedQA USMLE | Pass@1 | 97.88 | 4 |
| Medical Question Answering | Medbullets op4 | Pass@1 | 95.78 | 4 |
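Most results above are Pass@1 scores. For reference, a minimal sketch of the standard unbiased pass@k estimator (in the style popularized by code-generation evaluations), assuming `n` sampled answers per question of which `c` are correct; the function name and interface are illustrative, not part of MedXIAOHE's released tooling:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k answers
    drawn (without replacement) from n samples, c of them correct,
    is correct. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer incorrect samples than k draws: a correct one is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 reduces to the fraction of correct samples, c / n.
print(pass_at_k(10, 9, 1))  # → 0.9
```

With `k = 1` this is simply per-question accuracy averaged over samples, which is how single-answer benchmark scores like those in the table are typically computed.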