
Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning

About

Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities in understanding common visual elements, largely due to their large-scale datasets and advanced training strategies. However, their effectiveness in medical applications remains limited due to the inherent discrepancies between medical and general-domain data and tasks. Concretely, existing medical MLLMs face the following critical limitations: (1) limited coverage of medical knowledge beyond imaging, (2) heightened susceptibility to hallucinations due to suboptimal data curation processes, and (3) a lack of reasoning capabilities tailored to complex medical scenarios. To address these challenges, we first propose a comprehensive data curation procedure that (1) efficiently acquires rich medical knowledge data not only from medical imaging but also from extensive medical texts and general-domain data; and (2) synthesizes accurate medical captions, visual question answering (VQA), and reasoning samples. As a result, we build a multimodal dataset enriched with extensive medical knowledge. Building on the curated data, we introduce our medical-specialized MLLM, Lingshu. Lingshu undergoes multi-stage training to progressively embed medical expertise and enhance its task-solving capabilities. In addition, we conduct a preliminary exploration of applying the reinforcement learning with verifiable rewards (RLVR) paradigm to enhance Lingshu's medical reasoning ability. We also develop MedEvalKit, a unified evaluation framework that consolidates leading multimodal and textual medical benchmarks for standardized, fair, and efficient model assessment. We evaluate Lingshu on three fundamental medical tasks: multimodal QA, text-based QA, and medical report generation. The results show that Lingshu consistently outperforms existing open-source multimodal models on most tasks ...
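To make the RLVR idea concrete, below is a minimal Python sketch of a verifiable reward for multiple-choice medical QA: a rollout earns reward 1.0 only when the answer extracted from it matches the gold choice, so the reward signal can be checked mechanically. The "Answer: X" extraction pattern and the last-letter fallback are illustrative assumptions, not Lingshu's actual reward implementation.

```python
import re

def verifiable_reward(model_output: str, gold_choice: str) -> float:
    """Binary verifiable reward for a multiple-choice medical QA rollout.

    Returns 1.0 if the final answer letter extracted from the model's
    output matches the gold choice, and 0.0 otherwise.
    """
    # Prefer an explicit final-answer marker such as "Answer: C".
    match = re.search(r"Answer\s*[:\-]?\s*([A-E])\b", model_output, re.IGNORECASE)
    if match:
        pred = match.group(1)
    else:
        # Fallback heuristic: take the last standalone choice letter.
        letters = re.findall(r"\b([A-E])\b", model_output)
        if not letters:
            return 0.0  # no extractable answer -> no reward
        pred = letters[-1]
    return 1.0 if pred.upper() == gold_choice.upper() else 0.0

# A correct rollout earns reward 1.0; an incorrect one earns 0.0.
print(verifiable_reward("The hypodense lesion suggests ... Answer: B", "B"))  # 1.0
print(verifiable_reward("Given the CT findings, option C fits best.", "B"))  # 0.0
```

Because the reward is computed from the gold label rather than a learned reward model, it cannot be gamed by plausible-sounding but wrong reasoning, which is the property RLVR relies on.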

LASA Team, Weiwen Xu, Hou Pong Chan, Long Li, Mahani Aljunied, Ruifeng Yuan, Jianyu Wang, Chenghao Xiao, Guizhen Chen, Chaoqun Liu, Zhaodonghui Li, Yu Sun, Junao Shen, Chaojun Wang, Jie Tan, Deli Zhao, Tingyang Xu, Hao Zhang, Yu Rong • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Medical Question Answering | MedMCQA | Accuracy | 66.1 | 253 |
| Medical Visual Question Answering | Slake | Accuracy | 89.2 | 134 |
| Medical Visual Question Answering | VQA-RAD | Accuracy | 68.9 | 106 |
| Medical Visual Question Answering | PathVQA | Overall Accuracy | 68.7 | 86 |
| Hierarchical Unlearning | MedForget 1.0 (Forget) | Gen Score | 99.84 | 72 |
| Question Answering | MedQA | Accuracy | 74.7 | 70 |
| Multi-Modal Visual Question Answering (MMVQA) | RAD-ChestCT (val) | Accuracy | 35.79 | 57 |
| Multi-Modal Visual Question Answering (MMVQA) | CT-RATE (val) | Accuracy | 31.88 | 57 |
| Medical Visual Question Answering | PMC-VQA | Accuracy | 56.3 | 44 |
| Medical Visual Question Answering | OmniMedVQA (test) | CT Accuracy | 77.2 | 29 |
(Showing 10 of 67 benchmark results.)
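For reference, the Accuracy figures above are exact-match scores over each benchmark's evaluation set. Below is a minimal sketch of how such a score is typically computed; the function name and the strip/lowercase normalization are illustrative assumptions, not MedEvalKit's actual API.

```python
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Percentage of items whose normalized prediction equals the reference."""
    assert len(predictions) == len(references)
    # Simple strip/lowercase normalization; real harnesses may do more
    # (punctuation stripping, choice-letter mapping, etc.).
    normalize = lambda s: s.strip().lower()
    correct = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

# Example: 2 of 3 closed-form VQA answers match -> ~66.7.
print(exact_match_accuracy(["Lung", "no", "CT"], ["lung", "No", "MRI"]))
```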
