
GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI

About

Despite significant advancements in general AI, its effectiveness in the medical domain is limited by a lack of specialized medical knowledge. To address this, we construct GMAI-VL-5.5M, a multimodal medical dataset created by converting hundreds of specialized medical datasets, each with its own annotation format, into high-quality image-text pairs. The dataset offers comprehensive task coverage, diverse imaging modalities, and rich image-text data. Building on it, we develop GMAI-VL, a general medical vision-language model, trained with a three-stage strategy that strengthens the integration of visual and textual information. This approach significantly improves the model's ability to process multimodal data, supporting accurate diagnosis and clinical decision-making. Experiments show that GMAI-VL achieves state-of-the-art performance across a range of multimodal medical tasks, including visual question answering and medical image diagnosis.
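To make the annotation-to-text conversion concrete, here is a minimal sketch of how a single classification-style annotation might be turned into an instruction-style image-text pair. The record fields (`image`, `modality`, `label`) and the question template are illustrative assumptions, not the paper's actual pipeline.

```python
def annotation_to_pair(record):
    """Convert one {image, modality, label} annotation record into a
    VQA-style image-text pair (question/answer grounded in the label)."""
    question = (
        f"This is a {record['modality']} image. "
        "What diagnosis does it most likely show?"
    )
    answer = f"The image most likely shows {record['label']}."
    return {"image": record["image"], "question": question, "answer": answer}


# Example usage with a hypothetical record:
pair = annotation_to_pair(
    {"image": "chest_xray_0001.png", "modality": "chest X-ray", "label": "pneumonia"}
)
print(pair["question"])
print(pair["answer"])
```

Applying such a template per source dataset is one plausible way hundreds of heterogeneous annotation schemes could be normalized into a uniform image-text format.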

Tianbin Li, Yanzhou Su, Wei Li, Bin Fu, Zhe Chen, Ziyan Huang, Guoan Wang, Chenglong Ma, Ying Chen, Ming Hu, Yanjun Li, Pengcheng Chen, Xiaowei Hu, Zhongying Deng, Yuanfeng Ji, Jin Ye, Yu Qiao, Junjun He • 2024

Related benchmarks

Task                               Dataset                        Result         Rank
Medical Visual Question Answering  SLAKE (test)                   -              29
Medical Visual Question Answering  VQA-RAD (test)                 Accuracy 64.6  13
Medical Visual Question Answering  PMC-VQA (test)                 Accuracy 52.3  13
Medical Visual Question Answering  PathVQA (test)                 Accuracy 47.2  13
Medical Visual Question Answering  MMMU Health & Medicine (test)  Accuracy 51.2  12
