
Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks

About

Several medical Multimodal Large Language Models (MLLMs) have been developed to address tasks involving visual images with textual instructions across various medical modalities, achieving impressive results. However, most current medical generalist models are region-agnostic, treating the entire image as a holistic representation, and cannot identify which specific regions they focus on when generating a sentence. To mimic the behavior of doctors, who typically review the entire image before concentrating on specific regions for a thorough evaluation, we aim to enhance the capability of medical MLLMs to understand anatomical regions within entire medical scans. To achieve this, we first formulate Region-Centric tasks and construct a large-scale dataset, MedRegInstruct, to incorporate regional information into training. Combining our collected dataset with other medical multimodal corpora for training, we propose a Region-Aware medical MLLM, MedRegA, the first bilingual generalist medical AI system to simultaneously handle image-level and region-level medical vision-language tasks across a broad range of modalities. MedRegA not only enables three region-centric tasks but also achieves the best performance on visual question answering, report generation, and medical image classification over 8 modalities, showcasing significant versatility. Experiments demonstrate that our model delivers strong performance across various medical vision-language tasks in bilingual settings while also recognizing and detecting structures in multimodal medical scans, boosting the interpretability and user interactivity of medical MLLMs. Our project page is https://medrega.github.io.

Lehan Wang, Haonan Wang, Honglong Yang, Jiaji Mao, Zehong Yang, Jun Shen, Xiaomeng Li• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Visual Question Answering | SLAKE (test) | Overall Accuracy | 84.1 | 56 |
| Medical Visual Question Answering | PathVQA (test) | Accuracy | 68.5 | 55 |
| Medical Visual Question Answering | VQA-RAD (test) | Accuracy | 76.9 | 38 |
| Medical Visual Question Answering | PMC-VQA (test) | Accuracy | 79.5 | 36 |
| Medical Visual Question Answering | Aggregate (SLAKE, VQA-RAD, PathVQA, PMC-VQA) Average (test) | Accuracy | 77.3 | 11 |
| Region-to-Text Identification | CRMed Structure Identification (test) | BLEU-1 | 78.34 | 8 |
| Region-to-Text Identification | CRMed (test) | BLEU-1 | 61.09 | 8 |
| Text-to-Region Detection | CRMed Single-Region (test) | Object F1 | 77.93 | 8 |
| Text-to-Region Detection | CRMed Multi-Region (test) | Object F1 | 68.52 | 8 |
| Grounded Report Generation | MIMIC-CXR | BLEU-1 | 33.18 | 6 |

(10 of 11 rows shown)
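The text-to-region detection rows above are scored with an object-level F1. The paper's exact matching protocol is not given on this page, but such a metric is commonly computed by greedily matching predicted boxes to ground-truth boxes at an IoU threshold (0.5 is a typical choice) and taking the F1 of the resulting precision and recall. A minimal sketch under those assumptions:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def object_f1(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truths at IoU >= thr,
    then F1 over matched pairs. This is an assumed protocol, not the paper's exact one."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

For example, one exact-match prediction against one ground-truth box yields F1 = 1.0, while an extra spurious box halves precision and lowers F1 accordingly. The multi-region setting is harder because every ground-truth box must be matched to score a full recall, which is consistent with the lower Multi-Region score in the table.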
