
3DMedAgent: Unified Perception-to-Understanding for 3D Medical Analysis

About

3D CT analysis spans a continuum from low-level perception to high-level clinical understanding. Existing 3D-oriented analysis methods adopt either isolated task-specific modeling or task-agnostic end-to-end paradigms that produce one-hop outputs, impeding the systematic accumulation of perceptual evidence for downstream reasoning. In parallel, recent multimodal large language models (MLLMs) exhibit improved visual perception and can integrate visual and textual information effectively, yet their predominantly 2D-oriented designs fundamentally limit their ability to perceive and analyze volumetric medical data. To bridge this gap, we propose 3DMedAgent, a unified agent that enables 2D MLLMs to perform general 3D CT analysis without 3D-specific fine-tuning. 3DMedAgent coordinates heterogeneous visual and textual tools through a flexible MLLM agent, progressively decomposing complex 3D analysis into tractable subtasks that transition from global to regional views, from 3D volumes to informative 2D slices, and from visual evidence to structured textual representations. Central to this design, 3DMedAgent maintains a long-term structured memory that aggregates intermediate tool outputs and supports query-adaptive, evidence-driven multi-step reasoning. We further introduce the DeepChestVQA benchmark for evaluating unified perception-to-understanding capabilities in 3D thoracic imaging. Experiments across over 40 tasks demonstrate that 3DMedAgent consistently outperforms general, medical, and 3D-specific MLLMs, highlighting a scalable path toward general-purpose 3D clinical assistants. Code and data are available at https://github.com/jinlab-imvr/3DMedAgent.
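The control flow the abstract describes (an MLLM planner decomposing a 3D query into subtasks, invoking tools, and accumulating their outputs in a structured memory that grounds the final answer) can be sketched as a simple agent loop. This is a minimal illustrative sketch, not the actual 3DMedAgent implementation; all names (`planner`, `next_step`, `answer`, the tool registry, the memory record fields) are assumptions for illustration.

```python
def run_agent(volume, question, planner, tools, max_steps=8):
    """Hypothetical perception-to-understanding agent loop.

    `planner` is assumed to expose two methods:
      - next_step(question, memory) -> a dict, either
          {"action": "call", "tool": <name>, "goal": <str>, "args": {...}}
        or {"action": "answer"};
      - answer(question, memory) -> final response grounded in memory.
    `tools` maps tool names to callables taking the volume plus keyword args.
    """
    memory = []  # long-term structured memory: one record per tool call
    for _ in range(max_steps):
        # The planner adapts the next subtask to the query and the
        # evidence accumulated so far (query-adaptive reasoning).
        step = planner.next_step(question, memory)
        if step["action"] == "answer":
            break
        # Run the chosen visual or textual tool on the 3D volume
        # (e.g. locate a region, extract an informative 2D slice).
        output = tools[step["tool"]](volume, **step.get("args", {}))
        # Aggregate the intermediate result as structured evidence.
        memory.append({"subtask": step.get("goal"),
                       "tool": step["tool"],
                       "output": output})
    # The final answer is derived from the accumulated evidence,
    # not from a single one-hop prediction.
    return planner.answer(question, memory)
```

The key design point this illustrates is that intermediate perceptual outputs persist in `memory` across steps, so later reasoning can reuse earlier evidence instead of re-deriving it.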

Ziyue Wang, Linghan Cai, Chang Han Low, Haofeng Liu, Junde Wu, Jingyu Wang, Rui Wang, Lei Song, Jiang Bian, Jingjing Fu, Yueming Jin • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| 3D Medical Visual Question Answering (Overall) | DeepChestVQA | Accuracy | 57 | 9 |
| Measurement | DeepTumorVQA refined 2025b | Lesion Volume Measurement | 42 | 9 |
| Medical Reasoning | DeepTumorVQA refined 2025b | Fatty Liver Accuracy | 0.77 | 9 |
| Medical Reasoning | DeepChestVQA | Accuracy (Attenuation Pattern) | 62 | 9 |
| Medical Visual Question Answering | DeepTumorVQA refined 2025b | Total Average | 66 | 9 |
| Recognition | DeepTumorVQA refined subset 2025b | Colon Lesion Existence | 82 | 9 |
| Recognition | DeepChestVQA | Accuracy (Bronchus Lesion) | 65 | 9 |
| Visual Reasoning | DeepChestVQA | Acc (Largest Lesion Diameter) | 38 | 9 |
| Visual Reasoning | DeepTumorVQA refined 2025b | Adjacent Organ Accuracy | 45 | 9 |
