
AnatomiX, an Anatomy-Aware Grounded Multimodal Large Language Model for Chest X-Ray Interpretation

About

Multimodal medical large language models have shown impressive progress in chest X-ray interpretation but still struggle with spatial reasoning and anatomical understanding. Although existing grounding techniques improve overall performance, they often fail to establish true anatomical correspondence, leading to anatomically incorrect interpretations in the medical domain. To address this gap, we introduce AnatomiX, a multitask multimodal large language model explicitly designed for anatomically grounded chest X-ray interpretation. Inspired by the radiological workflow, AnatomiX adopts a two-stage approach: it first identifies anatomical structures and extracts their features, then leverages a large language model to perform diverse downstream tasks such as phrase grounding, report generation, visual question answering, and image understanding. Extensive experiments across multiple benchmarks show that AnatomiX achieves superior anatomical reasoning, improving performance by over 25% on anatomy grounding, phrase grounding, grounded diagnosis, and grounded captioning compared to existing approaches. Code and the pretrained model are available at https://github.com/aneesurhashmi/anatomix
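The two-stage workflow described above can be sketched as a minimal interface: stage one detects anatomical regions and extracts per-region features, and stage two conditions a language model on those regions plus a task prompt. The names below (`AnatomyRegion`, `detect_anatomy`, `llm_answer`) are illustrative assumptions for this sketch, not the paper's actual API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class AnatomyRegion:
    """One detected anatomical structure (hypothetical container)."""
    name: str                          # e.g. "left lung"
    box: Tuple[int, int, int, int]     # (x1, y1, x2, y2) in pixel coordinates
    features: List[float]              # region feature vector

def two_stage_interpret(image,
                        detect_anatomy: Callable,
                        llm_answer: Callable,
                        prompt: str) -> str:
    # Stage 1: locate anatomical structures and extract their features.
    regions = detect_anatomy(image)
    # Stage 2: the LLM answers the task prompt conditioned on the regions,
    # enabling grounded outputs (reports, VQA answers, boxes for phrases).
    return llm_answer(regions, prompt)
```

Because the region set is computed once, all downstream tasks (report generation, VQA, grounding) can share the same anatomical features.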

Anees Ur Rehman Hashmi, Numan Saeed, Christoph Lippert • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Radiology Report Generation | MIMIC-CXR (test) | - | - | 121
Abnormality Detection | CXR | IoU | 31 | 8
Close-Ended Visual Question Answering | CXR | BERTScore | 89 | 8
Image Classification | CXR | AUROC | 0.92 | 8
Open-Ended Visual Question Answering | CXR | BERTScore | 0.86 | 8
Anatomy Grounding | Chest ImaGenome | IoU | 0.73 | 6
Grounded Captioning | Chest ImaGenome | BERTScore | 0.65 | 6
Grounded Diagnosis | Chest ImaGenome | BERTScore | 0.63 | 6
Phrase Grounding | VinDR-CXR | IoU | 46 | 6
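Several of the grounding tasks above are scored with IoU (intersection over union) between predicted and ground-truth bounding boxes. A minimal sketch of box IoU, assuming boxes are given as corner coordinates `(x1, y1, x2, y2)`:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half: 50 / 150 ≈ 0.33
box_iou((0, 0, 10, 10), (5, 0, 15, 10))
```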
