
MedicalNarratives: Connecting Medical Vision and Language with Localized Narratives

About

Multi-modal models are data hungry. While datasets of natural images are abundant, medical image datasets cannot afford the same luxury. To enable representation learning for medical images at scale, we turn to YouTube, a platform with a large reservoir of open-source medical pedagogical videos. We curate MedicalNarratives, a dataset of 4.7M medical image-text pairs, with 1M samples containing dense annotations in the form of spatial traces (and bounding boxes), and 118K videos centered on the trace event (with aligned text), enabling spatiotemporal grounding beyond single frames. Similar to $\textit{think-aloud}$ studies, where instructors speak while moving their mouse cursor over relevant image regions, 1M images in MedicalNarratives contain localized mouse traces in image pixels, creating a spatial and temporal association between text and pixels. To evaluate the utility of MedicalNarratives, we train GenMedClip with a CLIP-like objective on our dataset, which spans 12 medical domains. GenMedClip outperforms previous state-of-the-art models on all 12 domains of a newly constructed medical imaging benchmark. $\href{https://huggingface.co/datasets/wisdomik/MedicalNarratives}{[Data]}$
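For context on the "CLIP-like objective" mentioned above, here is a minimal sketch of the standard symmetric contrastive (InfoNCE) loss that CLIP-style training uses; the actual GenMedClip training recipe, encoders, and hyperparameters are not specified here, and the function name and temperature value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_embeds: torch.Tensor,
                          text_embeds: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_embeds, text_embeds: (batch, dim) projections from the two encoders.
    Matching pairs share a row index; every other row serves as a negative.
    """
    # L2-normalize so the dot product equals cosine similarity.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # (batch, batch) similarity matrix, scaled by the temperature.
    logits = image_embeds @ text_embeds.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image -> text and text -> image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```

The dataset itself is hosted on the Hugging Face Hub at the link above, so it can presumably be fetched with `datasets.load_dataset("wisdomik/MedicalNarratives")`; the available splits and column names are not documented here.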

Wisdom O. Ikezogwo, Kevin Zhang, Mehmet Saygin Seyfioglu, Fatemeh Ghezloo, Linda Shapiro, Ranjay Krishna • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | COVID-CT | Accuracy | 60.24 | 18 |
| Region-level Classification | Br35H | Accuracy | 78.93 | 13 |
| Image-level Classification | ChestCT | Accuracy | 37.85 | 9 |
| Image-level Classification | ACL | Accuracy | 55.83 | 9 |
| Image-level Classification | ACRIMA | Accuracy | 56.61 | 9 |
