
Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing

About

Self-supervised learning in vision-language processing (VLP) exploits semantic alignment between the imaging and text modalities. Prior work in biomedical VLP has mostly relied on the alignment of single image and report pairs, even though clinical notes commonly refer to prior images. This not only introduces poor alignment between the modalities but also misses an opportunity to exploit rich self-supervision through the existing temporal content in the data. In this work, we explicitly account for prior images and reports, when available, during both training and fine-tuning. Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model. It is designed to be robust to challenges that arise across time, such as pose variation and missing input images. The resulting model excels on downstream tasks in both single- and multi-image setups, achieving state-of-the-art performance on (I) progression classification, (II) phrase grounding, and (III) report generation, whilst offering consistent improvements on disease classification and sentence-similarity tasks. We release a novel multi-modal temporal benchmark dataset, MS-CXR-T, to quantify the quality of vision-language representations in terms of temporal semantics. Our experimental results show the advantages of incorporating prior images and reports to make the most of the data.
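The abstract mentions that the multi-image encoder must handle missing prior images across time. The snippet below is a minimal, hypothetical NumPy sketch of one such fallback scheme (not the authors' implementation): when a study has no prior image, a learned "missing image" embedding stands in for the prior features before fusion. The function name, the concatenation-based fusion, and the feature dimension are all illustrative assumptions.

```python
import numpy as np

D = 8  # feature dimension (illustrative)
rng = np.random.default_rng(0)

# Hypothetical learned embedding substituted when no prior image exists.
missing_prior = rng.standard_normal(D)

def fuse(current_feat, prior_feat=None):
    """Combine current-image features with prior-image features.

    Falls back to a learned 'missing' embedding when no prior image
    is available, so the fused representation has a fixed shape.
    """
    prior = prior_feat if prior_feat is not None else missing_prior
    return np.concatenate([current_feat, prior])  # shape (2 * D,)

cur = rng.standard_normal(D)
fused_pair = fuse(cur, rng.standard_normal(D))   # current + prior study
fused_single = fuse(cur)                          # current study only
```

Keeping the output shape identical in both cases lets the same downstream heads consume single- and multi-image inputs, mirroring the versatility the abstract describes.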

Shruthi Bannur, Stephanie Hyland, Qianchu Liu, Fernando Pérez-García, Maximilian Ilse, Daniel C. Castro, Benedikt Boecking, Harshita Sharma, Kenza Bouzid, Anja Thieme, Anton Schwaighofer, Maria Wetscherek, Matthew P. Lungren, Aditya Nori, Javier Alvarez-Valle, Ozan Oktay • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-Label Classification | ChestX-Ray14 (test) | AUROC (%) | 72.9 | 88 |
| Report Generation | MIMIC-CXR (test) | BLEU-4 | 9.2 | 20 |
| Temporal Image Classification | MS-CXR-T (test) | Macro Acc (Pleural Effusion) | 67 | 14 |
| Multi-label CXR Classification | PadChest (test) | AUC | 0.655 | 8 |
| Multi-label CXR Classification | ChestXDet10 (test) | AUC | 0.708 | 8 |
| Multi-label CXR Classification | Open-i (test) | AUC | 0.702 | 8 |
| Multi-label CXR Classification | PadChest20 (test) | AUC | 0.608 | 8 |
| Multi-label CXR Classification | CheXpert (test) | AUC | 78.9 | 8 |
| Image Classification | RSNA Pneumonia Detection 70% - 30% (test) | Accuracy | 81.4 | 5 |
| Report Generation | MIMIC-CXR | NEM | 17.55 | 3 |

Showing 10 of 12 rows.

Other info

Code
