
Multimodal Whole Slide Foundation Model for Pathology

About

The field of computational pathology has been transformed by recent advances in foundation models that encode histopathology regions of interest (ROIs) into versatile and transferable feature representations via self-supervised learning (SSL). However, translating these advances to complex clinical challenges at the patient and slide level remains constrained by limited clinical data in disease-specific cohorts, especially for rare clinical conditions. We propose TITAN, a multimodal whole slide foundation model pretrained on 335,645 WSIs via visual self-supervised learning and vision-language alignment with corresponding pathology reports and 423,122 synthetic captions generated by a multimodal generative AI copilot for pathology. Without any finetuning or clinical labels, TITAN can extract general-purpose slide representations and generate pathology reports that generalize to resource-limited clinical scenarios such as rare disease retrieval and cancer prognosis. We evaluate TITAN on diverse clinical tasks and find that it outperforms both ROI and slide foundation models across machine learning settings, including linear probing, few-shot and zero-shot classification, rare cancer retrieval, cross-modal retrieval, and pathology report generation.
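The cross-modal retrieval setting mentioned above pairs slide embeddings with report (caption) embeddings in a shared space and retrieves across modalities by cosine similarity. The sketch below is a generic illustration of that retrieval step on toy vectors, not TITAN's implementation; the array shapes and function names are assumptions for illustration.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale rows to unit length so dot products equal cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cross_modal_retrieve(slide_embs, text_embs, k=1):
    """For each slide embedding, return indices of the top-k closest captions."""
    sims = l2_normalize(slide_embs) @ l2_normalize(text_embs).T
    # sort captions by descending similarity, keep the k best per slide
    return np.argsort(-sims, axis=1)[:, :k]

# Toy aligned embeddings: slide i is a slightly perturbed copy of caption i,
# so retrieval should map each slide back to its own caption.
rng = np.random.default_rng(0)
text = rng.normal(size=(4, 8))
slides = text + 0.01 * rng.normal(size=(4, 8))
print(cross_modal_retrieve(slides, text, k=1).ravel())  # → [0 1 2 3]
```

The same similarity matrix, read column-wise instead of row-wise, gives text-to-slide retrieval; zero-shot classification is the special case where the "captions" are embedded class-name prompts.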

Tong Ding, Sophia J. Wagner, Andrew H. Song, Richard J. Chen, Ming Y. Lu, Andrew Zhang, Anurag J. Vaidya, Guillaume Jaume, Muhammad Shaban, Ahrong Kim, Drew F.K. Williamson, Bowen Chen, Cristina Almagro-Perez, Paul Doucet, Sharifa Sahai, Chengkuan Chen, Daisuke Komura, Akihiro Kawabe, Shumpei Ishikawa, Georg Gerber, Tingying Peng, Long Phi Le, Faisal Mahmood · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Morphological Classification | 33-task benchmark Morphological Classification | Average Accuracy | 87.7 | 24 |
| Morphological Classification | Morphological Classification | Average AUC | 90.1 | 24 |
| Molecular Classification | 33-task benchmark Molecular Classification | Average Accuracy | 71.7 | 24 |
| Molecular Classification | Molecular Classification | Average AUC | 83.3 | 24 |
| Classification | Task 2 | Accuracy | 75.9 | 16 |
| Classification | Task 6 | Accuracy | 89.5 | 16 |
| Molecular Classification | Task 19 | Accuracy | 61.2 | 16 |
| Molecular Classification | Task 22 | Accuracy | 79.6 | 16 |
| Classification | Task 1 | Accuracy | 88.3 | 16 |
| Classification | Task 4 | Accuracy | 97.9 | 16 |
Showing 10 of 93 rows.
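Classification benchmarks like those above are commonly run in few-shot mode on frozen slide embeddings, for example by averaging the few labeled support embeddings per class into prototypes and assigning each query slide to the nearest prototype. The sketch below illustrates that generic nearest-prototype baseline on synthetic vectors; it is an assumption for illustration, not the paper's evaluation code.

```python
import numpy as np

def few_shot_prototypes(support_embs, support_labels):
    """Build one mean-embedding prototype per class from a few labeled slides."""
    classes = np.unique(support_labels)
    protos = np.stack([support_embs[support_labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_embs, classes, prototypes):
    """Assign each query embedding to the class of its nearest prototype (cosine)."""
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return classes[np.argmax(q @ p.T, axis=1)]

# Toy 2-class example: supports cluster tightly around two separated centers.
rng = np.random.default_rng(1)
c0, c1 = np.ones(8), -np.ones(8)
support = np.vstack([c0 + 0.1 * rng.normal(size=(3, 8)),
                     c1 + 0.1 * rng.normal(size=(3, 8))])
labels = np.array([0, 0, 0, 1, 1, 1])
classes, protos = few_shot_prototypes(support, labels)
print(classify(np.vstack([c0, c1]), classes, protos))  # → [0 1]
```

Linear probing is the related setting where a single linear classifier is fit on the same frozen embeddings instead of using class means.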
