
A Multimodal Knowledge-enhanced Whole-slide Pathology Foundation Model

About

Remarkable strides in computational pathology (CPath) have been made by task-agnostic foundation models (FMs) that advance performance across a wide array of downstream clinical tasks. Despite this promising performance, several challenges remain. First, prior works have relied on either vision-only or image-caption data, disregarding pathology reports, which carry more clinically authentic information from pathologists, and gene expression profiles, each of which offers distinct knowledge for versatile clinical applications. Second, current progress in pathology FMs predominantly concentrates on the patch level, where the restricted context of patch-level pretraining fails to capture whole-slide patterns; even recent slide-level FMs still struggle to provide whole-slide context for patch representations. In this study, for the first time, we develop a pathology foundation model incorporating three modalities: pathology slides, pathology reports, and gene expression data, yielding 26,169 slide-level modality pairs from 10,275 patients across 32 cancer types and amounting to over 116 million pathological patch images. To leverage these data for CPath, we propose a novel whole-slide pretraining paradigm, Multimodal Self-TAught PRetraining (mSTAR), which injects multimodal whole-slide context into patch representations. The proposed paradigm revolutionizes the pretraining workflow for CPath, enabling the pathology FM to acquire whole-slide context. To the best of our knowledge, this is the first attempt to incorporate three modalities at the whole-slide level for enhancing pathology FMs. To systematically evaluate the capabilities of mSTAR, we built the largest spectrum of oncological benchmarks, spanning 7 categories of oncological applications and 15 task types, comprising 97 practical oncological tasks.
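The abstract describes aligning whole-slide representations with pathology reports and gene expression data. The paper's exact objective is not given here, so the sketch below is only an illustrative assumption: a standard symmetric InfoNCE (CLIP-style) contrastive loss between slide embeddings and paired text/expression embeddings, with hypothetical function and argument names.

```python
import numpy as np

def info_nce(slide_emb, pair_emb, temperature=0.07):
    """Symmetric InfoNCE loss between N slide embeddings and N paired
    embeddings (e.g. report or expression encodings). Illustrative only;
    not the actual mSTAR objective."""
    # L2-normalize both sets of embeddings
    s = slide_emb / np.linalg.norm(slide_emb, axis=1, keepdims=True)
    p = pair_emb / np.linalg.norm(pair_emb, axis=1, keepdims=True)
    logits = s @ p.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(s))              # matched pairs sit on the diagonal

    def xent(l):
        # cross-entropy of each row against its diagonal (matched) entry
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average slide->pair and pair->slide directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Under such a loss, matched slide/report pairs are pulled together while mismatched pairs in the batch are pushed apart, which is one common way to inject cross-modal context into a vision backbone.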

Yingxue Xu, Yihui Wang, Fengtao Zhou, Jiabo Ma, Cheng Jin, Shu Yang, Jinbang Li, Zhengyu Zhang, Chenglong Zhao, Huajun Zhou, Zhenhui Li, Huangjing Lin, Xin Wang, Jiguang Wang, Anjia Han, Ronald Cheong Kin Chan, Li Liang, Xiuming Zhang, Hao Chen • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Survival Prediction | TCGA-LUAD | C-index | 0.6329 | 116 |
| Cancer Subtyping | Cohort of lung cancer H1 (internal) | Mean AUC | 0.9575 | 46 |
| Survival Analysis | TCGA-LUSC | C-index | 0.6323 | 38 |
| Clustering | DLPFC | ARI | 15.9 | 30 |
| IHC Status Prediction - P63 | Lung cancer P63 (internal H1) | Mean AUC | 0.8305 | 23 |
| Primary/Metastatic Classification | Lung cancer external cohort H5 | Mean AUC | 0.8811 | 23 |
| TNM-N Staging (N0/N+) | Breast cancer external cohort H9 | Mean AUC | 0.7008 | 23 |
| Pathological Subtyping | Gastric cancer internal cohort H1 | Mean AUC | 0.8057 | 23 |
| Primary/Metastatic Classification | Lung cancer external cohort H6 | Mean AUC | 0.9034 | 23 |
| Androgen Receptor (AR) Status Prediction | Breast cancer H2 (internal cohort) | Mean AUC | 0.7402 | 23 |

Showing 10 of 55 rows.
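The survival rows above are scored with the concordance index (C-index), which measures how often a model's predicted risk ordering agrees with observed survival times under censoring. A minimal, self-contained sketch of the metric (not the authors' evaluation code):

```python
def c_index(times, events, risks):
    """Concordance index: fraction of comparable pairs in which the
    higher-risk sample has the shorter survival time.

    times  -- observed follow-up times
    events -- 1 if the event (death) was observed, 0 if censored
    risks  -- model-predicted risk scores (higher = worse prognosis)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if the earlier time is an observed event
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1      # ordering agrees with outcome
                elif risks[i] == risks[j]:
                    concordant += 0.5    # tied risks count half
    if comparable == 0:
        raise ValueError("no comparable pairs (all censored)")
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to a perfect ordering, so the TCGA-LUAD result of 0.6329 indicates a moderate but better-than-chance risk stratification.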
