
Towards A Generalizable Pathology Foundation Model via Unified Knowledge Distillation

About

Foundation models pretrained on large-scale datasets are revolutionizing computational pathology (CPath). Their generalization ability is crucial for success in diverse downstream clinical tasks. However, current foundation models have been evaluated on only a limited number and variety of tasks, leaving their generalization ability and overall performance unclear. To address this gap, we established the most comprehensive benchmark to date, evaluating off-the-shelf foundation models across six distinct clinical task types: slide-level classification, survival prediction, ROI-tissue classification, ROI retrieval, visual question answering, and report generation, encompassing 72 specific tasks in total. Our findings reveal that existing foundation models excel at certain task types but struggle to handle the full breadth of clinical tasks effectively. To improve the generalization of pathology foundation models, we propose a unified knowledge distillation framework combining expert and self-knowledge distillation: the former allows the model to learn from the knowledge of multiple expert models, while the latter leverages self-distillation to enable image representation learning via local-global alignment. Based on this framework, we curated a dataset of 96,000 whole slide images (WSIs) and developed a Generalizable Pathology Foundation Model (GPFM). The model was trained on 190 million images extracted from approximately 72,000 publicly available slides, spanning 34 major tissue types. Evaluated on the established benchmark, GPFM achieves an average rank of 1.6 and ranks first on 42 tasks, while the second-best model, UNI, attains an average rank of 3.7 and ranks first on only 6 tasks.
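The two distillation terms described above can be sketched roughly as follows. This is a minimal illustrative sketch only: it assumes cosine-similarity feature matching for the expert term and a DINO-style softened cross-entropy for the local-global self-distillation term. All function names, loss forms, and the temperature value are assumptions for illustration, not GPFM's actual implementation.

```python
import torch
import torch.nn.functional as F

def expert_distillation_loss(student_feats, expert_feats_list):
    """Distill from multiple pretrained experts at once (assumed form):
    average cosine-distance between student features and each expert's
    features for the same image batch."""
    loss = 0.0
    for expert_feats in expert_feats_list:
        loss = loss + (1 - F.cosine_similarity(student_feats, expert_feats, dim=-1)).mean()
    return loss / len(expert_feats_list)

def self_distillation_loss(local_logits, global_logits, temperature=0.1):
    """Local-global alignment (assumed DINO-style form): predictions from
    local crops are trained to match the softened, detached predictions
    from the global view of the same image."""
    targets = F.softmax(global_logits.detach() / temperature, dim=-1)
    log_probs = F.log_softmax(local_logits / temperature, dim=-1)
    # Soft cross-entropy: non-negative, zero when distributions match.
    return -(targets * log_probs).sum(dim=-1).mean()
```

In a training loop these two terms would typically be summed with weighting coefficients; the actual weighting used by GPFM is not stated in this abstract.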

Jiabo Ma, Zhengrui Guo, Fengtao Zhou, Yihui Wang, Yingxue Xu, Jinbang Li, Fang Yan, Yu Cai, Zhengjie Zhu, Cheng Jin, Yi Lin, Xinrui Jiang, Chenglong Zhao, Danyi Li, Anjia Han, Zhenhui Li, Ronald Cheong Kin Chan, Jiguang Wang, Peng Fei, Kwang-Ting Cheng, Shaoting Zhang, Li Liang, Hao Chen · 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Survival Prediction | TCGA-LUAD | C-index: 0.6467 | 116 |
| Slide-level Classification | Camelyon16 | -- | 52 |
| Survival Analysis | TCGA-LUSC | C-index: 0.63 | 38 |
| Clustering | DLPFC | ARI: 15 | 30 |
| WSI Classification | PANDA | Accuracy: 73.21 | 23 |
| Linear Probing | HBC | Balanced Accuracy: 83.4 | 22 |
| Unsupervised Clustering | HBC | ARI: 45.7 | 22 |
| Linear Probing | DLPFC | Balanced Accuracy: 54.7 | 22 |
| WSI Classification | TCGA-NSCLC | Accuracy: 91.55 | 19 |
| Gene Expression Prediction | HER2+ | MSE: 0.922 | 16 |

Showing 10 of 22 rows.
