
StainNet: Scaling Self-Supervised Foundation Models on Immunohistochemistry and Special Stains for Computational Pathology

About

Foundation models trained with self-supervised learning (SSL) on large-scale histological images have significantly accelerated the development of computational pathology. These models can serve as backbones for region-of-interest (ROI) image analysis or as patch-level feature extractors for whole-slide image (WSI) analysis based on multiple instance learning (MIL). Existing pathology foundation models (PFMs) are typically pre-trained on hematoxylin-eosin (H&E) stained pathology images. However, images such as immunohistochemistry (IHC) and special stains are also frequently used in clinical practice, so PFMs pre-trained mainly on H&E-stained images may be limited in clinical applications involving these non-H&E images. To address this issue, we propose StainNet, a collection of self-supervised foundation models specifically trained for IHC and special stains in pathology images, based on the vision transformer (ViT) architecture. StainNet contains a ViT-Small and a ViT-Base model, both trained with a self-distillation SSL approach on over 1.4 million patch images extracted from 20,231 publicly available IHC and special-stain WSIs in the HISTAI database. To evaluate the StainNet models, we conduct experiments on three in-house slide-level IHC classification tasks, three in-house ROI-level special-stain classification tasks, and two public ROI-level IHC classification tasks, demonstrating their strong performance. We also perform ablation studies such as few-ratio learning and retrieval evaluations, and compare the StainNet models with recent larger PFMs to further highlight their strengths. The StainNet model weights are available at https://github.com/WonderLandxD/StainNet.

Jiawen Li, Jiali Hu, Xitong Ling, Yongqiang Lv, Yuxuan Chen, Yizhi Wang, Tian Guan, Yifei Liu, Yonghong He • 2025
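The abstract describes StainNet models being used as patch-level feature extractors whose embeddings feed an MIL head for WSI analysis. Below is a minimal sketch of that use case, not the official loading code: the checkpoint file name, the exact timm architecture name, and the ImageNet normalization statistics are all assumptions; consult the StainNet repository for the released weights and the recommended preprocessing.

```python
# Hedged sketch: extract a patch-level embedding with a ViT-Small backbone,
# as an MIL pipeline would. Checkpoint name and preprocessing are assumptions.
import timm
import torch
from PIL import Image
from torchvision import transforms

CKPT = "stainnet_vit_small.pth"  # hypothetical file name; see the GitHub repo

# num_classes=0 makes timm return the backbone feature vector (384-d for ViT-Small)
model = timm.create_model("vit_small_patch16_224", pretrained=False, num_classes=0)
state_dict = torch.load(CKPT, map_location="cpu")
model.load_state_dict(state_dict, strict=False)  # strict=False: key names may differ
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # ImageNet statistics as a placeholder; the repo may specify different values
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

with torch.inference_mode():
    patch = preprocess(Image.open("patch.png").convert("RGB")).unsqueeze(0)
    feature = model(patch)  # shape (1, 384); collect these per-patch features as the MIL bag
print(feature.shape)
```

In an MIL setting, this extraction step is repeated for every patch tiled from a WSI, and the resulting bag of embeddings is passed to an aggregation model (e.g., attention-based pooling) for slide-level prediction.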

Related benchmarks

Task | Dataset | Metric | Result | Rank
WSI classification | NTUH-Ki67-Liver (5-fold cross-val) | Balanced Accuracy | 92.8 | 98
ROI-level classification | MIST | Accuracy | 92.3 | 28
ROI-level classification | BCI | Accuracy | 91.3 | 28
Slide-level classification | P53-UCEC (test) | Accuracy | 93.3 | 14
Slide-level classification | MLH1-UCEC (test) | Accuracy | 80.3 | 14
Classification | Glomerulus-Masson | Balanced Accuracy | 50.5 | 12
Classification | Glomerulus-PAS | Balanced Accuracy | 66.9 | 12
Classification | Glomerulus-PASM | Balanced Accuracy | 47.5 | 12
WSI-level classification | PANDA Karo | Balanced Accuracy | 32.2 | 6
ROI-level classification | CRC-100K | Balanced Accuracy | 93.1 | 6

(Showing 10 of 11 rows.)
