
Masked Pre-Training of Transformers for Histology Image Analysis

About

In digital pathology, whole slide images (WSIs) are widely used for applications such as cancer diagnosis and prognosis prediction. Vision transformer models have recently emerged as a promising method for encoding large regions of WSIs while preserving spatial relationships among patches. However, due to the large number of model parameters and the limited amount of labeled data, applying transformer models to WSIs remains challenging. Inspired by masked language models, we propose a pretext task for training the transformer model without labeled data to address this problem. Our model, MaskHIT, uses the transformer output to reconstruct masked patches and learn representative histological features based on their positions and visual features. The experimental results demonstrate that MaskHIT surpasses various multiple instance learning approaches by 3% and 2% on survival prediction and cancer subtype classification tasks, respectively. Furthermore, MaskHIT also outperforms two of the most recent state-of-the-art transformer-based methods. Finally, a comparison of the attention maps generated by the MaskHIT model with pathologists' annotations indicates that the model can accurately identify clinically relevant histological structures in each task.
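The masking-and-reconstruction pretext task described above can be sketched in a few lines. The snippet below is a simplified illustration, not the authors' implementation: it uses NumPy, a zero vector in place of a learnable [MASK] embedding, and a single random linear layer as a stand-in for the vision transformer; only the masked-patch reconstruction objective reflects the idea in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a WSI region represented as a grid of patch feature
# vectors (in practice these come from a pretrained patch encoder).
n_patches, dim = 16, 8
patches = rng.normal(size=(n_patches, dim))

# Randomly mask a fraction of patches; here the [MASK] token is a
# zero vector for illustration (a real model would learn it).
mask_ratio = 0.25
n_masked = int(mask_ratio * n_patches)
masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
inputs = patches.copy()
inputs[masked_idx] = 0.0

# Stand-in "encoder": any sequence model mapping the masked input
# back to per-patch features (MaskHIT uses a transformer here,
# which also sees the positions of the patches).
W = rng.normal(size=(dim, dim)) * 0.1
outputs = np.tanh(inputs @ W)

# Pretext loss: reconstruct the original features of the masked
# patches only, so the model must infer them from context.
loss = np.mean((outputs[masked_idx] - patches[masked_idx]) ** 2)
```

Training then minimizes this loss over many sampled masks, so the encoder learns to predict a patch's features from its spatial and visual context, without any labels.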

Shuai Jiang, Liesbeth Hondelink, Arief A. Suriawinata, Saeed Hassanpour• 2023

Related benchmarks

Task                     Dataset     Result     Rank
Gleason Grading          SICAP v2    AUC 93.8   17
Prostate cancer grading  TCGA-PRAD   AUC 0.909  9
Prostate cancer grading  GLEASON19   AUC 0.887  9
Prostate cancer grading  Panda       AUC 0.922  9
Prostate cancer grading  Diagset     AUC 78.5   9
Prostate cancer grading  Private     AUC 0.746  9
