
Multi-task pre-training of deep neural networks for digital pathology

About

In this work, we investigate multi-task learning as a way of pre-training models for classification tasks in digital pathology. It is motivated by the fact that many small and medium-sized datasets have been released by the community over the years, whereas the domain lacks a large-scale dataset comparable to ImageNet. We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images. Then, we propose a simple architecture and training scheme for creating a transferable model, together with a robust evaluation and selection protocol to assess our method. Depending on the target task, we show that our models, used as feature extractors, either improve significantly over ImageNet pre-trained models or provide comparable performance. Fine-tuning improves performance over feature extraction and compensates for the lack of specificity of ImageNet features, as both pre-training sources then yield comparable performance.
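The abstract describes pre-training a single transferable model on a pool of classification tasks. The paper's exact architecture is not reproduced on this page, so the following is only a minimal NumPy sketch of the underlying idea: a shared feature extractor trained jointly through one softmax head per task, after which the shared features can be reused for a new target task. All names, dimensions, and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class MultiTaskModel:
    """Shared linear+ReLU feature extractor with one softmax head per task."""

    def __init__(self, in_dim, feat_dim, task_classes):
        # task_classes maps a task name to its number of classes
        self.W_shared = rng.normal(0, 0.1, (in_dim, feat_dim))
        self.heads = {t: rng.normal(0, 0.1, (feat_dim, c))
                      for t, c in task_classes.items()}

    def features(self, X):
        # shared representation, reusable as a feature extractor
        return np.maximum(X @ self.W_shared, 0.0)

    def predict(self, X, task):
        return softmax(self.features(X) @ self.heads[task])

    def train_step(self, X, y, task, lr=0.1):
        # one SGD step on cross-entropy for a batch from a single task;
        # gradients flow through the task head into the shared weights
        n = len(y)
        Z = X @ self.W_shared
        H = np.maximum(Z, 0.0)
        P = softmax(H @ self.heads[task])
        Y = np.eye(self.heads[task].shape[1])[y]
        dlogits = (P - Y) / n
        dH = dlogits @ self.heads[task].T
        dH[Z <= 0] = 0.0  # ReLU gradient mask
        self.heads[task] -= lr * (H.T @ dlogits)
        self.W_shared -= lr * (X.T @ dH)

# Example: alternate SGD steps over two hypothetical tasks sharing the backbone
model = MultiTaskModel(in_dim=32, feat_dim=16,
                       task_classes={"tumor": 2, "tissue": 4})
for _ in range(100):
    for task, n_cls in [("tumor", 2), ("tissue", 4)]:
        X = rng.normal(size=(8, 32))
        y = rng.integers(0, n_cls, size=8)
        model.train_step(X, y, task)

feats = model.features(rng.normal(size=(1, 32)))  # transferable features
```

In this toy version, multi-task training amounts to interleaving per-task batches so every step updates both the task's own head and the shared backbone; a new target task would then either reuse `features` directly (feature extraction) or continue updating `W_shared` (fine-tuning).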

Romain Mormont, Pierre Geurts, Raphaël Marée • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| WSI-level retrieval | Private-Liver Internal (test) | Macro F1 Score | 53 | 46 |
| Patch-Level Classification | Private-Breast (5-Fold CV) | Macro F1 Score | 52.62 | 32 |
| Patch-level search | Private-Breast | Accuracy | 36.6 | 24 |
| Patch-Level Classification | Private-Breast | Accuracy | 54.06 | 24 |
| WSI Classification | Private-CRC | Top-1 Macro F1 | 66 | 23 |
| WSI-level classification | Private-CRC | MV@5 Accuracy | 66 | 23 |
| WSI-level retrieval | Private-CRC internal (test) | Macro F1 | 66 | 23 |
| WSI-level retrieval | CAMELYON16 (test) | Macro F1 | 66 | 23 |
| Whole Slide Image Retrieval | Camelyon16 | Macro F1 Score | 0.64 | 23 |
| WSI Classification | Private-Liver | Top-1 Macro F1 | 0.57 | 23 |

Showing 10 of 46 rows
