
Attention-based Interpretable Regression of Gene Expression in Histology

About

Interpretability of deep learning is widely used to evaluate the reliability of medical imaging models and to reduce the risk of inaccurate patient recommendations. For models exceeding human performance, e.g. predicting RNA structure from microscopy images, interpretable modelling can further be used to uncover highly non-trivial patterns which are otherwise imperceptible to the human eye. We show that interpretability can reveal connections between the microscopic appearance of cancer tissue and its gene expression profile. While exhaustive profiling of all genes from histology images remains challenging, we estimate the expression values of a well-known subset of genes that is indicative of cancer molecular subtype, survival, and treatment response in colorectal cancer. Our approach successfully identifies meaningful information in the image slides, highlighting hotspots of high gene expression. Our method can help characterise how gene expression shapes tissue morphology, which may be beneficial for patient stratification in the pathology unit. The code is available on GitHub.
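The title describes attention-based regression: a slide is treated as a bag of tile embeddings, attention weights pool them into a slide-level representation, and a linear head regresses gene expression; the weights themselves provide the interpretable "hotspot" map. The paper's exact architecture is not given here, so the following is only a minimal NumPy sketch of that general attention-pooling idea, with all shapes, names, and (random, untrained) parameters being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: N tiles per slide, D-dim tile embeddings,
# G target genes. None of these values come from the paper.
N, D, G = 50, 128, 10
H = rng.normal(size=(N, D))     # tile embeddings from some feature extractor

# Attention and output parameters; random here, learned in practice.
V = rng.normal(size=(D, 64))
w = rng.normal(size=(64,))
W_out = rng.normal(size=(D, G))

scores = np.tanh(H @ V) @ w     # one relevance score per tile
a = softmax(scores)             # attention weights over tiles, sum to 1
z = a @ H                       # attention-pooled slide embedding
y_hat = z @ W_out               # predicted expression for the G genes

print(y_hat.shape)              # one value per gene
print(a.argmax())               # index of the most-attended tile ("hotspot")
```

Because the attention weights `a` are a distribution over tiles, mapping them back onto tile coordinates yields the gene-expression hotspot overlay the abstract describes.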

Mara Graziani, Niccolò Marini, Nicolas Deutschmann, Nikita Janakarajan, Henning Müller, María Rodríguez Martínez • 2022

Related benchmarks

Task                                    | Dataset   | Result    | Rank
Slide-level gene expression estimation  | TCGA-KIRC | PCC 0.242 | 14
Slide-level gene expression estimation  | TCGA-BRCA | PCC 0.267 | 14
Slide-level gene expression estimation  | TCGA-LUAD | PCC 0.243 | 14
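PCC in the table denotes the Pearson correlation coefficient between predicted and measured expression values, the standard metric for this benchmark. A minimal sketch of how it can be computed for one gene (the example values are made up, not from the paper):

```python
import numpy as np

def pcc(pred, true):
    """Pearson correlation between predicted and measured values."""
    p = pred - pred.mean()
    t = true - true.mean()
    return float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t)))

# Toy example: four samples for a single gene (illustrative values).
pred = np.array([1.0, 2.0, 3.0, 4.0])
true = np.array([1.1, 1.9, 3.2, 3.8])
print(round(pcc(pred, true), 3))  # close to 1 for well-aligned predictions
```

Slide-level benchmark scores like those above are typically obtained by averaging such per-gene correlations across the gene panel.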
