
Transformer-based Annotation Bias-aware Medical Image Segmentation

About

Manual medical image segmentation is subjective and suffers from annotator-related bias, which deep learning methods can mimic or even amplify. Recent work suggests that such bias combines annotator preference and stochastic error, modeled respectively by convolution blocks placed after the decoder and by a pixel-wise independent Gaussian distribution. However, convolution blocks are unlikely to model varying degrees of preference effectively at full resolution, and the independent pixel-wise Gaussian distribution disregards pixel correlations, leading to discontinuous boundaries. This paper proposes a Transformer-based Annotation Bias-aware (TAB) medical image segmentation model that tackles annotator-related bias by modeling both annotator preference and stochastic error. TAB employs a Transformer with learnable queries to extract preference-focused features, enabling it to produce segmentations with various preferences simultaneously using a single segmentation head. Moreover, TAB adopts a multivariate normal distribution assumption that models pixel correlations, and learns the annotation distribution to disentangle the stochastic error. We evaluated TAB on an OD/OC segmentation benchmark annotated by six annotators. Our results suggest that TAB outperforms existing medical image segmentation models that account for annotator-related bias.
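The learnable-query idea described above can be sketched in PyTorch. This is a hypothetical illustration, not the authors' code: one learnable query per annotator attends over encoder features, and a single shared head turns each preference embedding into a segmentation map, so all preferences are produced simultaneously. All class and parameter names here are assumptions.

```python
import torch
import torch.nn as nn

class PreferenceQueryHead(nn.Module):
    """Hypothetical sketch of a preference-aware decoder head.

    N learnable queries (one per annotator) cross-attend over encoder
    tokens; a single shared projection then yields one mask per
    preference, mirroring the "single segmentation head" design.
    """
    def __init__(self, num_annotators=6, dim=32, nhead=4):
        super().__init__()
        # One learnable query vector per annotator preference.
        self.queries = nn.Parameter(torch.randn(num_annotators, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=nhead,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        # Shared head: projects each preference embedding before it is
        # matched against every pixel token.
        self.mask_embed = nn.Linear(dim, dim)

    def forward(self, feats):
        # feats: (B, HW, dim) flattened encoder feature tokens.
        B = feats.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)   # (B, N, dim)
        pref = self.decoder(q, feats)                     # (B, N, dim)
        # Dot-product of each preference embedding with every pixel
        # token gives per-annotator mask logits: (B, N, HW).
        return torch.einsum('bnd,bpd->bnp', self.mask_embed(pref), feats)
```

Because the queries are parameters rather than inputs, every forward pass yields all annotator-specific segmentations at once from the same features.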

Zehui Liao, Yutong Xie, Shishuai Hu, Yong Xia • 2023

Related benchmarks

Task | Dataset | Result | Rank
Optic Disc and Optic Cup Segmentation | RIGA | Disc Segmentation Score: 97.82 | 32
Multi-rater Medical Image Segmentation | NPC-170 in-house (test) | GED: 0.276 | 15
Multi-rater Medical Image Segmentation | LIDC-IDRI (test) | GED: 0.2322 | 15
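The GED figures above most likely refer to the generalized energy distance, a standard metric for multi-rater segmentation that compares a set of predicted masks against a set of annotator labels. A minimal NumPy sketch, assuming 1 − IoU as the pairwise distance (the exact distance used on these benchmarks may differ):

```python
import numpy as np

def iou_distance(a, b):
    """1 - IoU between two binary masks; 0 for identical masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - (inter / union if union > 0 else 1.0)

def ged(preds, labels, d=iou_distance):
    """Generalized energy distance between two sets of segmentations.

    GED = 2 E[d(p, y)] - E[d(p, p')] - E[d(y, y')], where p ranges over
    predicted masks and y over annotator masks. Lower is better; it is
    0 when the predicted set matches the label set exactly.
    """
    cross = np.mean([d(p, y) for p in preds for y in labels])
    within_p = np.mean([d(p, q) for p in preds for q in preds])
    within_y = np.mean([d(y, z) for y in labels for z in labels])
    return 2 * cross - within_p - within_y
```

Unlike a single-mask Dice score, GED rewards models whose *distribution* of outputs matches the diversity of the annotators, which is why it is used for the multi-rater benchmarks above.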
