
MRL: Learning to Mix with Attention and Convolutions

About

In this paper, we present a new neural architectural block for the vision domain, named Mixing Regionally and Locally (MRL), designed to mix the provided input features both effectively and efficiently. We split the feature-mixing task into mixing at a regional scale and mixing at a local scale. To achieve an efficient mix, we exploit the domain-wide receptive field provided by self-attention for regional-scale mixing, and convolutional kernels restricted to a local scale for local-scale mixing. More specifically, our proposed method first mixes regional features associated with the local features within a defined region, followed by a local-scale feature mix augmented by the regional features. Experiments show that this hybridization of self-attention and convolution brings improved capacity, generalization (the right inductive bias), and efficiency. Under similar network settings, MRL outperforms or is on par with its counterparts on classification, object detection, and segmentation tasks. We also show that our MRL-based network architecture achieves state-of-the-art performance on H&E histology datasets, with Dice scores of 0.843, 0.855, and 0.892 on the Kumar, CoNSeP, and CPM-17 datasets, respectively, while highlighting the versatility of the MRL framework, which lets us incorporate layers such as group convolutions to improve dataset-specific generalization.
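The regional/local split described above can be sketched in a few lines of NumPy. The sketch below is an illustrative assumption, not the authors' implementation: region pooling, the single-head attention over region tokens, the broadcast-and-add fusion, and the 3x3 average filter standing in for a learned local convolution are all simplifications chosen to show the data flow (pool regions, self-attend globally, then mix locally with regional context added back).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mrl_block(x, region=2):
    """Hypothetical sketch of an MRL-style mix on a (H, W, C) feature map.

    Regional mixing: each region x region patch is pooled to one token and the
    tokens attend to each other (domain-wide receptive field).
    Local mixing: a 3x3 filter mixes pixels, augmented by the broadcast
    regional context. A fixed average filter stands in for a learned kernel.
    """
    H, W, C = x.shape
    # Regional-scale mixing: pool each region to a token, self-attend globally.
    tokens = x.reshape(H // region, region, W // region, region, C).mean(axis=(1, 3))
    t = tokens.reshape(-1, C)                        # (num_regions, C)
    attn = softmax(t @ t.T / np.sqrt(C))             # all regions see all regions
    mixed = (attn @ t).reshape(H // region, W // region, C)
    # Broadcast regional context back to pixel resolution.
    regional = np.repeat(np.repeat(mixed, region, axis=0), region, axis=1)
    # Local-scale mixing: 3x3 averaging over features augmented by context.
    pad = np.pad(x + regional, ((1, 1), (1, 1), (0, 0)), mode="edge")
    local = sum(pad[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0
    return local

out = mrl_block(np.ones((8, 8, 4)))  # shape is preserved: (8, 8, 4)
```

In a real block, the pooling, attention projections, and local kernel would be learned layers with residual connections and normalization; the sketch only mirrors the order of operations the abstract describes.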

Shlok Mohta, Hisahiro Suganuma, Yoshiki Tanaka • 2022

Related benchmarks

Task                          Dataset        Metric            Result  Rank
Nuclei Instance Segmentation  CoNSeP (test)  PQ                0.559   26
Nuclei Instance Segmentation  Kumar (test)   Dice Coefficient  0.843   11
Nuclei Instance Segmentation  CPM-17 (test)  DICE              89.2    11

Other info

Code
