
M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection

About

Most existing salient object detection methods use a U-Net or feature pyramid structure, which simply aggregates feature maps of different scales, ignoring their uniqueness and interdependence as well as their respective contributions to the final prediction. To overcome these limitations, we propose M$^3$Net, i.e., the Multilevel, Mixed and Multistage attention network for Salient Object Detection (SOD). First, we propose the Multiscale Interaction Block, which innovatively introduces cross-attention to achieve interaction between multilevel features, allowing high-level features to guide low-level feature learning and thus enhancing salient regions. Second, considering that previous Transformer-based SOD methods locate salient regions using only global self-attention, inevitably overlooking the details of complex objects, we propose the Mixed Attention Block. This block combines global self-attention and window self-attention, aiming to model context at both the global and local levels and further improve the accuracy of the prediction map. Finally, we propose a multilevel supervision strategy to optimize the aggregated features stage by stage. Experiments on six challenging datasets demonstrate that the proposed M$^3$Net surpasses recent CNN- and Transformer-based SOD methods in terms of four metrics. Code is available at https://github.com/I2-Multimedia-Lab/M3Net.
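The core idea of the Multiscale Interaction Block — high-level features guiding low-level features via cross-attention — can be illustrated with a minimal sketch. This is not the paper's implementation: it is a single-head, projection-free toy in NumPy (a real block would use learned query/key/value projections and multiple heads), where low-level tokens act as queries and high-level tokens supply the keys and values.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Simplified single-head cross-attention (no learned projections).

    queries:     (N_q, d) low-level feature tokens
    keys_values: (N_kv, d) high-level feature tokens
    Returns a (N_q, d) refinement of the queries, where each low-level
    token is a weighted mix of high-level tokens.
    """
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (N_q, N_kv) similarity
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ keys_values                    # (N_q, d)

rng = np.random.default_rng(0)
low_level = rng.standard_normal((16, 32))   # e.g. 16 fine-scale tokens
high_level = rng.standard_normal((4, 32))   # e.g. 4 coarse-scale tokens
refined = cross_attention(low_level, high_level)
print(refined.shape)  # (16, 32): same resolution as the low-level input
```

The output keeps the low-level resolution while its content is steered by the semantically stronger high-level tokens, which is the "guidance" effect the abstract describes.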

Yao Yuan, Pan Gao, Xiaoyang Tan • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Salient Object Detection | DUTS (test) | MAE | 0.024 | 302 |
| Salient Object Detection | PASCAL-S (test) | MAE | 0.047 | 149 |
| Salient Object Detection | HKU-IS (test) | MAE | 0.019 | 137 |
| Salient Object Detection | ECSSD (test) | S-measure (Sα) | 0.948 | 104 |
| Salient Object Detection | DUT-O (test) | F-measure (Fm) | 81.1 | 46 |
| Salient Object Detection | SOD (test) | Max F-score | 87.1 | 39 |
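The MAE scores in the table above are the mean absolute error between the predicted saliency map and the binary ground-truth mask, both normalized to [0, 1]; lower is better. A minimal sketch of the metric (the function name and NumPy formulation are mine, not from the benchmark code):

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error between a predicted saliency map and its
    ground-truth mask, both arrays of the same shape with values in [0, 1]."""
    assert pred.shape == gt.shape, "prediction and ground truth must align"
    return float(np.mean(np.abs(pred - gt)))

gt = np.zeros((4, 4)); gt[1:3, 1:3] = 1.0   # toy 4x4 mask
perfect = mae(gt, gt)                        # identical prediction
print(perfect)  # 0.0
```

A perfect prediction scores 0.0, and a prediction that is wrong everywhere by the full range scores 1.0, which is why the sub-0.05 values in the table indicate very close agreement with the ground truth.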
